From patchwork Wed Nov 8 12:53:02 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Juraj Linkeš
X-Patchwork-Id: 133973
X-Patchwork-Delegate: thomas@monjalon.net
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com,
 jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v6 01/23] dts: code adjustments for doc generation
Date: Wed, 8 Nov 2023 13:53:02 +0100
Message-Id: <20231108125324.191005-1-juraj.linkes@pantheon.tech>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231106171601.160749-1-juraj.linkes@pantheon.tech>
References: <20231106171601.160749-1-juraj.linkes@pantheon.tech>
MIME-Version: 1.0

The standard Python tool for generating API documentation, Sphinx, imports modules
one by one when generating the documentation. This requires code changes:

* properly guarding argument parsing in the if __name__ == '__main__' block,
* the logger used by the DTS runner underwent the same treatment so that it doesn't
  create log files outside of a DTS run,
* however, DTS uses the arguments to construct an object holding global variables.
  The defaults for the global variables needed to be moved out of argument parsing,
* importing the remote_session module from framework resulted in circular imports
  because of one module trying to import another module. This is fixed by
  reorganizing the code,
* some code reorganization was done because the resulting structure makes more sense,
  improving documentation clarity.

There are some other changes which are documentation related:

* added missing type annotations so they appear in the generated docs,
* reordered arguments in some methods,
* removed superfluous arguments and attributes,
* changed some private functions/methods/attributes to public and vice-versa.

All of the above appear in the generated documentation and, with them, the
documentation is improved.
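As a rough illustration of the first three bullets above (defaults living on an importable settings object, and argument parsing running only under the __main__ guard), here is a minimal, self-contained sketch. The names are simplified and hypothetical; the actual DTS changes are in the diff below.

import argparse
from dataclasses import dataclass, field
from pathlib import Path


@dataclass(slots=True)
class Settings:
    """Defaults live on the dataclass rather than in argument parsing,
    so importing this module (as Sphinx does) needs no command line."""

    config_file_path: Path = Path("conf.yaml")
    verbose: bool = False
    test_cases: list[str] = field(default_factory=list)


# Importable default instance; creating it has no side effects.
SETTINGS: Settings = Settings()


def get_settings() -> Settings:
    """Parse the command line into a fresh Settings object."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--config-file", type=Path, default=SETTINGS.config_file_path)
    parser.add_argument("-v", "--verbose", action="store_true")
    args = parser.parse_args()
    return Settings(config_file_path=args.config_file, verbose=args.verbose)


if __name__ == "__main__":
    # Argument parsing is guarded: it runs only in an actual run,
    # never when a documentation generator merely imports the module.
    SETTINGS = get_settings()
    print(SETTINGS)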
Signed-off-by: Juraj Linkeš --- dts/framework/config/__init__.py | 10 ++- dts/framework/dts.py | 33 +++++-- dts/framework/exception.py | 54 +++++------- dts/framework/remote_session/__init__.py | 41 ++++----- .../interactive_remote_session.py | 0 .../{remote => }/interactive_shell.py | 0 .../{remote => }/python_shell.py | 0 .../remote_session/remote/__init__.py | 27 ------ .../{remote => }/remote_session.py | 0 .../{remote => }/ssh_session.py | 12 +-- .../{remote => }/testpmd_shell.py | 0 dts/framework/settings.py | 87 +++++++++++-------- dts/framework/test_result.py | 4 +- dts/framework/test_suite.py | 7 +- dts/framework/testbed_model/__init__.py | 12 +-- dts/framework/testbed_model/{hw => }/cpu.py | 13 +++ dts/framework/testbed_model/hw/__init__.py | 27 ------ .../linux_session.py | 6 +- dts/framework/testbed_model/node.py | 26 ++++-- .../os_session.py | 22 ++--- dts/framework/testbed_model/{hw => }/port.py | 0 .../posix_session.py | 4 +- dts/framework/testbed_model/sut_node.py | 8 +- dts/framework/testbed_model/tg_node.py | 30 +------ .../traffic_generator/__init__.py | 24 +++++ .../capturing_traffic_generator.py | 6 +- .../{ => traffic_generator}/scapy.py | 23 ++--- .../traffic_generator.py | 16 +++- .../testbed_model/{hw => }/virtual_device.py | 0 dts/framework/utils.py | 46 +++------- dts/main.py | 9 +- 31 files changed, 259 insertions(+), 288 deletions(-) rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (100%) rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%) rename dts/framework/remote_session/{remote => }/python_shell.py (100%) delete mode 100644 dts/framework/remote_session/remote/__init__.py rename dts/framework/remote_session/{remote => }/remote_session.py (100%) rename dts/framework/remote_session/{remote => }/ssh_session.py (91%) rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%) rename dts/framework/testbed_model/{hw => }/cpu.py (95%) delete mode 100644 dts/framework/testbed_model/hw/__init__.py rename dts/framework/{remote_session => testbed_model}/linux_session.py (97%) rename dts/framework/{remote_session => testbed_model}/os_session.py (95%) rename dts/framework/testbed_model/{hw => }/port.py (100%) rename dts/framework/{remote_session => testbed_model}/posix_session.py (98%) create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (96%) rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (95%) rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (80%) rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%) diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py index cb7e00ba34..2044c82611 100644 --- a/dts/framework/config/__init__.py +++ b/dts/framework/config/__init__.py @@ -17,6 +17,7 @@ import warlock # type: ignore[import] import yaml +from framework.exception import ConfigurationError from framework.settings import SETTINGS from framework.utils import StrEnum @@ -89,7 +90,7 @@ class TrafficGeneratorConfig: traffic_generator_type: TrafficGeneratorType @staticmethod - def from_dict(d: dict): + def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig": # This looks useless now, but is designed to allow expansion to traffic # generators that require more configuration later. 
match TrafficGeneratorType(d["type"]): @@ -97,6 +98,10 @@ def from_dict(d: dict): return ScapyTrafficGeneratorConfig( traffic_generator_type=TrafficGeneratorType.SCAPY ) + case _: + raise ConfigurationError( + f'Unknown traffic generator type "{d["type"]}".' + ) @dataclass(slots=True, frozen=True) @@ -324,6 +329,3 @@ def load_config() -> Configuration: config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data) config_obj: Configuration = Configuration.from_dict(dict(config)) return config_obj - - -CONFIGURATION = load_config() diff --git a/dts/framework/dts.py b/dts/framework/dts.py index f773f0c38d..4c7fb0c40a 100644 --- a/dts/framework/dts.py +++ b/dts/framework/dts.py @@ -6,19 +6,19 @@ import sys from .config import ( - CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration, TestSuiteConfig, + load_config, ) from .exception import BlockingTestSuiteError from .logger import DTSLOG, getLogger from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result from .test_suite import get_test_suites from .testbed_model import SutNode, TGNode -from .utils import check_dts_python_version -dts_logger: DTSLOG = getLogger("DTSRunner") +# dummy defaults to satisfy linters +dts_logger: DTSLOG = None # type: ignore[assignment] result: DTSResult = DTSResult(dts_logger) @@ -30,14 +30,18 @@ def run_all() -> None: global dts_logger global result + # create a regular DTS logger and create a new result with it + dts_logger = getLogger("DTSRunner") + result = DTSResult(dts_logger) + # check the python version of the server that run dts - check_dts_python_version() + _check_dts_python_version() sut_nodes: dict[str, SutNode] = {} tg_nodes: dict[str, TGNode] = {} try: # for all Execution sections - for execution in CONFIGURATION.executions: + for execution in load_config().executions: sut_node = sut_nodes.get(execution.system_under_test_node.name) tg_node = tg_nodes.get(execution.traffic_generator_node.name) @@ -82,6 +86,25 @@ def run_all() -> None: _exit_dts() +def _check_dts_python_version() -> None: + def RED(text: str) -> str: + return f"\u001B[31;1m{str(text)}\u001B[0m" + + if sys.version_info.major < 3 or ( + sys.version_info.major == 3 and sys.version_info.minor < 10 + ): + print( + RED( + ( + "WARNING: DTS execution node's python version is lower than" + "python 3.10, is deprecated and will not work in future releases." + ) + ), + file=sys.stderr, + ) + print(RED("Please use Python >= 3.10 instead"), file=sys.stderr) + + def _run_execution( sut_node: SutNode, tg_node: TGNode, diff --git a/dts/framework/exception.py b/dts/framework/exception.py index 001a5a5496..7489c03570 100644 --- a/dts/framework/exception.py +++ b/dts/framework/exception.py @@ -42,19 +42,14 @@ class SSHTimeoutError(DTSError): Command execution timeout. """ - command: str - output: str severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR + _command: str - def __init__(self, command: str, output: str): - self.command = command - self.output = output + def __init__(self, command: str): + self._command = command def __str__(self) -> str: - return f"TIMEOUT on {self.command}" - - def get_output(self) -> str: - return self.output + return f"TIMEOUT on {self._command}" class SSHConnectionError(DTSError): @@ -62,18 +57,18 @@ class SSHConnectionError(DTSError): SSH connection error. 
""" - host: str - errors: list[str] severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR + _host: str + _errors: list[str] def __init__(self, host: str, errors: list[str] | None = None): - self.host = host - self.errors = [] if errors is None else errors + self._host = host + self._errors = [] if errors is None else errors def __str__(self) -> str: - message = f"Error trying to connect with {self.host}." - if self.errors: - message += f" Errors encountered while retrying: {', '.join(self.errors)}" + message = f"Error trying to connect with {self._host}." + if self._errors: + message += f" Errors encountered while retrying: {', '.join(self._errors)}" return message @@ -84,14 +79,14 @@ class SSHSessionDeadError(DTSError): It can no longer be used. """ - host: str severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR + _host: str def __init__(self, host: str): - self.host = host + self._host = host def __str__(self) -> str: - return f"SSH session with {self.host} has died" + return f"SSH session with {self._host} has died" class ConfigurationError(DTSError): @@ -107,18 +102,18 @@ class RemoteCommandExecutionError(DTSError): Raised when a command executed on a Node returns a non-zero exit status. """ - command: str - command_return_code: int severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR + command: str + _command_return_code: int def __init__(self, command: str, command_return_code: int): self.command = command - self.command_return_code = command_return_code + self._command_return_code = command_return_code def __str__(self) -> str: return ( f"Command {self.command} returned a non-zero exit code: " - f"{self.command_return_code}" + f"{self._command_return_code}" ) @@ -143,22 +138,15 @@ class TestCaseVerifyError(DTSError): Used in test cases to verify the expected behavior. """ - value: str severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR - def __init__(self, value: str): - self.value = value - - def __str__(self) -> str: - return repr(self.value) - class BlockingTestSuiteError(DTSError): - suite_name: str severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR + _suite_name: str def __init__(self, suite_name: str) -> None: - self.suite_name = suite_name + self._suite_name = suite_name def __str__(self) -> str: - return f"Blocking suite {self.suite_name} failed." + return f"Blocking suite {self._suite_name} failed." 
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py index 00b6d1f03a..5e7ddb2b05 100644 --- a/dts/framework/remote_session/__init__.py +++ b/dts/framework/remote_session/__init__.py @@ -12,29 +12,24 @@ # pylama:ignore=W0611 -from framework.config import OS, NodeConfiguration -from framework.exception import ConfigurationError +from framework.config import NodeConfiguration from framework.logger import DTSLOG -from .linux_session import LinuxSession -from .os_session import InteractiveShellType, OSSession -from .remote import ( - CommandResult, - InteractiveRemoteSession, - InteractiveShell, - PythonShell, - RemoteSession, - SSHSession, - TestPmdDevice, - TestPmdShell, -) - - -def create_session( +from .interactive_remote_session import InteractiveRemoteSession +from .interactive_shell import InteractiveShell +from .python_shell import PythonShell +from .remote_session import CommandResult, RemoteSession +from .ssh_session import SSHSession +from .testpmd_shell import TestPmdShell + + +def create_remote_session( node_config: NodeConfiguration, name: str, logger: DTSLOG -) -> OSSession: - match node_config.os: - case OS.linux: - return LinuxSession(node_config, name, logger) - case _: - raise ConfigurationError(f"Unsupported OS {node_config.os}") +) -> RemoteSession: + return SSHSession(node_config, name, logger) + + +def create_interactive_session( + node_config: NodeConfiguration, logger: DTSLOG +) -> InteractiveRemoteSession: + return InteractiveRemoteSession(node_config, logger) diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py similarity index 100% rename from dts/framework/remote_session/remote/interactive_remote_session.py rename to dts/framework/remote_session/interactive_remote_session.py diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py similarity index 100% rename from dts/framework/remote_session/remote/interactive_shell.py rename to dts/framework/remote_session/interactive_shell.py diff --git a/dts/framework/remote_session/remote/python_shell.py b/dts/framework/remote_session/python_shell.py similarity index 100% rename from dts/framework/remote_session/remote/python_shell.py rename to dts/framework/remote_session/python_shell.py diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py deleted file mode 100644 index 06403691a5..0000000000 --- a/dts/framework/remote_session/remote/__init__.py +++ /dev/null @@ -1,27 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2023 PANTHEON.tech s.r.o. 
-# Copyright(c) 2023 University of New Hampshire - -# pylama:ignore=W0611 - -from framework.config import NodeConfiguration -from framework.logger import DTSLOG - -from .interactive_remote_session import InteractiveRemoteSession -from .interactive_shell import InteractiveShell -from .python_shell import PythonShell -from .remote_session import CommandResult, RemoteSession -from .ssh_session import SSHSession -from .testpmd_shell import TestPmdDevice, TestPmdShell - - -def create_remote_session( - node_config: NodeConfiguration, name: str, logger: DTSLOG -) -> RemoteSession: - return SSHSession(node_config, name, logger) - - -def create_interactive_session( - node_config: NodeConfiguration, logger: DTSLOG -) -> InteractiveRemoteSession: - return InteractiveRemoteSession(node_config, logger) diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py similarity index 100% rename from dts/framework/remote_session/remote/remote_session.py rename to dts/framework/remote_session/remote_session.py diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py similarity index 91% rename from dts/framework/remote_session/remote/ssh_session.py rename to dts/framework/remote_session/ssh_session.py index 8d127f1601..cee11d14d6 100644 --- a/dts/framework/remote_session/remote/ssh_session.py +++ b/dts/framework/remote_session/ssh_session.py @@ -18,9 +18,7 @@ SSHException, ) -from framework.config import NodeConfiguration from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError -from framework.logger import DTSLOG from .remote_session import CommandResult, RemoteSession @@ -45,14 +43,6 @@ class SSHSession(RemoteSession): session: Connection - def __init__( - self, - node_config: NodeConfiguration, - session_name: str, - logger: DTSLOG, - ): - super(SSHSession, self).__init__(node_config, session_name, logger) - def _connect(self) -> None: errors = [] retry_attempts = 10 @@ -117,7 +107,7 @@ def _send_command( except CommandTimedOut as e: self._logger.exception(e) - raise SSHTimeoutError(command, e.result.stderr) from e + raise SSHTimeoutError(command) from e return CommandResult( self.name, command, output.stdout, output.stderr, output.return_code diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py similarity index 100% rename from dts/framework/remote_session/remote/testpmd_shell.py rename to dts/framework/remote_session/testpmd_shell.py diff --git a/dts/framework/settings.py b/dts/framework/settings.py index cfa39d011b..7f5841d073 100644 --- a/dts/framework/settings.py +++ b/dts/framework/settings.py @@ -6,7 +6,7 @@ import argparse import os from collections.abc import Callable, Iterable, Sequence -from dataclasses import dataclass +from dataclasses import dataclass, field from pathlib import Path from typing import Any, TypeVar @@ -22,8 +22,8 @@ def __init__( option_strings: Sequence[str], dest: str, nargs: str | int | None = None, - const: str | None = None, - default: str = None, + const: bool | None = None, + default: Any = None, type: Callable[[str], _T | argparse.FileType | None] = None, choices: Iterable[_T] | None = None, required: bool = False, @@ -32,6 +32,12 @@ def __init__( ) -> None: env_var_value = os.environ.get(env_var) default = env_var_value or default + if const is not None: + nargs = 0 + default = const if env_var_value else default + type = None + choices = None + metavar = None 
super(_EnvironmentArgument, self).__init__( option_strings, dest, @@ -52,22 +58,28 @@ def __call__( values: Any, option_string: str = None, ) -> None: - setattr(namespace, self.dest, values) + if self.const is not None: + setattr(namespace, self.dest, self.const) + else: + setattr(namespace, self.dest, values) return _EnvironmentArgument -@dataclass(slots=True, frozen=True) -class _Settings: - config_file_path: str - output_dir: str - timeout: float - verbose: bool - skip_setup: bool - dpdk_tarball_path: Path - compile_timeout: float - test_cases: list - re_run: int +@dataclass(slots=True) +class Settings: + config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml") + output_dir: str = "output" + timeout: float = 15 + verbose: bool = False + skip_setup: bool = False + dpdk_tarball_path: Path | str = "dpdk.tar.xz" + compile_timeout: float = 1200 + test_cases: list[str] = field(default_factory=list) + re_run: int = 0 + + +SETTINGS: Settings = Settings() def _get_parser() -> argparse.ArgumentParser: @@ -81,7 +93,8 @@ def _get_parser() -> argparse.ArgumentParser: parser.add_argument( "--config-file", action=_env_arg("DTS_CFG_FILE"), - default="conf.yaml", + default=SETTINGS.config_file_path, + type=Path, help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs " "and targets.", ) @@ -90,7 +103,7 @@ def _get_parser() -> argparse.ArgumentParser: "--output-dir", "--output", action=_env_arg("DTS_OUTPUT_DIR"), - default="output", + default=SETTINGS.output_dir, help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.", ) @@ -98,7 +111,7 @@ def _get_parser() -> argparse.ArgumentParser: "-t", "--timeout", action=_env_arg("DTS_TIMEOUT"), - default=15, + default=SETTINGS.timeout, type=float, help="[DTS_TIMEOUT] The default timeout for all DTS operations except for " "compiling DPDK.", @@ -108,8 +121,9 @@ def _get_parser() -> argparse.ArgumentParser: "-v", "--verbose", action=_env_arg("DTS_VERBOSE"), - default="N", - help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages " + default=SETTINGS.verbose, + const=True, + help="[DTS_VERBOSE] Specify to enable verbose output, logging all messages " "to the console.", ) @@ -117,8 +131,8 @@ def _get_parser() -> argparse.ArgumentParser: "-s", "--skip-setup", action=_env_arg("DTS_SKIP_SETUP"), - default="N", - help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.", + const=True, + help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes.", ) parser.add_argument( @@ -126,7 +140,7 @@ def _get_parser() -> argparse.ArgumentParser: "--snapshot", "--git-ref", action=_env_arg("DTS_DPDK_TARBALL"), - default="dpdk.tar.xz", + default=SETTINGS.dpdk_tarball_path, type=Path, help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, " "tag ID or tree ID to test. 
To test local changes, first commit them, " @@ -136,7 +150,7 @@ def _get_parser() -> argparse.ArgumentParser: parser.add_argument( "--compile-timeout", action=_env_arg("DTS_COMPILE_TIMEOUT"), - default=1200, + default=SETTINGS.compile_timeout, type=float, help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.", ) @@ -153,7 +167,7 @@ def _get_parser() -> argparse.ArgumentParser: "--re-run", "--re_run", action=_env_arg("DTS_RERUN"), - default=0, + default=SETTINGS.re_run, type=int, help="[DTS_RERUN] Re-run each test case the specified amount of times " "if a test failure occurs", @@ -162,23 +176,22 @@ def _get_parser() -> argparse.ArgumentParser: return parser -def _get_settings() -> _Settings: +def get_settings() -> Settings: parsed_args = _get_parser().parse_args() - return _Settings( + return Settings( config_file_path=parsed_args.config_file, output_dir=parsed_args.output_dir, timeout=parsed_args.timeout, - verbose=(parsed_args.verbose == "Y"), - skip_setup=(parsed_args.skip_setup == "Y"), + verbose=parsed_args.verbose, + skip_setup=parsed_args.skip_setup, dpdk_tarball_path=Path( - DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir) - ) - if not os.path.exists(parsed_args.tarball) - else Path(parsed_args.tarball), + Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir)) + if not os.path.exists(parsed_args.tarball) + else Path(parsed_args.tarball) + ), compile_timeout=parsed_args.compile_timeout, - test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [], + test_cases=( + parsed_args.test_cases.split(",") if parsed_args.test_cases else [] + ), re_run=parsed_args.re_run, ) - - -SETTINGS: _Settings = _get_settings() diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index f0fbe80f6f..603e18872c 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -254,7 +254,7 @@ def add_build_target( self._inner_results.append(build_target_result) return build_target_result - def add_sut_info(self, sut_info: NodeInfo): + def add_sut_info(self, sut_info: NodeInfo) -> None: self.sut_os_name = sut_info.os_name self.sut_os_version = sut_info.os_version self.sut_kernel_version = sut_info.kernel_version @@ -297,7 +297,7 @@ def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult: self._inner_results.append(execution_result) return execution_result - def add_error(self, error) -> None: + def add_error(self, error: Exception) -> None: self._errors.append(error) def process(self) -> None: diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index 3b890c0451..d53553bf34 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -11,7 +11,7 @@ import re from ipaddress import IPv4Interface, IPv6Interface, ip_interface from types import MethodType -from typing import Union +from typing import Any, Union from scapy.layers.inet import IP # type: ignore[import] from scapy.layers.l2 import Ether # type: ignore[import] @@ -26,8 +26,7 @@ from .logger import DTSLOG, getLogger from .settings import SETTINGS from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult -from .testbed_model import SutNode, TGNode -from .testbed_model.hw.port import Port, PortLink +from .testbed_model import Port, PortLink, SutNode, TGNode from .utils import get_packet_summaries @@ -453,7 +452,7 @@ def _execute_test_case( def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]: - def is_test_suite(object) -> bool: + def is_test_suite(object: Any) -> bool: try: if 
issubclass(object, TestSuite) and object is not TestSuite: return True diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py index 5cbb859e47..8ced05653b 100644 --- a/dts/framework/testbed_model/__init__.py +++ b/dts/framework/testbed_model/__init__.py @@ -9,15 +9,9 @@ # pylama:ignore=W0611 -from .hw import ( - LogicalCore, - LogicalCoreCount, - LogicalCoreCountFilter, - LogicalCoreList, - LogicalCoreListFilter, - VirtualDevice, - lcore_filter, -) +from .cpu import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList from .node import Node +from .port import Port, PortLink from .sut_node import SutNode from .tg_node import TGNode +from .virtual_device import VirtualDevice diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/cpu.py similarity index 95% rename from dts/framework/testbed_model/hw/cpu.py rename to dts/framework/testbed_model/cpu.py index d1918a12dc..8fe785dfe4 100644 --- a/dts/framework/testbed_model/hw/cpu.py +++ b/dts/framework/testbed_model/cpu.py @@ -272,3 +272,16 @@ def filter(self) -> list[LogicalCore]: ) return filtered_lcores + + +def lcore_filter( + core_list: list[LogicalCore], + filter_specifier: LogicalCoreCount | LogicalCoreList, + ascending: bool, +) -> LogicalCoreFilter: + if isinstance(filter_specifier, LogicalCoreList): + return LogicalCoreListFilter(core_list, filter_specifier, ascending) + elif isinstance(filter_specifier, LogicalCoreCount): + return LogicalCoreCountFilter(core_list, filter_specifier, ascending) + else: + raise ValueError(f"Unsupported filter r{filter_specifier}") diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py deleted file mode 100644 index 88ccac0b0e..0000000000 --- a/dts/framework/testbed_model/hw/__init__.py +++ /dev/null @@ -1,27 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2023 PANTHEON.tech s.r.o. 
- -# pylama:ignore=W0611 - -from .cpu import ( - LogicalCore, - LogicalCoreCount, - LogicalCoreCountFilter, - LogicalCoreFilter, - LogicalCoreList, - LogicalCoreListFilter, -) -from .virtual_device import VirtualDevice - - -def lcore_filter( - core_list: list[LogicalCore], - filter_specifier: LogicalCoreCount | LogicalCoreList, - ascending: bool, -) -> LogicalCoreFilter: - if isinstance(filter_specifier, LogicalCoreList): - return LogicalCoreListFilter(core_list, filter_specifier, ascending) - elif isinstance(filter_specifier, LogicalCoreCount): - return LogicalCoreCountFilter(core_list, filter_specifier, ascending) - else: - raise ValueError(f"Unsupported filter r{filter_specifier}") diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/testbed_model/linux_session.py similarity index 97% rename from dts/framework/remote_session/linux_session.py rename to dts/framework/testbed_model/linux_session.py index a3f1a6bf3b..f472bb8f0f 100644 --- a/dts/framework/remote_session/linux_session.py +++ b/dts/framework/testbed_model/linux_session.py @@ -9,10 +9,10 @@ from typing_extensions import NotRequired from framework.exception import RemoteCommandExecutionError -from framework.testbed_model import LogicalCore -from framework.testbed_model.hw.port import Port from framework.utils import expand_range +from .cpu import LogicalCore +from .port import Port from .posix_session import PosixSession @@ -64,7 +64,7 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: lcores.append(LogicalCore(lcore, core, socket, node)) return lcores - def get_dpdk_file_prefix(self, dpdk_prefix) -> str: + def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: return dpdk_prefix def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None: diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py index fc01e0bf8e..7571e7b98d 100644 --- a/dts/framework/testbed_model/node.py +++ b/dts/framework/testbed_model/node.py @@ -12,23 +12,26 @@ from typing import Any, Callable, Type, Union from framework.config import ( + OS, BuildTargetConfiguration, ExecutionConfiguration, NodeConfiguration, ) +from framework.exception import ConfigurationError from framework.logger import DTSLOG, getLogger -from framework.remote_session import InteractiveShellType, OSSession, create_session from framework.settings import SETTINGS -from .hw import ( +from .cpu import ( LogicalCore, LogicalCoreCount, LogicalCoreList, LogicalCoreListFilter, - VirtualDevice, lcore_filter, ) -from .hw.port import Port +from .linux_session import LinuxSession +from .os_session import InteractiveShellType, OSSession +from .port import Port +from .virtual_device import VirtualDevice class Node(ABC): @@ -69,6 +72,7 @@ def __init__(self, node_config: NodeConfiguration): def _init_ports(self) -> None: self.ports = [Port(self.name, port_config) for port_config in self.config.ports] self.main_session.update_ports(self.ports) + for port in self.ports: self.configure_port_state(port) @@ -172,9 +176,9 @@ def create_interactive_shell( return self.main_session.create_interactive_shell( shell_cls, - app_args, timeout, privileged, + app_args, ) def filter_lcores( @@ -205,7 +209,7 @@ def _get_remote_cpus(self) -> None: self._logger.info("Getting CPU information.") self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core) - def _setup_hugepages(self): + def _setup_hugepages(self) -> None: """ Setup hugepages on the Node. 
Different architectures can supply different amounts of memory for hugepages and numa-based hugepage allocation may need @@ -249,3 +253,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: return lambda *args: None else: return func + + +def create_session( + node_config: NodeConfiguration, name: str, logger: DTSLOG +) -> OSSession: + match node_config.os: + case OS.linux: + return LinuxSession(node_config, name, logger) + case _: + raise ConfigurationError(f"Unsupported OS {node_config.os}") diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/testbed_model/os_session.py similarity index 95% rename from dts/framework/remote_session/os_session.py rename to dts/framework/testbed_model/os_session.py index 8a709eac1c..76e595a518 100644 --- a/dts/framework/remote_session/os_session.py +++ b/dts/framework/testbed_model/os_session.py @@ -10,19 +10,19 @@ from framework.config import Architecture, NodeConfiguration, NodeInfo from framework.logger import DTSLOG -from framework.remote_session.remote import InteractiveShell -from framework.settings import SETTINGS -from framework.testbed_model import LogicalCore -from framework.testbed_model.hw.port import Port -from framework.utils import MesonArgs - -from .remote import ( +from framework.remote_session import ( CommandResult, InteractiveRemoteSession, + InteractiveShell, RemoteSession, create_interactive_session, create_remote_session, ) +from framework.settings import SETTINGS +from framework.utils import MesonArgs + +from .cpu import LogicalCore +from .port import Port InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell) @@ -85,9 +85,9 @@ def send_command( def create_interactive_shell( self, shell_cls: Type[InteractiveShellType], - eal_parameters: str, timeout: float, privileged: bool, + app_args: str, ) -> InteractiveShellType: """ See "create_interactive_shell" in SutNode @@ -96,7 +96,7 @@ def create_interactive_shell( self.interactive_session.session, self._logger, self._get_privileged_command if privileged else None, - eal_parameters, + app_args, timeout, ) @@ -113,7 +113,7 @@ def _get_privileged_command(command: str) -> str: """ @abstractmethod - def guess_dpdk_remote_dir(self, remote_dir) -> PurePath: + def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath: """ Try to find DPDK remote dir in remote_dir. """ @@ -227,7 +227,7 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None: """ @abstractmethod - def get_dpdk_file_prefix(self, dpdk_prefix) -> str: + def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: """ Get the DPDK file prefix that will be used when running DPDK apps. 
""" diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/port.py similarity index 100% rename from dts/framework/testbed_model/hw/port.py rename to dts/framework/testbed_model/port.py diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/testbed_model/posix_session.py similarity index 98% rename from dts/framework/remote_session/posix_session.py rename to dts/framework/testbed_model/posix_session.py index 5da0516e05..1d1d5b1b26 100644 --- a/dts/framework/remote_session/posix_session.py +++ b/dts/framework/testbed_model/posix_session.py @@ -32,7 +32,7 @@ def combine_short_options(**opts: bool) -> str: return ret_opts - def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath: + def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath: remote_guess = self.join_remote_path(remote_dir, "dpdk-*") result = self.send_command(f"ls -d {remote_guess} | tail -1") return PurePosixPath(result.stdout) @@ -219,7 +219,7 @@ def _remove_dpdk_runtime_dirs( for dpdk_runtime_dir in dpdk_runtime_dirs: self.remove_remote_dir(dpdk_runtime_dir) - def get_dpdk_file_prefix(self, dpdk_prefix) -> str: + def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: return "" def get_compiler_version(self, compiler_name: str) -> str: diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py index 202aebfd06..4e33cf02ea 100644 --- a/dts/framework/testbed_model/sut_node.py +++ b/dts/framework/testbed_model/sut_node.py @@ -15,12 +15,14 @@ NodeInfo, SutNodeConfiguration, ) -from framework.remote_session import CommandResult, InteractiveShellType, OSSession +from framework.remote_session import CommandResult from framework.settings import SETTINGS from framework.utils import MesonArgs -from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice +from .cpu import LogicalCoreCount, LogicalCoreList from .node import Node +from .os_session import InteractiveShellType, OSSession +from .virtual_device import VirtualDevice class EalParameters(object): @@ -289,7 +291,7 @@ def create_eal_parameters( prefix: str = "dpdk", append_prefix_timestamp: bool = True, no_pci: bool = False, - vdevs: list[VirtualDevice] = None, + vdevs: list[VirtualDevice] | None = None, other_eal_param: str = "", ) -> "EalParameters": """ diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py index 27025cfa31..166eb8430e 100644 --- a/dts/framework/testbed_model/tg_node.py +++ b/dts/framework/testbed_model/tg_node.py @@ -16,16 +16,11 @@ from scapy.packet import Packet # type: ignore[import] -from framework.config import ( - ScapyTrafficGeneratorConfig, - TGNodeConfiguration, - TrafficGeneratorType, -) -from framework.exception import ConfigurationError - -from .capturing_traffic_generator import CapturingTrafficGenerator -from .hw.port import Port +from framework.config import TGNodeConfiguration + from .node import Node +from .port import Port +from .traffic_generator import CapturingTrafficGenerator, create_traffic_generator class TGNode(Node): @@ -80,20 +75,3 @@ def close(self) -> None: """Free all resources used by the node""" self.traffic_generator.close() super(TGNode, self).close() - - -def create_traffic_generator( - tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig -) -> CapturingTrafficGenerator: - """A factory function for creating traffic generator object from user config.""" - - from .scapy import ScapyTrafficGenerator - - match traffic_generator_config.traffic_generator_type: - 
case TrafficGeneratorType.SCAPY: - return ScapyTrafficGenerator(tg_node, traffic_generator_config) - case _: - raise ConfigurationError( - "Unknown traffic generator: " - f"{traffic_generator_config.traffic_generator_type}" - ) diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py new file mode 100644 index 0000000000..11bfa1ee0f --- /dev/null +++ b/dts/framework/testbed_model/traffic_generator/__init__.py @@ -0,0 +1,24 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 PANTHEON.tech s.r.o. + +from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType +from framework.exception import ConfigurationError +from framework.testbed_model.node import Node + +from .capturing_traffic_generator import CapturingTrafficGenerator +from .scapy import ScapyTrafficGenerator + + +def create_traffic_generator( + tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig +) -> CapturingTrafficGenerator: + """A factory function for creating traffic generator object from user config.""" + + match traffic_generator_config.traffic_generator_type: + case TrafficGeneratorType.SCAPY: + return ScapyTrafficGenerator(tg_node, traffic_generator_config) + case _: + raise ConfigurationError( + "Unknown traffic generator: " + f"{traffic_generator_config.traffic_generator_type}" + ) diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py similarity index 96% rename from dts/framework/testbed_model/capturing_traffic_generator.py rename to dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py index ab98987f8e..e521211ef0 100644 --- a/dts/framework/testbed_model/capturing_traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py @@ -16,9 +16,9 @@ from scapy.packet import Packet # type: ignore[import] from framework.settings import SETTINGS +from framework.testbed_model.port import Port from framework.utils import get_packet_summaries -from .hw.port import Port from .traffic_generator import TrafficGenerator @@ -130,7 +130,9 @@ def _send_packets_and_capture( for the specified duration. It must be able to handle no received packets. 
""" - def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]): + def _write_capture_from_packets( + self, capture_name: str, packets: list[Packet] + ) -> None: file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap" self._logger.debug(f"Writing packets to {file_name}.") scapy.utils.wrpcap(file_name, packets) diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py similarity index 95% rename from dts/framework/testbed_model/scapy.py rename to dts/framework/testbed_model/traffic_generator/scapy.py index af0d4dbb25..51864b6e6b 100644 --- a/dts/framework/testbed_model/scapy.py +++ b/dts/framework/testbed_model/traffic_generator/scapy.py @@ -24,16 +24,15 @@ from scapy.packet import Packet # type: ignore[import] from framework.config import OS, ScapyTrafficGeneratorConfig -from framework.logger import DTSLOG, getLogger from framework.remote_session import PythonShell from framework.settings import SETTINGS +from framework.testbed_model.node import Node +from framework.testbed_model.port import Port from .capturing_traffic_generator import ( CapturingTrafficGenerator, _get_default_capture_name, ) -from .hw.port import Port -from .tg_node import TGNode """ ========= BEGIN RPC FUNCTIONS ========= @@ -146,7 +145,7 @@ def quit(self) -> None: self._BaseServer__shutdown_request = True return None - def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary): + def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None: """Add a function to the server. This is meant to be executed remotely. @@ -191,15 +190,9 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator): session: PythonShell rpc_server_proxy: xmlrpc.client.ServerProxy _config: ScapyTrafficGeneratorConfig - _tg_node: TGNode - _logger: DTSLOG - - def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig): - self._config = config - self._tg_node = tg_node - self._logger = getLogger( - f"{self._tg_node.name} {self._config.traffic_generator_type}" - ) + + def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig): + super().__init__(tg_node, config) assert ( self._tg_node.config.os == OS.linux @@ -235,7 +228,7 @@ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig): function_bytes = marshal.dumps(function.__code__) self.rpc_server_proxy.add_rpc_function(function.__name__, function_bytes) - def _start_xmlrpc_server_in_remote_python(self, listen_port: int): + def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None: # load the source of the function src = inspect.getsource(QuittableXMLRPCServer) # Lines with only whitespace break the repl if in the middle of a function @@ -280,7 +273,7 @@ def _send_packets_and_capture( scapy_packets = [Ether(packet.data) for packet in xmlrpc_packets] return scapy_packets - def close(self): + def close(self) -> None: try: self.rpc_server_proxy.quit() except ConnectionRefusedError: diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py similarity index 80% rename from dts/framework/testbed_model/traffic_generator.py rename to dts/framework/testbed_model/traffic_generator/traffic_generator.py index 28c35d3ce4..ea7c3963da 100644 --- a/dts/framework/testbed_model/traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py @@ -12,11 +12,12 @@ from scapy.packet import Packet # type: ignore[import] -from framework.logger import DTSLOG 
+from framework.config import TrafficGeneratorConfig +from framework.logger import DTSLOG, getLogger +from framework.testbed_model.node import Node +from framework.testbed_model.port import Port from framework.utils import get_packet_summaries -from .hw.port import Port - class TrafficGenerator(ABC): """The base traffic generator. @@ -24,8 +25,17 @@ class TrafficGenerator(ABC): Defines the few basic methods that each traffic generator must implement. """ + _config: TrafficGeneratorConfig + _tg_node: Node _logger: DTSLOG + def __init__(self, tg_node: Node, config: TrafficGeneratorConfig): + self._config = config + self._tg_node = tg_node + self._logger = getLogger( + f"{self._tg_node.name} {self._config.traffic_generator_type}" + ) + def send_packet(self, packet: Packet, port: Port) -> None: """Send a packet and block until it is fully sent. diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py similarity index 100% rename from dts/framework/testbed_model/hw/virtual_device.py rename to dts/framework/testbed_model/virtual_device.py diff --git a/dts/framework/utils.py b/dts/framework/utils.py index d27c2c5b5f..f0c916471c 100644 --- a/dts/framework/utils.py +++ b/dts/framework/utils.py @@ -7,7 +7,6 @@ import json import os import subprocess -import sys from enum import Enum from pathlib import Path from subprocess import SubprocessError @@ -16,35 +15,7 @@ from .exception import ConfigurationError - -class StrEnum(Enum): - @staticmethod - def _generate_next_value_( - name: str, start: int, count: int, last_values: object - ) -> str: - return name - - def __str__(self) -> str: - return self.name - - -REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/" - - -def check_dts_python_version() -> None: - if sys.version_info.major < 3 or ( - sys.version_info.major == 3 and sys.version_info.minor < 10 - ): - print( - RED( - ( - "WARNING: DTS execution node's python version is lower than" - "python 3.10, is deprecated and will not work in future releases." - ) - ), - file=sys.stderr, - ) - print(RED("Please use Python >= 3.10 instead"), file=sys.stderr) +REGEX_FOR_PCI_ADDRESS: str = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/" def expand_range(range_str: str) -> list[int]: @@ -67,7 +38,7 @@ def expand_range(range_str: str) -> list[int]: return expanded_range -def get_packet_summaries(packets: list[Packet]): +def get_packet_summaries(packets: list[Packet]) -> str: if len(packets) == 1: packet_summaries = packets[0].summary() else: @@ -77,8 +48,15 @@ def get_packet_summaries(packets: list[Packet]): return f"Packet contents: \n{packet_summaries}" -def RED(text: str) -> str: - return f"\u001B[31;1m{str(text)}\u001B[0m" +class StrEnum(Enum): + @staticmethod + def _generate_next_value_( + name: str, start: int, count: int, last_values: object + ) -> str: + return name + + def __str__(self) -> str: + return self.name class MesonArgs(object): @@ -225,5 +203,5 @@ def _delete_tarball(self) -> None: if self._tarball_path and os.path.exists(self._tarball_path): os.remove(self._tarball_path) - def __fspath__(self): + def __fspath__(self) -> str: return str(self._tarball_path) diff --git a/dts/main.py b/dts/main.py index 43311fa847..5d4714b0c3 100755 --- a/dts/main.py +++ b/dts/main.py @@ -10,10 +10,17 @@ import logging -from framework import dts +from framework import settings def main() -> None: + """Set DTS settings, then run DTS. 
+ + The DTS settings are taken from the command line arguments and the environment variables. + """ + settings.SETTINGS = settings.get_settings() + from framework import dts + dts.run_all()

From patchwork Wed Nov 8 12:53:03 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Juraj Linkeš
X-Patchwork-Id: 133972
X-Patchwork-Delegate: thomas@monjalon.net
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com,
 jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v6 02/23] dts: add docstring checker
Date: Wed, 8 Nov 2023 13:53:03 +0100
Message-Id: <20231108125324.191005-2-juraj.linkes@pantheon.tech>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech>
References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech>
MIME-Version: 1.0

Python docstrings are the in-code way to document the code. The docstring checker
of choice is pydocstyle, which we're executing from Pylama, but the current latest
versions are not compatible due to [0], so pin the pydocstyle version to the latest
working version.

[0] https://github.com/klen/pylama/issues/232

Signed-off-by: Juraj Linkeš
---
 dts/poetry.lock | 12 ++++++------ dts/pyproject.toml | 6 +++++- 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock index f7b3b6d602..a734fa71f0 100644 --- a/dts/poetry.lock +++ b/dts/poetry.lock @@ -489,20 +489,20 @@ files = [ [[package]] name = "pydocstyle" -version = "6.3.0" +version = "6.1.1" description = "Python docstring style checker" optional = false python-versions = ">=3.6" files = [ - {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"}, - {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"}, + {file = "pydocstyle-6.1.1-py3-none-any.whl", hash = "sha256:6987826d6775056839940041beef5c08cc7e3d71d63149b48e36727f70144dc4"}, + {file = "pydocstyle-6.1.1.tar.gz", hash = "sha256:1d41b7c459ba0ee6c345f2eb9ae827cab14a7533a88c5c6f7e94923f72df92dc"}, ] [package.dependencies] -snowballstemmer = ">=2.2.0" +snowballstemmer = "*" [package.extras] -toml = ["tomli (>=1.2.3)"] +toml = ["toml"] [[package]] name = "pyflakes" @@ -837,4 +837,4 @@ jsonschema = ">=4,<5" [metadata] lock-version = "2.0" python-versions = "^3.10" -content-hash = "0b1e4a1cb8323e17e5ee5951c97e74bde6e60d0413d7b25b1803d5b2bab39639" +content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b" diff --git a/dts/pyproject.toml b/dts/pyproject.toml index 6762edfa6b..3943c87c87 100644 --- a/dts/pyproject.toml +++ b/dts/pyproject.toml @@ -25,6 +25,7 @@ PyYAML = "^6.0" types-PyYAML = "^6.0.8" fabric = "^2.7.1" scapy = "^2.5.0" +pydocstyle = "6.1.1" [tool.poetry.group.dev.dependencies] mypy = "^0.961" @@ -39,10 +40,13 @@ requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" [tool.pylama] -linters = "mccabe,pycodestyle,pyflakes" +linters = "mccabe,pycodestyle,pydocstyle,pyflakes" format = "pylint" max_line_length = 88 # https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#line-length +[tool.pylama.linter.pydocstyle] +convention = "google" + [tool.mypy] python_version = "3.10"
enable_error_code = ["ignore-without-code"]
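The convention = "google" setting under [tool.pylama.linter.pydocstyle] above makes pydocstyle check docstrings against the Google style. As an editor's hedged illustration (not part of the patch), a function documented in that style looks roughly like this; the helper is loosely modeled on the expand_range utility visible in the first patch, but the body and docstring here are illustrative only.

def expand_range(range_str: str) -> list[int]:
    """Expand a range string such as ``"0-3"`` into a list of integers.

    Args:
        range_str: The range to expand, e.g. ``"0-3"`` or ``"5"``.

    Returns:
        Every integer in the range, in ascending order.

    Raises:
        ValueError: If `range_str` is not a valid range.
    """
    start, _, end = range_str.partition("-")
    return list(range(int(start), int(end or start) + 1))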
From patchwork Wed Nov 8 12:53:04 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Juraj Linkeš
X-Patchwork-Id: 133974
X-Patchwork-Delegate: thomas@monjalon.net
Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk.
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:27 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 03/23] dts: add basic developer docs Date: Wed, 8 Nov 2023 13:53:04 +0100 Message-Id: <20231108125324.191005-3-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Expand the framework contribution guidelines and add how to document the code with Python docstrings. Signed-off-by: Juraj Linkeš --- doc/guides/tools/dts.rst | 73 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 73 insertions(+) diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst index 32c18ee472..cd771a428c 100644 --- a/doc/guides/tools/dts.rst +++ b/doc/guides/tools/dts.rst @@ -264,6 +264,65 @@ which be changed with the ``--output-dir`` command line argument. The results contain basic statistics of passed/failed test cases and DPDK version. +Contributing to DTS +------------------- + +There are two areas of contribution: The DTS framework and DTS test suites. + +The framework contains the logic needed to run test cases, such as connecting to nodes, +running DPDK apps and collecting results. + +The test cases call APIs from the framework to test their scenarios. Adding test cases may +require adding code to the framework as well. + + +Framework Coding Guidelines +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When adding code to the DTS framework, pay attention to the rest of the code +and try not to diverge much from it. The :ref:`DTS developer tools <dts_dev_tools>` will issue +warnings when some of the basics are not met. + +The code must be properly documented with docstrings. The style must conform to +the `Google style `_. +See an example of the style +`here `_. +For cases which are not covered by the Google style, refer +to `PEP 257 `_. There are some cases which are not covered by +the two style guides, where we deviate or where some additional clarification is helpful: + + * The __init__() methods of classes are documented separately from the docstring of the class + itself. + * The docstrings of implemented abstract methods should refer to the superclass's definition + if there's no deviation. + * Instance variables/attributes should be documented in the docstring of the class + in the ``Attributes:`` section. + * The dataclass.dataclass decorator changes how the attributes are processed. The dataclass + attributes which result in instance variables/attributes should also be recorded + in the ``Attributes:`` section. + * Class variables/attributes, on the other hand, should be documented with ``#:`` above + the type annotated line. The description may be omitted if the meaning is obvious. + * The Enum and TypedDict also process the attributes in particular ways and should be documented + with ``#:`` as well.
This is mainly so that the autogenerated docs contain the assigned value. + * When referencing a parameter of a function or a method in their docstring, don't use + any articles and put the parameter into single backticks. This mimics the style of + `Python's documentation `_. + * When specifying a value, use double backticks:: + + def foo(greet: bool) -> None: + """Demonstration of single and double backticks. + + `greet` controls whether ``Hello World`` is printed. + + Args: + greet: Whether to print the ``Hello World`` message. + """ + if greet: + print(f"Hello World") + + * The docstring maximum line length is the same as the code maximum line length. + + How To Write a Test Suite ------------------------- @@ -293,6 +352,18 @@ There are four types of methods that comprise a test suite: | These methods don't need to be implemented if there's no need for them in a test suite. In that case, nothing will happen when they're is executed. +#. **Configuration, traffic and other logic** + + The ``TestSuite`` class contains a variety of methods for anything that + a test suite setup, a teardown, or a test case may need to do. + + The test suites also frequently use a DPDK app, such as testpmd, in interactive mode + and use the interactive shell instances directly. + + These are the two main ways to call the framework logic in test suites. If there's any + functionality or logic missing from the framework, it should be implemented so that + the test suites can use one of these two ways. + #. **Test case verification** Test case verification should be done with the ``verify`` method, which records the result. @@ -308,6 +379,8 @@ There are four types of methods that comprise a test suite: and used by the test suite via the ``sut_node`` field. +.. _dts_dev_tools: + DTS Developer Tools ------------------- From patchwork Wed Nov 8 12:53:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133975 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 71560432D6; Wed, 8 Nov 2023 13:53:57 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A48B542E22; Wed, 8 Nov 2023 13:53:34 +0100 (CET) Received: from mail-lf1-f50.google.com (mail-lf1-f50.google.com [209.85.167.50]) by mails.dpdk.org (Postfix) with ESMTP id 3FCE542E0B for ; Wed, 8 Nov 2023 13:53:30 +0100 (CET) Received: by mail-lf1-f50.google.com with SMTP id 2adb3069b0e04-507bd644a96so9842807e87.3 for ; Wed, 08 Nov 2023 04:53:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448009; x=1700052809; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=1nf+Rn9xF8AJgKgVM5dfS3bAeJ3oXogtFzeJDmarYnY=; b=OMn38aXUuQ1HAeFB/8ObVWlwDvtY/87/nET0k564QP6hpCOOfP29rz4ejN5wojFCMg 84G+XwgHrQCRNH7WSWUWwTd0jQoN8ZbOgea5syqaUCoWM3KGjHoM2VqBSmmQuoBfxBSR 5qwt48Oyonx6FFb9OfUq09m2DBC3NKLjIboYfrqJ1LvuVcciydOnjsAha1tQa5f8iBmN xU9f9IG7/a58ff5zU+U01T66b9uWPQ3gvJILJQa4dtenmJw1QwMWshoWrjuaRpoqrvAF TFmGJ2tRccFiQqTy4kB2D58bNMAsOnVN7T/AyqCX0C3/wCUbY/4iPvpbE9qXypsdlx35 Gw0Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448010; 
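To tie the guidelines above together, here is a small sketch of a class documented according to them; the class is hypothetical and only illustrates the ``Attributes:`` section, the ``#:`` class variable comments and the backtick conventions::

    class ExampleNode:
        """A hypothetical node used only to illustrate the conventions.

        Attributes:
            name: The name of the node, an instance attribute.
        """

        #: The timeout used by all instances (a class variable).
        DEFAULT_TIMEOUT: float = 15.0

        name: str

        def __init__(self, name: str):
            """Store `name` for later use.

            The __init__() method is documented separately from the class docstring.

            Args:
                name: The name of the node. An empty name is stored as ``unknown``.
            """
            self.name = name if name else "unknown"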
x=1700052810; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=1nf+Rn9xF8AJgKgVM5dfS3bAeJ3oXogtFzeJDmarYnY=; b=IaOS2MtiUuI2ldxJapvxgeKfRMnZvUKUEJJCOXma0jGXyRUBeATOp+4Fq834veTlEE gISvPLnngelue2DEOs9OyDhKiBbiqwTxwzmBuqLFTElHQXvSlCIu9UmUMDfzqVGV15SA wRolLgo7Ixmg2PFjyp/2fC9mBf1Nawvg0XL524Nn1y10u/rtjwbWgKY3WI/4g6+Qwfmy DNox9Uj5JOxb50RrJ8zOkI8x4zJq+CCQ1eOkvyD7oW0OZvOXU5BIbItTdGyUkejocQvO 4UMAYMLE++52Ip4ZnWAygO0gqodlAxImJvpqMJdijC8c8AMa257U0x6Us8Bn+iIW4ugj 8tcA== X-Gm-Message-State: AOJu0YyttIUyA5Hdc6PB5WP7L/xP8z+DDrFIbmEI5xy7/OG1XO6q1jvb Zpgz2WTRTMGzV/Mk6knD63KYGg== X-Google-Smtp-Source: AGHT+IF6PN4p2bA0mcw6u0ImMYv/6aGvSKyVAKGaoZz7J0zk8WsCQw1GREOdkovbcFxOqlEIZh5YBg== X-Received: by 2002:a05:6512:4811:b0:509:4405:d5a8 with SMTP id eo17-20020a056512481100b005094405d5a8mr1205112lfb.68.1699448009601; Wed, 08 Nov 2023 04:53:29 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. [81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:29 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 04/23] dts: exceptions docstring update Date: Wed, 8 Nov 2023 13:53:05 +0100 Message-Id: <20231108125324.191005-4-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/__init__.py | 12 ++++- dts/framework/exception.py | 106 +++++++++++++++++++++++++------------ 2 files changed, 83 insertions(+), 35 deletions(-) diff --git a/dts/framework/__init__.py b/dts/framework/__init__.py index d551ad4bf0..662e6ccad2 100644 --- a/dts/framework/__init__.py +++ b/dts/framework/__init__.py @@ -1,3 +1,13 @@ # SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2022 PANTHEON.tech s.r.o. +# Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022 University of New Hampshire + +"""Libraries and utilities for running DPDK Test Suite (DTS). + +The various modules in the DTS framework offer: + +* Connections to nodes, both interactive and non-interactive, +* A straightforward way to add support for different operating systems of remote nodes, +* Test suite setup, execution and teardown, along with test case setup, execution and teardown, +* Pre-test suite setup and post-test suite teardown. +""" diff --git a/dts/framework/exception.py b/dts/framework/exception.py index 7489c03570..ee1562c672 100644 --- a/dts/framework/exception.py +++ b/dts/framework/exception.py @@ -3,8 +3,10 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire -""" -User-defined exceptions used across the framework. +"""DTS exceptions. 
+ +The exceptions all have different severities expressed as an integer. +The highest severity of all raised exception is used as the exit code of DTS. """ from enum import IntEnum, unique @@ -13,59 +15,79 @@ @unique class ErrorSeverity(IntEnum): - """ - The severity of errors that occur during DTS execution. + """The severity of errors that occur during DTS execution. + All exceptions are caught and the most severe error is used as return code. """ + #: NO_ERR = 0 + #: GENERIC_ERR = 1 + #: CONFIG_ERR = 2 + #: REMOTE_CMD_EXEC_ERR = 3 + #: SSH_ERR = 4 + #: DPDK_BUILD_ERR = 10 + #: TESTCASE_VERIFY_ERR = 20 + #: BLOCKING_TESTSUITE_ERR = 25 class DTSError(Exception): - """ - The base exception from which all DTS exceptions are derived. - Stores error severity. + """The base exception from which all DTS exceptions are subclassed. + + Do not use this exception, only use subclassed exceptions. """ + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR class SSHTimeoutError(DTSError): - """ - Command execution timeout. - """ + """The SSH execution of a command timed out.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR _command: str def __init__(self, command: str): + """Define the meaning of the first argument. + + Args: + command: The executed command. + """ self._command = command def __str__(self) -> str: - return f"TIMEOUT on {self._command}" + """Add some context to the string representation.""" + return f"{self._command} execution timed out." class SSHConnectionError(DTSError): - """ - SSH connection error. - """ + """An unsuccessful SSH connection.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR _host: str _errors: list[str] def __init__(self, host: str, errors: list[str] | None = None): + """Define the meaning of the first two arguments. + + Args: + host: The hostname to which we're trying to connect. + errors: Any errors that occurred during the connection attempt. + """ self._host = host self._errors = [] if errors is None else errors def __str__(self) -> str: + """Include the errors in the string representation.""" message = f"Error trying to connect with {self._host}." if self._errors: message += f" Errors encountered while retrying: {', '.join(self._errors)}" @@ -74,43 +96,53 @@ def __str__(self) -> str: class SSHSessionDeadError(DTSError): - """ - SSH session is not alive. - It can no longer be used. - """ + """The SSH session is no longer alive.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR _host: str def __init__(self, host: str): + """Define the meaning of the first argument. + + Args: + host: The hostname of the disconnected node. + """ self._host = host def __str__(self) -> str: - return f"SSH session with {self._host} has died" + """Add some context to the string representation.""" + return f"SSH session with {self._host} has died." class ConfigurationError(DTSError): - """ - Raised when an invalid configuration is encountered. - """ + """An invalid configuration.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR class RemoteCommandExecutionError(DTSError): - """ - Raised when a command executed on a Node returns a non-zero exit status. - """ + """An unsuccessful execution of a remote command.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR + #: The executed command. command: str _command_return_code: int def __init__(self, command: str, command_return_code: int): + """Define the meaning of the first two arguments. + + Args: + command: The executed command. 
+ command_return_code: The return code of the executed command. + """ self.command = command self._command_return_code = command_return_code def __str__(self) -> str: + """Include both the command and return code in the string representation.""" return ( f"Command {self.command} returned a non-zero exit code: " f"{self._command_return_code}" @@ -118,35 +150,41 @@ def __str__(self) -> str: class RemoteDirectoryExistsError(DTSError): - """ - Raised when a remote directory to be created already exists. - """ + """A directory that exists on a remote node.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR class DPDKBuildError(DTSError): - """ - Raised when DPDK build fails for any reason. - """ + """A DPDK build failure.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR class TestCaseVerifyError(DTSError): - """ - Used in test cases to verify the expected behavior. - """ + """A test case failure.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR class BlockingTestSuiteError(DTSError): + """A failure in a blocking test suite.""" + + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR _suite_name: str def __init__(self, suite_name: str) -> None: + """Define the meaning of the first argument. + + Args: + suite_name: The blocking test suite. + """ self._suite_name = suite_name def __str__(self) -> str: + """Add some context to the string representation.""" return f"Blocking suite {self._suite_name} failed." From patchwork Wed Nov 8 12:53:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133976 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D9AB1432D6; Wed, 8 Nov 2023 13:54:06 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BD6A142E27; Wed, 8 Nov 2023 13:53:35 +0100 (CET) Received: from mail-ed1-f47.google.com (mail-ed1-f47.google.com [209.85.208.47]) by mails.dpdk.org (Postfix) with ESMTP id 0BF4B42E14 for ; Wed, 8 Nov 2023 13:53:32 +0100 (CET) Received: by mail-ed1-f47.google.com with SMTP id 4fb4d7f45d1cf-53d9f001b35so11500774a12.2 for ; Wed, 08 Nov 2023 04:53:32 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448011; x=1700052811; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=OsLxKn4mOXV2TuK2LbXZ03RA5dZb5BIPscylrtxoXE4=; b=Kdz3nbqogmOnXRb8gHCToGncsQiy43HDrIN4aTLSRmHYqxS99QGh7E5FneK5BqQfqH Q5y89LAI1wY9J/2n/vUdJ2ke/950XAcUTps9z/mUPHbhobgoSkE0Dt9yT0WwDRfEO2ty IyLZWDIBAZ2j45+kq2JaH4KCYzq5zIOHwS/oGPtAPuo9WtEfWq+Gfsyphg+uQnmkpELR InyGZbkd7LF+msie0cJgTTYluH0QH5u7nh4d8LB73fxv8ZM5RaveIFvNh2XUe5qfHOz+ ANhNTHKw0lG4JaU7JbN5uicr5cE8j5RI3HKYRdpfczkaOrwmhZFRx0+2ArvgcBsWW0zW 0ayg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448011; x=1700052811; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=OsLxKn4mOXV2TuK2LbXZ03RA5dZb5BIPscylrtxoXE4=; b=JdgwYHnB8bVy3jpU523ElSYz/pifNmgIEgRKs++3mqrfjh343s2OU24Yvw/MnQGQTv 
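The pattern above repeats for every exception: a short class docstring, a ``#:`` documented ``severity`` class variable and, where there's an ``__init__()``, a separate docstring for it. A minimal sketch of a new exception following that pattern might look like the following; ``ExampleOperationError`` is made up and not part of the patch::

    from typing import ClassVar

    from framework.exception import DTSError, ErrorSeverity


    class ExampleOperationError(DTSError):
        """An unsuccessful example operation."""

        #:
        severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
        _operation: str

        def __init__(self, operation: str):
            """Define the meaning of the first argument.

            Args:
                operation: The operation that failed.
            """
            self._operation = operation

        def __str__(self) -> str:
            """Add some context to the string representation."""
            return f"Example operation {self._operation} failed."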
NKH3mNuXEbXFC4eHOAslyR5HEmCHG6fFxpv6SIlVt3/mStsLu3wQmZvJm3GHqf3eoNyb fB8Dng6UEcKlxy1q/KenCQ8BwXBdPdgjl7QDldc0sBEblqttk0sAdZXrfM5plW7peE9K DsziOpj00m5OEv5RBoZ6uU2ciU0Ff4NGw/q2AbQ1qcjYTjPigx6Da6Y25gGKqqljSCUn TAhoYqeCLF7XPPsN9Vujn6TOa6o6XQxjs3roa8imlx5Zqhe3tHzliWzDoCuulSLg/Z6x NUbw== X-Gm-Message-State: AOJu0YzXa0KOEUM2fZjyGASUSw4kAUwV1TCTAzb3t3BfKShpniu9orBo tKOI/X2Xlc70y7oqf3Ot3/fJWw== X-Google-Smtp-Source: AGHT+IEAoXMEOtFh78UqEzTVa5waRWL4Yg+Bcy90urSp+7bSnAEiUbQ9yTVAkIOaDKvCzBUvHHpfMA== X-Received: by 2002:a50:d51d:0:b0:543:b9ae:a0d5 with SMTP id u29-20020a50d51d000000b00543b9aea0d5mr1480197edi.4.1699448011768; Wed, 08 Nov 2023 04:53:31 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. [81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:30 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 05/23] dts: settings docstring update Date: Wed, 8 Nov 2023 13:53:06 +0100 Message-Id: <20231108125324.191005-5-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/settings.py | 101 +++++++++++++++++++++++++++++++++++++- 1 file changed, 100 insertions(+), 1 deletion(-) diff --git a/dts/framework/settings.py b/dts/framework/settings.py index 7f5841d073..787db7c198 100644 --- a/dts/framework/settings.py +++ b/dts/framework/settings.py @@ -3,6 +3,70 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022 University of New Hampshire +"""Environment variables and command line arguments parsing. + +This is a simple module utilizing the built-in argparse module to parse command line arguments, +augment them with values from environment variables and make them available across the framework. + +The command line value takes precedence, followed by the environment variable value, +followed by the default value defined in this module. + +The command line arguments along with the supported environment variables are: + +.. option:: --config-file +.. envvar:: DTS_CFG_FILE + + The path to the YAML test run configuration file. + +.. option:: --output-dir, --output +.. envvar:: DTS_OUTPUT_DIR + + The directory where DTS logs and results are saved. + +.. option:: --compile-timeout +.. envvar:: DTS_COMPILE_TIMEOUT + + The timeout for compiling DPDK. + +.. option:: -t, --timeout +.. envvar:: DTS_TIMEOUT + + The timeout for all DTS operation except for compiling DPDK. + +.. option:: -v, --verbose +.. envvar:: DTS_VERBOSE + + Set to any value to enable logging everything to the console. + +.. option:: -s, --skip-setup +.. envvar:: DTS_SKIP_SETUP + + Set to any value to skip building DPDK. + +.. option:: --tarball, --snapshot, --git-ref +.. 
envvar:: DTS_DPDK_TARBALL + + The path to a DPDK tarball, git commit ID, tag ID or tree ID to test. + +.. option:: --test-cases +.. envvar:: DTS_TESTCASES + + A comma-separated list of test cases to execute. Unknown test cases will be silently ignored. + +.. option:: --re-run, --re_run +.. envvar:: DTS_RERUN + + Re-run each test case this many times in case of a failure. + +Attributes: + SETTINGS: The module level variable storing framework-wide DTS settings. + +Typical usage example:: + + from framework.settings import SETTINGS + foo = SETTINGS.foo +""" + import argparse import os from collections.abc import Callable, Iterable, Sequence @@ -16,6 +80,23 @@ def _env_arg(env_var: str) -> Any: + """A helper method augmenting the argparse Action with environment variables. + + If the supplied environment variable is defined, then the default value + of the argument is modified. This satisfies the priority order of + command line argument > environment variable > default value. + + Arguments with no values (flags) should be defined using the const keyword argument + (True or False). When the argument is specified, it will be set to const, if not specified, + the default will be stored (possibly modified by the corresponding environment variable). + + Other arguments work the same as default argparse arguments, that is using + the default 'store' action. + + Returns: + The modified argparse.Action. + """ + class _EnvironmentArgument(argparse.Action): def __init__( self, @@ -68,14 +149,28 @@ def __call__( @dataclass(slots=True) class Settings: + """Default framework-wide user settings. + + The defaults may be modified at the start of the run. + """ + + #: config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml") + #: output_dir: str = "output" + #: timeout: float = 15 + #: verbose: bool = False + #: skip_setup: bool = False + #: dpdk_tarball_path: Path | str = "dpdk.tar.xz" + #: compile_timeout: float = 1200 + #: test_cases: list[str] = field(default_factory=list) + #: re_run: int = 0 @@ -169,7 +264,7 @@ def _get_parser() -> argparse.ArgumentParser: action=_env_arg("DTS_RERUN"), default=SETTINGS.re_run, type=int, - help="[DTS_RERUN] Re-run each test case the specified amount of times " + help="[DTS_RERUN] Re-run each test case the specified number of times " "if a test failure occurs", ) @@ -177,6 +272,10 @@ def _get_parser() -> argparse.ArgumentParser: def get_settings() -> Settings: + """Create new settings with inputs from the user. + + The inputs are taken from the command line and from environment variables. 
+ """ parsed_args = _get_parser().parse_args() return Settings( config_file_path=parsed_args.config_file, From patchwork Wed Nov 8 12:53:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133977 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2BF8C432D6; Wed, 8 Nov 2023 13:54:15 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id EB47F42E2C; Wed, 8 Nov 2023 13:53:36 +0100 (CET) Received: from mail-ed1-f44.google.com (mail-ed1-f44.google.com [209.85.208.44]) by mails.dpdk.org (Postfix) with ESMTP id F1A5642E1C for ; Wed, 8 Nov 2023 13:53:32 +0100 (CET) Received: by mail-ed1-f44.google.com with SMTP id 4fb4d7f45d1cf-53dd752685fso11485396a12.3 for ; Wed, 08 Nov 2023 04:53:32 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448012; x=1700052812; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Qj0Q+Of/s/zCT7fsjUhrKL4jIXa45lOwCl04VSdBSJA=; b=WCI/4YZ5iSE521WZtfCiMoMl/Ipme03vxfzk00I+yKJkhFNA430dTgvNMifc0FyaFi 2TD7MEbxeFTrpR99si1uCzJdXAVs0Y4oZ2vKV2PuNYrILwXYem1QzH9fy/CWJzXmponk 8NhEIeC8MFfmkeBHxHXMtuHUvoBaG0KOIGaztctyRTMwhH3TAN4NDLEgnOBzzJFpexj1 tJb/NNBXg8ATvInoo8SWvnSrsDk+v66JsdStu5jx179b8caZ0jxx+UyWVoHrPg65uWsa WEMfzZpaTGt+fWOn/fVoIuX8EP+zPNGdcQaUI2SKFZ9yfyfyXoK03CjH1pt8cQrTKIev 67eg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448012; x=1700052812; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Qj0Q+Of/s/zCT7fsjUhrKL4jIXa45lOwCl04VSdBSJA=; b=Y8xeszmb8Mbol9nV1pSjIaZTJdfGwz7qucIffwU3SHmcTxytO9lEMHlyBz9rKeS/Sw 90l43rvRF7bOUK+YzpS9BEAs1gYsJS83HRRqEvPgpE1gEwS7gM/uDYpE8NiJacnKdvSo 9X2op+X3pT0vPDvYm3kAXbHrh1s9kvSS9nwDk2rzm2fGCA0mmT90Vux8Q6/vmgMNCSsh 6/Cs/POHKZWOHr4r472r+Lu5KRHOHaa2kkC/DMoCK+eP6UTrMT1bOjJuz5ouPmVpwUG+ Wh2mCyEr7zf1apAfNvuOtFkVqXVqfgTejGfzqtYYL3aXGmAPlPxV/lemuXJvE/6oACaU KV4Q== X-Gm-Message-State: AOJu0Yw48DBEOP6NVatAfvtaRzcmNWtjTOIbf4dxT23Zpim1pSJ1ub+C 5lKUVhPNszvcHLECROSPrHCs3w== X-Google-Smtp-Source: AGHT+IHQegs1Vujixn961uJaLX4sLMupfM/Vk+g0wg0M5oq96LR7uNdXp/eiiIlERb/xacZAsNrqSQ== X-Received: by 2002:a50:c04b:0:b0:53e:343a:76bf with SMTP id u11-20020a50c04b000000b0053e343a76bfmr1542085edd.1.1699448012578; Wed, 08 Nov 2023 04:53:32 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. 
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:32 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 06/23] dts: logger and settings docstring update Date: Wed, 8 Nov 2023 13:53:07 +0100 Message-Id: <20231108125324.191005-6-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/logger.py | 72 +++++++++++++++++++++---------- dts/framework/utils.py | 96 ++++++++++++++++++++++++++++++----------- 2 files changed, 121 insertions(+), 47 deletions(-) diff --git a/dts/framework/logger.py b/dts/framework/logger.py index bb2991e994..d3eb75a4e4 100644 --- a/dts/framework/logger.py +++ b/dts/framework/logger.py @@ -3,9 +3,9 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire -""" -DTS logger module with several log level. DTS framework and TestSuite logs -are saved in different log files. +"""DTS logger module. + +DTS framework and TestSuite logs are saved in different log files. """ import logging @@ -18,19 +18,21 @@ stream_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s" -class LoggerDictType(TypedDict): - logger: "DTSLOG" - name: str - node: str - +class DTSLOG(logging.LoggerAdapter): + """DTS logger adapter class for framework and testsuites. -# List for saving all using loggers -Loggers: list[LoggerDictType] = [] + The :option:`--verbose` command line argument and the :envvar:`DTS_VERBOSE` environment + variable control the verbosity of output. If enabled, all messages will be emitted to the + console. + The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment + variable modify the directory where the logs will be stored. -class DTSLOG(logging.LoggerAdapter): - """ - DTS log class for framework and testsuite. + Attributes: + node: The additional identifier. Currently unused. + sh: The handler which emits logs to console. + fh: The handler which emits logs to a file. + verbose_fh: Just as fh, but logs with a different, more verbose, format. """ _logger: logging.Logger @@ -40,6 +42,15 @@ class DTSLOG(logging.LoggerAdapter): verbose_fh: logging.FileHandler def __init__(self, logger: logging.Logger, node: str = "suite"): + """Extend the constructor with additional handlers. + + One handler logs to the console, the other one to a file, with either a regular or verbose + format. + + Args: + logger: The logger from which to create the logger adapter. + node: An additional identifier. Currently unused. 
+ """ self._logger = logger # 1 means log everything, this will be used by file handlers if their level # is not set @@ -92,26 +103,43 @@ def __init__(self, logger: logging.Logger, node: str = "suite"): super(DTSLOG, self).__init__(self._logger, dict(node=self.node)) def logger_exit(self) -> None: - """ - Remove stream handler and logfile handler. - """ + """Remove the stream handler and the logfile handler.""" for handler in (self.sh, self.fh, self.verbose_fh): handler.flush() self._logger.removeHandler(handler) +class _LoggerDictType(TypedDict): + logger: DTSLOG + name: str + node: str + + +# List for saving all loggers in use +_Loggers: list[_LoggerDictType] = [] + + def getLogger(name: str, node: str = "suite") -> DTSLOG: + """Get DTS logger adapter identified by name and node. + + An existing logger will be return if one with the exact name and node already exists. + A new one will be created and stored otherwise. + + Args: + name: The name of the logger. + node: An additional identifier for the logger. + + Returns: + A logger uniquely identified by both name and node. """ - Get logger handler and if there's no handler for specified Node will create one. - """ - global Loggers + global _Loggers # return saved logger - logger: LoggerDictType - for logger in Loggers: + logger: _LoggerDictType + for logger in _Loggers: if logger["name"] == name and logger["node"] == node: return logger["logger"] # return new logger dts_logger: DTSLOG = DTSLOG(logging.getLogger(name), node) - Loggers.append({"logger": dts_logger, "name": name, "node": node}) + _Loggers.append({"logger": dts_logger, "name": name, "node": node}) return dts_logger diff --git a/dts/framework/utils.py b/dts/framework/utils.py index f0c916471c..0613adf7ad 100644 --- a/dts/framework/utils.py +++ b/dts/framework/utils.py @@ -3,6 +3,16 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire +"""Various utility classes and functions. + +These are used in multiple modules across the framework. They're here because +they provide some non-specific functionality, greatly simplify imports or just don't +fit elsewhere. + +Attributes: + REGEX_FOR_PCI_ADDRESS: The regex representing a PCI address, e.g. ``0000:00:08.0``. +""" + import atexit import json import os @@ -19,12 +29,20 @@ def expand_range(range_str: str) -> list[int]: - """ - Process range string into a list of integers. There are two possible formats: - n - a single integer - n-m - a range of integers + """Process `range_str` into a list of integers. + + There are two possible formats of `range_str`: + + * ``n`` - a single integer, + * ``n-m`` - a range of integers. - The returned range includes both n and m. Empty string returns an empty list. + The returned range includes both ``n`` and ``m``. Empty string returns an empty list. + + Args: + range_str: The range to expand. + + Returns: + All the numbers from the range. """ expanded_range: list[int] = [] if range_str: @@ -39,6 +57,14 @@ def expand_range(range_str: str) -> list[int]: def get_packet_summaries(packets: list[Packet]) -> str: + """Format a string summary from `packets`. + + Args: + packets: The packets to format. + + Returns: + The summary of `packets`. 
+ """ if len(packets) == 1: packet_summaries = packets[0].summary() else: @@ -49,6 +75,8 @@ def get_packet_summaries(packets: list[Packet]) -> str: class StrEnum(Enum): + """Enum with members stored as strings.""" + @staticmethod def _generate_next_value_( name: str, start: int, count: int, last_values: object @@ -56,22 +84,29 @@ def _generate_next_value_( return name def __str__(self) -> str: + """The string representation is the name of the member.""" return self.name class MesonArgs(object): - """ - Aggregate the arguments needed to build DPDK: - default_library: Default library type, Meson allows "shared", "static" and "both". - Defaults to None, in which case the argument won't be used. - Keyword arguments: The arguments found in meson_options.txt in root DPDK directory. - Do not use -D with them, for example: - meson_args = MesonArgs(enable_kmods=True). - """ + """Aggregate the arguments needed to build DPDK.""" _default_library: str def __init__(self, default_library: str | None = None, **dpdk_args: str | bool): + """Initialize the meson arguments. + + Args: + default_library: The default library type, Meson supports ``shared``, ``static`` and + ``both``. Defaults to :data:`None`, in which case the argument won't be used. + dpdk_args: The arguments found in ``meson_options.txt`` in root DPDK directory. + Do not use ``-D`` with them. + + Example: + :: + + meson_args = MesonArgs(enable_kmods=True). + """ self._default_library = ( f"--default-library={default_library}" if default_library else "" ) @@ -83,6 +118,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool): ) def __str__(self) -> str: + """The actual args.""" return " ".join(f"{self._default_library} {self._dpdk_args}".split()) @@ -93,35 +129,33 @@ class _TarCompressionFormat(StrEnum): and Enum values are the associated file extensions. """ + #: gzip = "gz" + #: compress = "Z" + #: bzip2 = "bz2" + #: lzip = "lz" + #: lzma = "lzma" + #: lzop = "lzo" + #: xz = "xz" + #: zstd = "zst" class DPDKGitTarball(object): - """Create a compressed tarball of DPDK from the repository. - - The DPDK version is specified with git object git_ref. - The tarball will be compressed with _TarCompressionFormat, - which must be supported by the DTS execution environment. - The resulting tarball will be put into output_dir. + """Compressed tarball of DPDK from the repository. - The class supports the os.PathLike protocol, + The class supports the :class:`os.PathLike` protocol, which is used to get the Path of the tarball:: from pathlib import Path tarball = DPDKGitTarball("HEAD", "output") tarball_path = Path(tarball) - - Arguments: - git_ref: A git commit ID, tag ID or tree ID. - output_dir: The directory where to put the resulting tarball. - tar_compression_format: The compression format to use. """ _git_ref: str @@ -136,6 +170,17 @@ def __init__( output_dir: str, tar_compression_format: _TarCompressionFormat = _TarCompressionFormat.xz, ): + """Create the tarball during initialization. + + The DPDK version is specified with `git_ref`. The tarball will be compressed with + `tar_compression_format`, which must be supported by the DTS execution environment. + The resulting tarball will be put into `output_dir`. + + Args: + git_ref: A git commit ID, tag ID or tree ID. + output_dir: The directory where to put the resulting tarball. + tar_compression_format: The compression format to use. 
+ """ self._git_ref = git_ref self._tar_compression_format = tar_compression_format @@ -204,4 +249,5 @@ def _delete_tarball(self) -> None: os.remove(self._tarball_path) def __fspath__(self) -> str: + """The os.PathLike protocol implementation.""" return str(self._tarball_path) From patchwork Wed Nov 8 12:53:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133978 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 71E4F432D6; Wed, 8 Nov 2023 13:54:26 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6747C42E35; Wed, 8 Nov 2023 13:53:38 +0100 (CET) Received: from mail-ed1-f44.google.com (mail-ed1-f44.google.com [209.85.208.44]) by mails.dpdk.org (Postfix) with ESMTP id 4177E402DA for ; Wed, 8 Nov 2023 13:53:34 +0100 (CET) Received: by mail-ed1-f44.google.com with SMTP id 4fb4d7f45d1cf-53d9f001b35so11500832a12.2 for ; Wed, 08 Nov 2023 04:53:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448014; x=1700052814; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=2ZNMam0a1R4XoTdcPdnNVy5Fpo9kpNHkFyw2wLxXUoc=; b=e73rX1VNa/pnUCMfYt5A5nuYJQBVL44hby+mLLrij1Mwb6C2vOE+IZyEgVIc6NQ0jZ o32Uw/23n+SkSRmXPTizj2CggWd9mqL/51lNASVJm4dWKbQbRhLB28uYh6w9zX4SLrr5 KpKXvF93Xh1oxbmrF4aoML9oXnoiWUWOiMt/UHurvMGK39GipjARdSbCofFpxxxhwYzN LC9SyOcHzSMPBzwDAsSMjIKo3t80tGoIznR9lEyv77EsT5tohP899XwEH+g2cdKUr1zD AciRNq2pVY8UmRl1gLABjMAFLvTH0OQL5eR+cloqS7wg3e/jQbtbg3sbb2u5TtXdpgfK pDXQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448014; x=1700052814; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=2ZNMam0a1R4XoTdcPdnNVy5Fpo9kpNHkFyw2wLxXUoc=; b=TTafWNCmFwM+vq3OH3VJyvuV4xEIlQWDFv+BCAY98qWaKX/uFfFPFOE2CI0BHkrtsG hs8nb5xviTjh566RamTXeCFEU/Z3Bpqz+BGhZ+zK/RTpwbZm7MexEIOs6I9fvsAsfAvT pa35YZ+vtdyUCS/KadmIkvirW+qcrjZAR4gr78GgnKgpvJ11WXHbagUJaDMR/iW8s8hr v+6T/bjNgy5JQg7sL4O8skcc50rYtiL0c4W+ogOSOONBTEnUX6Qdi2tauu2w9ee2J303 eZ64B7bSnACdGZgZ02iyj7+cT/paQTyXHN2ulPu+ClPvOlEU3e0FcYdii/viyToTKEgO d64Q== X-Gm-Message-State: AOJu0YxSyAmws5tmtN8aWHJooDtpwrRVLg/KSh6NEeRfKSqg9zeaVlhr fI4s5b341TS4BO1PnGcMpyFvfp41erUySFN2yF/6Zw== X-Google-Smtp-Source: AGHT+IE0JeO/8adVks+0wAYSxXVd1Q+i5LaN50di9sPYzZX9OwdIyKqP0+YIbDBSsqe7abEQUfo2iQ== X-Received: by 2002:a50:cc8a:0:b0:543:5d2e:a9c3 with SMTP id q10-20020a50cc8a000000b005435d2ea9c3mr1512636edi.20.1699448013876; Wed, 08 Nov 2023 04:53:33 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. 
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:33 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 07/23] dts: dts runner and main docstring update Date: Wed, 8 Nov 2023 13:53:08 +0100 Message-Id: <20231108125324.191005-7-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/dts.py | 128 ++++++++++++++++++++++++++++++++++++------- dts/main.py | 8 ++- 2 files changed, 112 insertions(+), 24 deletions(-) diff --git a/dts/framework/dts.py b/dts/framework/dts.py index 4c7fb0c40a..331fed7dc4 100644 --- a/dts/framework/dts.py +++ b/dts/framework/dts.py @@ -3,6 +3,33 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire +r"""Test suite runner module. + +A DTS run is split into stages: + + #. Execution stage, + #. Build target stage, + #. Test suite stage, + #. Test case stage. + +The module is responsible for running tests on testbeds defined in the test run configuration. +Each setup or teardown of each stage is recorded in a :class:`~framework.test_result.DTSResult` or +one of its subclasses. The test case results are also recorded. + +If an error occurs, the current stage is aborted, the error is recorded and the run continues in +the next iteration of the same stage. The return code is the highest `severity` of all +:class:`~.framework.exception.DTSError`\s. + +Example: + An error occurs in a build target setup. The current build target is aborted and the run + continues with the next build target. If the errored build target was the last one in the given + execution, the next execution begins. + +Attributes: + dts_logger: The logger instance used in this module. + result: The top level result used in the module. +""" + import sys from .config import ( @@ -23,9 +50,38 @@ def run_all() -> None: - """ - The main process of DTS. Runs all build targets in all executions from the main - config file. + """Run all build targets in all executions from the test run configuration. + + Before running test suites, executions and build targets are first set up. + The executions and build targets defined in the test run configuration are iterated over. + The executions define which tests to run and where to run them and build targets define + the DPDK build setup. + + The tests suites are set up for each execution/build target tuple and each scheduled + test case within the test suite is set up, executed and torn down. After all test cases + have been executed, the test suite is torn down and the next build target will be tested. + + All the nested steps look like this: + + #. Execution setup + + #. Build target setup + + #. 
Test suite setup + + #. Test case setup + #. Test case logic + #. Test case teardown + + #. Test suite teardown + + #. Build target teardown + + #. Execution teardown + + The test cases are filtered according to the specification in the test run configuration and + the :option:`--test-cases` command line argument or + the :envvar:`DTS_TESTCASES` environment variable. """ global dts_logger global result @@ -87,6 +143,8 @@ def run_all() -> None: def _check_dts_python_version() -> None: + """Check the required Python version - v3.10.""" + def RED(text: str) -> str: return f"\u001B[31;1m{str(text)}\u001B[0m" @@ -111,9 +169,16 @@ def _run_execution( execution: ExecutionConfiguration, result: DTSResult, ) -> None: - """ - Run the given execution. This involves running the execution setup as well as - running all build targets in the given execution. + """Run the given execution. + + This involves running the execution setup as well as running all build targets + in the given execution. After that, execution teardown is run. + + Args: + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + execution: An execution's test run configuration. + result: The top level result object. """ dts_logger.info( f"Running execution with SUT '{execution.system_under_test_node.name}'." @@ -150,8 +215,18 @@ def _run_build_target( execution: ExecutionConfiguration, execution_result: ExecutionResult, ) -> None: - """ - Run the given build target. + """Run the given build target. + + This involves running the build target setup as well as running all test suites + in the given execution the build target is defined in. + After that, build target teardown is run. + + Args: + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + build_target: A build target's test run configuration. + execution: The build target's execution's test run configuration. + execution_result: The execution level result object associated with the execution. """ dts_logger.info(f"Running build target '{build_target.name}'.") build_target_result = execution_result.add_build_target(build_target) @@ -183,10 +258,17 @@ def _run_all_suites( execution: ExecutionConfiguration, build_target_result: BuildTargetResult, ) -> None: - """ - Use the given build_target to run execution's test suites - with possibly only a subset of test cases. - If no subset is specified, run all test cases. + """Run the execution's (possibly a subset) test suites using the current build_target. + + The function assumes the build target we're testing has already been built on the SUT node. + The current build target thus corresponds to the current DPDK build present on the SUT node. + + Args: + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + execution: The execution's test run configuration associated with the current build target. + build_target_result: The build target level result object associated + with the current build target. """ end_build_target = False if not execution.skip_smoke_tests: @@ -215,16 +297,22 @@ def _run_single_suite( build_target_result: BuildTargetResult, test_suite_config: TestSuiteConfig, ) -> None: - """Runs a single test suite. + """Run all test suite in a single test suite module. + + The function assumes the build target we're testing has already been built on the SUT node. + The current build target thus corresponds to the current DPDK build present on the SUT node. Args: - sut_node: Node to run tests on. - execution: Execution the test case belongs to. 
- build_target_result: Build target configuration test case is run on - test_suite_config: Test suite configuration + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + execution: The execution's test run configuration associated with the current build target. + build_target_result: The build target level result object associated + with the current build target. + test_suite_config: Test suite test run configuration specifying the test suite module + and possibly a subset of test cases of test suites in that module. Raises: - BlockingTestSuiteError: If a test suite that was marked as blocking fails. + BlockingTestSuiteError: If a blocking test suite fails. """ try: full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}" @@ -248,9 +336,7 @@ def _run_single_suite( def _exit_dts() -> None: - """ - Process all errors and exit with the proper exit code. - """ + """Process all errors and exit with the proper exit code.""" result.process() if dts_logger: diff --git a/dts/main.py b/dts/main.py index 5d4714b0c3..f703615d11 100755 --- a/dts/main.py +++ b/dts/main.py @@ -4,9 +4,7 @@ # Copyright(c) 2022 PANTHEON.tech s.r.o. # Copyright(c) 2022 University of New Hampshire -""" -A test framework for testing DPDK. -""" +"""The DTS executable.""" import logging @@ -17,6 +15,10 @@ def main() -> None: """Set DTS settings, then run DTS. The DTS settings are taken from the command line arguments and the environment variables. + The settings object is stored in the module-level variable settings.SETTINGS which the entire + framework uses. After importing the module (or the variable), any changes to the variable are + not going to be reflected without a re-import. This means that the SETTINGS variable must + be modified before the settings module is imported anywhere else in the framework. 
""" settings.SETTINGS = settings.get_settings() from framework import dts From patchwork Wed Nov 8 12:53:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133979 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 25E54432D6; Wed, 8 Nov 2023 13:54:35 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8605142E3B; Wed, 8 Nov 2023 13:53:39 +0100 (CET) Received: from mail-ed1-f50.google.com (mail-ed1-f50.google.com [209.85.208.50]) by mails.dpdk.org (Postfix) with ESMTP id 63BC142E19 for ; Wed, 8 Nov 2023 13:53:35 +0100 (CET) Received: by mail-ed1-f50.google.com with SMTP id 4fb4d7f45d1cf-53dfc28a2afso11516014a12.1 for ; Wed, 08 Nov 2023 04:53:35 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448015; x=1700052815; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=G6YPFi+z5sN9CIKgDbzi+2+pExm1ffPsLTIbtsOXKR0=; b=tQBwKRhGs03EWjzT5keaiK0eIDqWFCs4WVAHVeJLv0SonnuonuzA9QgFWaFo7dRxpI 8dKIkc4NMgqFEix5wBORzQUmF8nxd/LMxMnj/kSUJazHNPItgkwWhoFZYvjxO/gtDVcH GvpVLBzUeaOV897dB2xSY+GkD5P7e4meJ6+ouj/WKdJmYWFpBvcsz0jjYPUMsSzH5N21 C2b9MBAo66jAgR8bQ3CqnMcINlSuw8BTeggLk06mg4cyDoT+kkSDQ1JEYjBEdwM9ex5q IY1lPMGbUG3rAaSE9CtI0R14sS0KK0dovf7bskeJN0JxuEE926s9xDPeA1P/lsL0d5VW 8e/Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448015; x=1700052815; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=G6YPFi+z5sN9CIKgDbzi+2+pExm1ffPsLTIbtsOXKR0=; b=l+kK5s5of9KDAZJCLq66pzwjy204tvi2o1w7obSGiowsGdAOgMIphvxHzXg4seeCQV 5ZCqIayvB75EApguPy7EZ9P67kL6R5cslh3fSrRkpO0FBNrTDLB/eKNgvDw1X3/j9PNY aE9/jsHH7O0a+3ZSCFww4vq21tIfkNrmRgQMPlb7K4AXDp2UKcjQCxzMzmdsd3qyRpRO jhnr8I0GdMZLlPt/+X/N2ohmjBIsp1axf8Oy25QZQh3jMZusiH3EADHJqg7Zron720lo IScSsWkfHpLpvOR34swotaPG7Zue18HeNvlYWdJ8G6W2pavLFPzOa7qNEPZE8lKi148u 9Lhg== X-Gm-Message-State: AOJu0YxM0IEzZ8CAIvcv6bF+GcbTV98SNOygRe8EW1cchHnvoGdwJ6nM Gs+Si2bWbNvFMTjUOmaVDF+d5Q== X-Google-Smtp-Source: AGHT+IHFeajIFx9H5MAgEaasnxtPbhzrdqKmArq7mA9+2ZlLBGQZfQiTwO1Q5p82n5O38sF/qjIzcw== X-Received: by 2002:a50:8e16:0:b0:544:caac:db11 with SMTP id 22-20020a508e16000000b00544caacdb11mr1515733edw.24.1699448015004; Wed, 08 Nov 2023 04:53:35 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. 
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:34 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 08/23] dts: test suite docstring update Date: Wed, 8 Nov 2023 13:53:09 +0100 Message-Id: <20231108125324.191005-8-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/test_suite.py | 223 +++++++++++++++++++++++++++--------- 1 file changed, 168 insertions(+), 55 deletions(-) diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index d53553bf34..8daac35818 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -2,8 +2,19 @@ # Copyright(c) 2010-2014 Intel Corporation # Copyright(c) 2023 PANTHEON.tech s.r.o. -""" -Base class for creating DTS test cases. +"""Features common to all test suites. + +The module defines the :class:`TestSuite` class which doesn't contain any test cases, and as such +must be extended by subclasses which add test cases. The :class:`TestSuite` contains the basics +needed by subclasses: + + * Test suite and test case execution flow, + * Testbed (SUT, TG) configuration, + * Packet sending and verification, + * Test case verification. + +The module also defines a function, :func:`get_test_suites`, +for gathering test suites from a Python module. """ import importlib @@ -31,25 +42,44 @@ class TestSuite(object): - """ - The base TestSuite class provides methods for handling basic flow of a test suite: - * test case filtering and collection - * test suite setup/cleanup - * test setup/cleanup - * test case execution - * error handling and results storage - Test cases are implemented by derived classes. Test cases are all methods - starting with test_, further divided into performance test cases - (starting with test_perf_) and functional test cases (all other test cases). - By default, all test cases will be executed. A list of testcase str names - may be specified in conf.yaml or on the command line - to filter which test cases to run. - The methods named [set_up|tear_down]_[suite|test_case] should be overridden - in derived classes if the appropriate suite/test case fixtures are needed. + """The base class with methods for handling the basic flow of a test suite. + + * Test case filtering and collection, + * Test suite setup/cleanup, + * Test setup/cleanup, + * Test case execution, + * Error handling and results storage. + + Test cases are implemented by subclasses. Test cases are all methods starting with ``test_``, + further divided into performance test cases (starting with ``test_perf_``) + and functional test cases (all other test cases). + + By default, all test cases will be executed. 
A list of testcase names may be specified + in the YAML test run configuration file and in the :option:`--test-cases` command line argument + or in the :envvar:`DTS_TESTCASES` environment variable to filter which test cases to run. + The union of both lists will be used. Any unknown test cases from the latter lists + will be silently ignored. + + If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment variable + is set, in case of a test case failure, the test case will be executed again until it passes + or it fails that many times in addition of the first failure. + + The methods named ``[set_up|tear_down]_[suite|test_case]`` should be overridden in subclasses + if the appropriate test suite/test case fixtures are needed. + + The test suite is aware of the testbed (the SUT and TG) it's running on. From this, it can + properly choose the IP addresses and other configuration that must be tailored to the testbed. + + Attributes: + sut_node: The SUT node where the test suite is running. + tg_node: The TG node where the test suite is running. + is_blocking: Whether the test suite is blocking. A failure of a blocking test suite + will block the execution of all subsequent test suites in the current build target. """ sut_node: SutNode - is_blocking = False + tg_node: TGNode + is_blocking: bool = False _logger: DTSLOG _test_cases_to_run: list[str] _func: bool @@ -72,6 +102,19 @@ def __init__( func: bool, build_target_result: BuildTargetResult, ): + """Initialize the test suite testbed information and basic configuration. + + Process what test cases to run, create the associated :class:`TestSuiteResult`, + find links between ports and set up default IP addresses to be used when configuring them. + + Args: + sut_node: The SUT node where the test suite will run. + tg_node: The TG node where the test suite will run. + test_cases: The list of test cases to execute. + If empty, all test cases will be executed. + func: Whether to run functional tests. + build_target_result: The build target result this test suite is run in. + """ self.sut_node = sut_node self.tg_node = tg_node self._logger = getLogger(self.__class__.__name__) @@ -95,6 +138,7 @@ def __init__( self._tg_ip_address_ingress = ip_interface("192.168.101.3/24") def _process_links(self) -> None: + """Construct links between SUT and TG ports.""" for sut_port in self.sut_node.ports: for tg_port in self.tg_node.ports: if (sut_port.identifier, sut_port.peer) == ( @@ -106,27 +150,42 @@ def _process_links(self) -> None: ) def set_up_suite(self) -> None: - """ - Set up test fixtures common to all test cases; this is done before - any test case is run. + """Set up test fixtures common to all test cases. + + This is done before any test case has been run. """ def tear_down_suite(self) -> None: - """ - Tear down the previously created test fixtures common to all test cases. + """Tear down the previously created test fixtures common to all test cases. + + This is done after all test have been run. """ def set_up_test_case(self) -> None: - """ - Set up test fixtures before each test case. + """Set up test fixtures before each test case. + + This is done before *each* test case. """ def tear_down_test_case(self) -> None: - """ - Tear down the previously created test fixtures after each test case. + """Tear down the previously created test fixtures after each test case. + + This is done after *each* test case. 
""" def configure_testbed_ipv4(self, restore: bool = False) -> None: + """Configure IPv4 addresses on all testbed ports. + + The configured ports are: + + * SUT ingress port, + * SUT egress port, + * TG ingress port, + * TG egress port. + + Args: + restore: If :data:`True`, will remove the configuration instead. + """ delete = True if restore else False enable = False if restore else True self._configure_ipv4_forwarding(enable) @@ -153,11 +212,13 @@ def _configure_ipv4_forwarding(self, enable: bool) -> None: def send_packet_and_capture( self, packet: Packet, duration: float = 1 ) -> list[Packet]: - """ - Send a packet through the appropriate interface and - receive on the appropriate interface. - Modify the packet with l3/l2 addresses corresponding - to the testbed and desired traffic. + """Send and receive `packet` using the associated TG. + + Send `packet` through the appropriate interface and receive on the appropriate interface. + Modify the packet with l3/l2 addresses corresponding to the testbed and desired traffic. + + Returns: + A list of received packets. """ packet = self._adjust_addresses(packet) return self.tg_node.send_packet_and_capture( @@ -165,13 +226,25 @@ def send_packet_and_capture( ) def get_expected_packet(self, packet: Packet) -> Packet: + """Inject the proper L2/L3 addresses into `packet`. + + Args: + packet: The packet to modify. + + Returns: + `packet` with injected L2/L3 addresses. + """ return self._adjust_addresses(packet, expected=True) def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet: - """ + """L2 and L3 address additions in both directions. + Assumptions: - Two links between SUT and TG, one link is TG -> SUT, - the other SUT -> TG. + Two links between SUT and TG, one link is TG -> SUT, the other SUT -> TG. + + Args: + packet: The packet to modify. + expected: If True, the direction is SUT -> TG, otherwise the direction is TG -> SUT. """ if expected: # The packet enters the TG from SUT @@ -197,6 +270,19 @@ def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet: return Ether(packet.build()) def verify(self, condition: bool, failure_description: str) -> None: + """Verify `condition` and handle failures. + + When `condition` is :data:`False`, raise an exception and log the last 10 commands + executed on both the SUT and TG. + + Args: + condition: The condition to check. + failure_description: A short description of the failure + that will be stored in the raised exception. + + Raises: + TestCaseVerifyError: `condition` is :data:`False`. + """ if not condition: self._fail_test_case_verify(failure_description) @@ -216,6 +302,19 @@ def _fail_test_case_verify(self, failure_description: str) -> None: def verify_packets( self, expected_packet: Packet, received_packets: list[Packet] ) -> None: + """Verify that `expected_packet` has been received. + + Go through `received_packets` and check that `expected_packet` is among them. + If not, raise an exception and log the last 10 commands + executed on both the SUT and TG. + + Args: + expected_packet: The packet we're expecting to receive. + received_packets: The packets where we're looking for `expected_packet`. + + Raises: + TestCaseVerifyError: `expected_packet` is not among `received_packets`. 
+ """ for received_packet in received_packets: if self._compare_packets(expected_packet, received_packet): break @@ -303,10 +402,14 @@ def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool: return True def run(self) -> None: - """ - Setup, execute and teardown the whole suite. - Suite execution consists of running all test cases scheduled to be executed. - A test cast run consists of setup, execution and teardown of said test case. + """Set up, execute and tear down the whole suite. + + Test suite execution consists of running all test cases scheduled to be executed. + A test case run consists of setup, execution and teardown of said test case. + + Record the setup and the teardown and handle failures. + + The list of scheduled test cases is constructed when creating the :class:`TestSuite` object. """ test_suite_name = self.__class__.__name__ @@ -338,9 +441,7 @@ def run(self) -> None: raise BlockingTestSuiteError(test_suite_name) def _execute_test_suite(self) -> None: - """ - Execute all test cases scheduled to be executed in this suite. - """ + """Execute all test cases scheduled to be executed in this suite.""" if self._func: for test_case_method in self._get_functional_test_cases(): test_case_name = test_case_method.__name__ @@ -357,14 +458,18 @@ def _execute_test_suite(self) -> None: self._run_test_case(test_case_method, test_case_result) def _get_functional_test_cases(self) -> list[MethodType]: - """ - Get all functional test cases. + """Get all functional test cases defined in this TestSuite. + + Returns: + The list of functional test cases of this TestSuite. """ return self._get_test_cases(r"test_(?!perf_)") def _get_test_cases(self, test_case_regex: str) -> list[MethodType]: - """ - Return a list of test cases matching test_case_regex. + """Return a list of test cases matching test_case_regex. + + Returns: + The list of test cases matching test_case_regex of this TestSuite. """ self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.") filtered_test_cases = [] @@ -378,9 +483,7 @@ def _get_test_cases(self, test_case_regex: str) -> list[MethodType]: return filtered_test_cases def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool: - """ - Check whether the test case should be executed. - """ + """Check whether the test case should be scheduled to be executed.""" match = bool(re.match(test_case_regex, test_case_name)) if self._test_cases_to_run: return match and test_case_name in self._test_cases_to_run @@ -390,9 +493,9 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool def _run_test_case( self, test_case_method: MethodType, test_case_result: TestCaseResult ) -> None: - """ - Setup, execute and teardown a test case in this suite. - Exceptions are caught and recorded in logs and results. + """Setup, execute and teardown a test case in this suite. + + Record the result of the setup and the teardown and handle failures. """ test_case_name = test_case_method.__name__ @@ -427,9 +530,7 @@ def _run_test_case( def _execute_test_case( self, test_case_method: MethodType, test_case_result: TestCaseResult ) -> None: - """ - Execute one test case and handle failures. 
- """ + """Execute one test case, record the result and handle failures.""" test_case_name = test_case_method.__name__ try: self._logger.info(f"Starting test case execution: {test_case_name}") @@ -452,6 +553,18 @@ def _execute_test_case( def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]: + r"""Find all :class:`TestSuite`\s in a Python module. + + Args: + testsuite_module_path: The path to the Python module. + + Returns: + The list of :class:`TestSuite`\s found within the Python module. + + Raises: + ConfigurationError: The test suite module was not found. + """ + def is_test_suite(object: Any) -> bool: try: if issubclass(object, TestSuite) and object is not TestSuite: From patchwork Wed Nov 8 12:53:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133980 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id CFE04432D6; Wed, 8 Nov 2023 13:54:43 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A9BDA42E29; Wed, 8 Nov 2023 13:53:40 +0100 (CET) Received: from mail-ed1-f44.google.com (mail-ed1-f44.google.com [209.85.208.44]) by mails.dpdk.org (Postfix) with ESMTP id C510D42E29 for ; Wed, 8 Nov 2023 13:53:36 +0100 (CET) Received: by mail-ed1-f44.google.com with SMTP id 4fb4d7f45d1cf-5401bab7525so11718083a12.2 for ; Wed, 08 Nov 2023 04:53:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448016; x=1700052816; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=yeCX8mpZe2CUPZYtSmATPJDvrrVQzkKPDnXl4suDAoA=; b=erXxGEUgTkq0XRQJ97rZjLJ/5dDHZMANVQvXWGywuZVc4D4Iq2FnDE3UJx6hrMRMgP 93FiLQ5LIuXGeqevrv79rK3JFosuYGRVPWxnvGQY7EzajR31mtZb3Ip+ht2l02bEau5d AMjOEsxr5RjqCMtI5Cq5kcNkIhYKA3aIaQo+CntwWjc3LcdK4VRXAIYdMfpeyPEQakPH DG7LpUrzYOR30SKDdKvomjtS+WBKJnt1c+d71SVW+mRUxuL37Qb0yJguPKbDRiLmotlm Zu5UZhmnAZ+zbiMqDsijYR0AS4Wu4T27nOVVCvOUqtK0JrwQraGfMM1L+u85c4zxsRNK L/eQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448016; x=1700052816; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=yeCX8mpZe2CUPZYtSmATPJDvrrVQzkKPDnXl4suDAoA=; b=HztFJgcyapj6TizRQV+Gq3VBlt+6LRKctLWYIGK2HXDRkGiJWLNWRnuiCGNvV95H5V aYTpjJAraWK26Y6fC4s5xVO5W50lM25KZGewJVmyXPl1w8OkYxeOr0JgOTtSAMjFuKht fw91NpB0A7Gpc9IRmtxdFHp+7b8dv908CmpUumAHX1Uc6vE4msXy7yuwbiCU2OAgTH8h EkcO8ZQji0ILkVX7GGudqzIoLnbjuakk8smFyfYkQ9+MCWkX3X57ibC31YR+z0DYr0Y7 CFicIKpWvKuWwi+zsbmHX7xIeWpH89GLhzSXZ+owaL/bdYmnVXCvlV2SV9kFDHkncAbD RFkQ== X-Gm-Message-State: AOJu0YyPZ86uYR6T/vQTMpLx9PHM9G8XJQ4byEey6vhN7CR252TgQ5kW Ute4Tic1NP1pnEjSyzwteZkT+Q== X-Google-Smtp-Source: AGHT+IEqneGLM3tG6AFlcuS5zwy5BLnhnXklApJ0fdLTAui101RgFgicQv2a/Ra9Qsi5QWT8X/Ykcg== X-Received: by 2002:a50:9544:0:b0:53f:738c:3eeb with SMTP id v4-20020a509544000000b0053f738c3eebmr1512432eda.28.1699448016391; Wed, 08 Nov 2023 04:53:36 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. 
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:35 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 09/23] dts: test result docstring update Date: Wed, 8 Nov 2023 13:53:10 +0100 Message-Id: <20231108125324.191005-9-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/test_result.py | 292 ++++++++++++++++++++++++++++------- 1 file changed, 234 insertions(+), 58 deletions(-) diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index 603e18872c..f553948454 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -2,8 +2,25 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire -""" -Generic result container and reporters +r"""Record and process DTS results. + +The results are recorded in a hierarchical manner: + + * :class:`DTSResult` contains + * :class:`ExecutionResult` contains + * :class:`BuildTargetResult` contains + * :class:`TestSuiteResult` contains + * :class:`TestCaseResult` + +Each result may contain multiple lower level results, e.g. there are multiple +:class:`TestSuiteResult`\s in a :class:`BuildTargetResult`. +The results have common parts, such as setup and teardown results, captured in :class:`BaseResult`, +which also defines some common behaviors in its methods. + +Each result class has its own idiosyncrasies which they implement in overridden methods. + +The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment +variable modify the directory where the files with results will be stored. """ import os.path @@ -26,26 +43,34 @@ class Result(Enum): - """ - An Enum defining the possible states that - a setup, a teardown or a test case may end up in. - """ + """The possible states that a setup, a teardown or a test case may end up in.""" + #: PASS = auto() + #: FAIL = auto() + #: ERROR = auto() + #: SKIP = auto() def __bool__(self) -> bool: + """Only PASS is True.""" return self is self.PASS class FixtureResult(object): - """ - A record that stored the result of a setup or a teardown. - The default is FAIL because immediately after creating the object - the setup of the corresponding stage will be executed, which also guarantees - the execution of teardown. + """A record that stores the result of a setup or a teardown. + + FAIL is a sensible default since it prevents false positives + (which could happen if the default was TRUE). + + Preventing false positives or other false results is preferable since a failure + is mostly likely to be investigated (the other false results may not be investigated at all). 
+ + Attributes: + result: The associated result. + error: The error in case of a failure. """ result: Result @@ -56,21 +81,32 @@ def __init__( result: Result = Result.FAIL, error: Exception | None = None, ): + """Initialize the constructor with the fixture result and store a possible error. + + Args: + result: The result to store. + error: The error which happened when a failure occurred. + """ self.result = result self.error = error def __bool__(self) -> bool: + """A wrapper around the stored :class:`Result`.""" return bool(self.result) class Statistics(dict): - """ - A helper class used to store the number of test cases by its result - along a few other basic information. - Using a dict provides a convenient way to format the data. + """How many test cases ended in which result state along some other basic information. + + Subclassing :class:`dict` provides a convenient way to format the data. """ def __init__(self, dpdk_version: str | None): + """Extend the constructor with relevant keys. + + Args: + dpdk_version: The version of tested DPDK. + """ super(Statistics, self).__init__() for result in Result: self[result.name] = 0 @@ -78,8 +114,17 @@ def __init__(self, dpdk_version: str | None): self["DPDK VERSION"] = dpdk_version def __iadd__(self, other: Result) -> "Statistics": - """ - Add a Result to the final count. + """Add a Result to the final count. + + Example: + stats: Statistics = Statistics() # empty Statistics + stats += Result.PASS # add a Result to `stats` + + Args: + other: The Result to add to this statistics object. + + Returns: + The modified statistics object. """ self[other.name] += 1 self["PASS RATE"] = ( @@ -90,9 +135,7 @@ def __iadd__(self, other: Result) -> "Statistics": return self def __str__(self) -> str: - """ - Provide a string representation of the data. - """ + """Each line contains the formatted key = value pair.""" stats_str = "" for key, value in self.items(): stats_str += f"{key:<12} = {value}\n" @@ -102,10 +145,16 @@ def __str__(self) -> str: class BaseResult(object): - """ - The Base class for all results. Stores the results of - the setup and teardown portions of the corresponding stage - and a list of results from each inner stage in _inner_results. + """Common data and behavior of DTS results. + + Stores the results of the setup and teardown portions of the corresponding stage. + The hierarchical nature of DTS results is captured recursively in an internal list. + A stage is each level in this particular hierarchy (pre-execution or the top-most level, + execution, build target, test suite and test case.) + + Attributes: + setup_result: The result of the setup of the particular stage. + teardown_result: The results of the teardown of the particular stage. """ setup_result: FixtureResult @@ -113,15 +162,28 @@ class BaseResult(object): _inner_results: MutableSequence["BaseResult"] def __init__(self): + """Initialize the constructor.""" self.setup_result = FixtureResult() self.teardown_result = FixtureResult() self._inner_results = [] def update_setup(self, result: Result, error: Exception | None = None) -> None: + """Store the setup result. + + Args: + result: The result of the setup. + error: The error that occurred in case of a failure. + """ self.setup_result.result = result self.setup_result.error = error def update_teardown(self, result: Result, error: Exception | None = None) -> None: + """Store the teardown result. + + Args: + result: The result of the teardown. + error: The error that occurred in case of a failure. 
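Taken together, the classes above behave like this in isolation (the DPDK version string is made up):

    from framework.test_result import Result, Statistics

    assert bool(Result.PASS) is True    # only PASS is truthy
    assert bool(Result.FAIL) is False

    stats = Statistics(dpdk_version="23.11.0")
    stats += Result.PASS
    stats += Result.FAIL
    print(stats)    # one formatted "key = value" line per entry, including PASS RATE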
+ """ self.teardown_result.result = result self.teardown_result.error = error @@ -141,27 +203,55 @@ def _get_inner_errors(self) -> list[Exception]: ] def get_errors(self) -> list[Exception]: + """Compile errors from the whole result hierarchy. + + Returns: + The errors from setup, teardown and all errors found in the whole result hierarchy. + """ return self._get_setup_teardown_errors() + self._get_inner_errors() def add_stats(self, statistics: Statistics) -> None: + """Collate stats from the whole result hierarchy. + + Args: + statistics: The :class:`Statistics` object where the stats will be collated. + """ for inner_result in self._inner_results: inner_result.add_stats(statistics) class TestCaseResult(BaseResult, FixtureResult): - """ - The test case specific result. - Stores the result of the actual test case. - Also stores the test case name. + r"""The test case specific result. + + Stores the result of the actual test case. This is done by adding an extra superclass + in :class:`FixtureResult`. The setup and teardown results are :class:`FixtureResult`\s and + the class is itself a record of the test case. + + Attributes: + test_case_name: The test case name. """ test_case_name: str def __init__(self, test_case_name: str): + """Extend the constructor with `test_case_name`. + + Args: + test_case_name: The test case's name. + """ super(TestCaseResult, self).__init__() self.test_case_name = test_case_name def update(self, result: Result, error: Exception | None = None) -> None: + """Update the test case result. + + This updates the result of the test case itself and doesn't affect + the results of the setup and teardown steps in any way. + + Args: + result: The result of the test case. + error: The error that occurred in case of a failure. + """ self.result = result self.error = error @@ -171,38 +261,66 @@ def _get_inner_errors(self) -> list[Exception]: return [] def add_stats(self, statistics: Statistics) -> None: + r"""Add the test case result to statistics. + + The base method goes through the hierarchy recursively and this method is here to stop + the recursion, as the :class:`TestCaseResult`\s are the leaves of the hierarchy tree. + + Args: + statistics: The :class:`Statistics` object where the stats will be added. + """ statistics += self.result def __bool__(self) -> bool: + """The test case passed only if setup, teardown and the test case itself passed.""" return ( bool(self.setup_result) and bool(self.teardown_result) and bool(self.result) ) class TestSuiteResult(BaseResult): - """ - The test suite specific result. - The _inner_results list stores results of test cases in a given test suite. - Also stores the test suite name. + """The test suite specific result. + + The internal list stores the results of all test cases in a given test suite. + + Attributes: + suite_name: The test suite name. """ suite_name: str def __init__(self, suite_name: str): + """Extend the constructor with `suite_name`. + + Args: + suite_name: The test suite's name. + """ super(TestSuiteResult, self).__init__() self.suite_name = suite_name def add_test_case(self, test_case_name: str) -> TestCaseResult: + """Add and return the inner result (test case). + + Returns: + The test case's result. + """ test_case_result = TestCaseResult(test_case_name) self._inner_results.append(test_case_result) return test_case_result class BuildTargetResult(BaseResult): - """ - The build target specific result. - The _inner_results list stores results of test suites in a given build target. 
- Also stores build target specifics, such as compiler used to build DPDK. + """The build target specific result. + + The internal list stores the results of all test suites in a given build target. + + Attributes: + arch: The DPDK build target architecture. + os: The DPDK build target operating system. + cpu: The DPDK build target CPU. + compiler: The DPDK build target compiler. + compiler_version: The DPDK build target compiler version. + dpdk_version: The built DPDK version. """ arch: Architecture @@ -213,6 +331,11 @@ class BuildTargetResult(BaseResult): dpdk_version: str | None def __init__(self, build_target: BuildTargetConfiguration): + """Extend the constructor with the `build_target`'s build target config. + + Args: + build_target: The build target's test run configuration. + """ super(BuildTargetResult, self).__init__() self.arch = build_target.arch self.os = build_target.os @@ -222,20 +345,35 @@ def __init__(self, build_target: BuildTargetConfiguration): self.dpdk_version = None def add_build_target_info(self, versions: BuildTargetInfo) -> None: + """Add information about the build target gathered at runtime. + + Args: + versions: The additional information. + """ self.compiler_version = versions.compiler_version self.dpdk_version = versions.dpdk_version def add_test_suite(self, test_suite_name: str) -> TestSuiteResult: + """Add and return the inner result (test suite). + + Returns: + The test suite's result. + """ test_suite_result = TestSuiteResult(test_suite_name) self._inner_results.append(test_suite_result) return test_suite_result class ExecutionResult(BaseResult): - """ - The execution specific result. - The _inner_results list stores results of build targets in a given execution. - Also stores the SUT node configuration. + """The execution specific result. + + The internal list stores the results of all build targets in a given execution. + + Attributes: + sut_node: The SUT node used in the execution. + sut_os_name: The operating system of the SUT node. + sut_os_version: The operating system version of the SUT node. + sut_kernel_version: The operating system kernel version of the SUT node. """ sut_node: NodeConfiguration @@ -244,36 +382,55 @@ class ExecutionResult(BaseResult): sut_kernel_version: str def __init__(self, sut_node: NodeConfiguration): + """Extend the constructor with the `sut_node`'s config. + + Args: + sut_node: The SUT node's test run configuration used in the execution. + """ super(ExecutionResult, self).__init__() self.sut_node = sut_node def add_build_target( self, build_target: BuildTargetConfiguration ) -> BuildTargetResult: + """Add and return the inner result (build target). + + Args: + build_target: The build target's test run configuration. + + Returns: + The build target's result. + """ build_target_result = BuildTargetResult(build_target) self._inner_results.append(build_target_result) return build_target_result def add_sut_info(self, sut_info: NodeInfo) -> None: + """Add SUT information gathered at runtime. + + Args: + sut_info: The additional SUT node information. + """ self.sut_os_name = sut_info.os_name self.sut_os_version = sut_info.os_version self.sut_kernel_version = sut_info.kernel_version class DTSResult(BaseResult): - """ - Stores environment information and test results from a DTS run, which are: - * Execution level information, such as SUT and TG hardware. - * Build target level information, such as compiler, target OS and cpu. - * Test suite results. - * All errors that are caught and recorded during DTS execution. 
+ """Stores environment information and test results from a DTS run. - The information is stored in nested objects. + * Execution level information, such as testbed and the test suite list, + * Build target level information, such as compiler, target OS and cpu, + * Test suite and test case results, + * All errors that are caught and recorded during DTS execution. - The class is capable of computing the return code used to exit DTS with - from the stored error. + The information is stored hierarchically. This is the first level of the hierarchy + and as such is where the data form the whole hierarchy is collated or processed. - It also provides a brief statistical summary of passed/failed test cases. + The internal list stores the results of all executions. + + Attributes: + dpdk_version: The DPDK version to record. """ dpdk_version: str | None @@ -284,6 +441,11 @@ class DTSResult(BaseResult): _stats_filename: str def __init__(self, logger: DTSLOG): + """Extend the constructor with top-level specifics. + + Args: + logger: The logger instance the whole result will use. + """ super(DTSResult, self).__init__() self.dpdk_version = None self._logger = logger @@ -293,21 +455,33 @@ def __init__(self, logger: DTSLOG): self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt") def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult: + """Add and return the inner result (execution). + + Args: + sut_node: The SUT node's test run configuration. + + Returns: + The execution's result. + """ execution_result = ExecutionResult(sut_node) self._inner_results.append(execution_result) return execution_result def add_error(self, error: Exception) -> None: + """Record an error that occurred outside any execution. + + Args: + error: The exception to record. + """ self._errors.append(error) def process(self) -> None: - """ - Process the data after a DTS run. - The data is added to nested objects during runtime and this parent object - is not updated at that time. This requires us to process the nested data - after it's all been gathered. + """Process the data after a whole DTS run. + + The data is added to inner objects during runtime and this object is not updated + at that time. This requires us to process the inner data after it's all been gathered. - The processing gathers all errors and the result statistics of test cases. + The processing gathers all errors and the statistics of test case results. """ self._errors += self.get_errors() if self._errors and self._logger: @@ -321,8 +495,10 @@ def process(self) -> None: stats_file.write(str(self._stats_result)) def get_return_code(self) -> int: - """ - Go through all stored Exceptions and return the highest error code found. + """Go through all stored Exceptions and return the final DTS error code. + + Returns: + The highest error code found. 
""" for error in self._errors: error_return_code = ErrorSeverity.GENERIC_ERR From patchwork Wed Nov 8 12:53:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133981 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7FC09432D6; Wed, 8 Nov 2023 13:54:54 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4EA4842E4A; Wed, 8 Nov 2023 13:53:42 +0100 (CET) Received: from mail-ed1-f51.google.com (mail-ed1-f51.google.com [209.85.208.51]) by mails.dpdk.org (Postfix) with ESMTP id 1C5EE42E33 for ; Wed, 8 Nov 2023 13:53:38 +0100 (CET) Received: by mail-ed1-f51.google.com with SMTP id 4fb4d7f45d1cf-53dfc28a2afso11516111a12.1 for ; Wed, 08 Nov 2023 04:53:38 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448018; x=1700052818; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=UPLVAuBLaKUHh0e7ghZ5KBaj+aS4lDT4JMKEmjiTvQs=; b=X9Iot2wFeh2C8T6pDXIDTYxszNHxQCX8e2xNXnq9tlNFNuvRtSKziZhqabLQJ63urA SguUOU4rVt9IgdLKXyyn/3zfgR1Xke4sXUvW+UeAPjWKLuaWB2UooJTx4LzwzbgW2l4o yZp5hrGR51X3GKf+Z/SiVALbZYeILtLBppNdT1kBeACHxxgsSp7jVgNO7hvDuPcoln1R lqUJcST+3CEeZPskqTOcmOKVyaZ1MMCj0r2XmIX/JUb72Y/uo9sY7nOQaFlPFNYVzCmA /jAXsyS/0vFxQquU+BuEwUeu4qIJA0+Kdsr6NQFWd5So6Xl1Kv98PQcgRgnGhDLb6WoN fzjQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448018; x=1700052818; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=UPLVAuBLaKUHh0e7ghZ5KBaj+aS4lDT4JMKEmjiTvQs=; b=tRRgA5ZCNRZXQ6TX2DUZIv7p54OrZwquQhLZ/oS8ydMeeIzufDVojcDh3szjcviU0W 9FozsSjHxL9YAdDRaiTUEhXovViWA1QcI3smcmhq3+Zc3CSgp7OmdMth7veA2rxvpeal IUVfccoxcrZj4ovkDiVN7+KF1gkcJdW5T36+fe7QCbyfbJuko3CMGvlQGpxqyfqq7fRJ 0PunhMdC6a3Z0LHWcvLFfp3Ml2k1qsCgwdDda/gi8xmSuZuzqVMAlz457SqsdJC3kUb8 hJLGCbsEMeC1hYyYefVzPXBiWYLw1MhvsvTPaEOwqLR0eIZ/UYxGOD6IrRoAsBXUF8sC bP1w== X-Gm-Message-State: AOJu0YwUp9C1OxE7Fn6HAU0kKvk7ibP/Pzac4YVTEzZKhqLbAdKmj2P5 WftOt0MOc68d4vnUh2Rqbw/dFi/Vce2BR3tOhBNjuQ== X-Google-Smtp-Source: AGHT+IGyVRVQ0gw3n3d6LE6oXMNdFtvaVmBwdexnC++qS8PWcyiOdoTYjJNgu5vBkPL4Dgd76Lm65Q== X-Received: by 2002:a50:c05a:0:b0:53e:29c1:ae1f with SMTP id u26-20020a50c05a000000b0053e29c1ae1fmr1447494edd.19.1699448017702; Wed, 08 Nov 2023 04:53:37 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. 
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.36 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:36 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 10/23] dts: config docstring update Date: Wed, 8 Nov 2023 13:53:11 +0100 Message-Id: <20231108125324.191005-10-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/config/__init__.py | 371 ++++++++++++++++++++++++++----- dts/framework/config/types.py | 132 +++++++++++ 2 files changed, 446 insertions(+), 57 deletions(-) create mode 100644 dts/framework/config/types.py diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py index 2044c82611..0aa149a53d 100644 --- a/dts/framework/config/__init__.py +++ b/dts/framework/config/__init__.py @@ -3,8 +3,34 @@ # Copyright(c) 2022-2023 University of New Hampshire # Copyright(c) 2023 PANTHEON.tech s.r.o. -""" -Yaml config parsing methods +"""Testbed configuration and test suite specification. + +This package offers classes that hold real-time information about the testbed, hold test run +configuration describing the tested testbed and a loader function, :func:`load_config`, which loads +the YAML test run configuration file +and validates it according to :download:`the schema `. + +The YAML test run configuration file is parsed into a dictionary, parts of which are used throughout +this package. The allowed keys and types inside this dictionary are defined in +the :doc:`types ` module. + +The test run configuration has two main sections: + + * The :class:`ExecutionConfiguration` which defines what tests are going to be run + and how DPDK will be built. It also references the testbed where these tests and DPDK + are going to be run, + * The nodes of the testbed are defined in the other section, + a :class:`list` of :class:`NodeConfiguration` objects. + +The real-time information about testbed is supposed to be gathered at runtime. + +The classes defined in this package make heavy use of :mod:`dataclasses`. +All of them use slots and are frozen: + + * Slots enables some optimizations, by pre-allocating space for the defined + attributes in the underlying data structure, + * Frozen makes the object immutable. This enables further optimizations, + and makes it thread safe should we every want to move in that direction. 
""" import json @@ -12,11 +38,20 @@ import pathlib from dataclasses import dataclass from enum import auto, unique -from typing import Any, TypedDict, Union +from typing import Union import warlock # type: ignore[import] import yaml +from framework.config.types import ( + BuildTargetConfigDict, + ConfigurationDict, + ExecutionConfigDict, + NodeConfigDict, + PortConfigDict, + TestSuiteConfigDict, + TrafficGeneratorConfigDict, +) from framework.exception import ConfigurationError from framework.settings import SETTINGS from framework.utils import StrEnum @@ -24,55 +59,97 @@ @unique class Architecture(StrEnum): + r"""The supported architectures of :class:`~framework.testbed_model.node.Node`\s.""" + + #: i686 = auto() + #: x86_64 = auto() + #: x86_32 = auto() + #: arm64 = auto() + #: ppc64le = auto() @unique class OS(StrEnum): + r"""The supported operating systems of :class:`~framework.testbed_model.node.Node`\s.""" + + #: linux = auto() + #: freebsd = auto() + #: windows = auto() @unique class CPUType(StrEnum): + r"""The supported CPUs of :class:`~framework.testbed_model.node.Node`\s.""" + + #: native = auto() + #: armv8a = auto() + #: dpaa2 = auto() + #: thunderx = auto() + #: xgene1 = auto() @unique class Compiler(StrEnum): + r"""The supported compilers of :class:`~framework.testbed_model.node.Node`\s.""" + + #: gcc = auto() + #: clang = auto() + #: icc = auto() + #: msvc = auto() @unique class TrafficGeneratorType(StrEnum): + """The supported traffic generators.""" + + #: SCAPY = auto() -# Slots enables some optimizations, by pre-allocating space for the defined -# attributes in the underlying data structure. -# -# Frozen makes the object immutable. This enables further optimizations, -# and makes it thread safe should we every want to move in that direction. @dataclass(slots=True, frozen=True) class HugepageConfiguration: + r"""The hugepage configuration of :class:`~framework.testbed_model.node.Node`\s. + + Attributes: + amount: The number of hugepages. + force_first_numa: If :data:`True`, the hugepages will be configured on the first NUMA node. + """ + amount: int force_first_numa: bool @dataclass(slots=True, frozen=True) class PortConfig: + r"""The port configuration of :class:`~framework.testbed_model.node.Node`\s. + + Attributes: + node: The :class:`~framework.testbed_model.node.Node` where this port exists. + pci: The PCI address of the port. + os_driver_for_dpdk: The operating system driver name for use with DPDK. + os_driver: The operating system driver name when the operating system controls the port. + peer_node: The :class:`~framework.testbed_model.node.Node` of the port + connected to this port. + peer_pci: The PCI address of the port connected to this port. + """ + node: str pci: str os_driver_for_dpdk: str @@ -81,18 +158,44 @@ class PortConfig: peer_pci: str @staticmethod - def from_dict(node: str, d: dict) -> "PortConfig": + def from_dict(node: str, d: PortConfigDict) -> "PortConfig": + """A convenience method that creates the object from fewer inputs. + + Args: + node: The node where this port exists. + d: The configuration dictionary. + + Returns: + The port configuration instance. + """ return PortConfig(node=node, **d) @dataclass(slots=True, frozen=True) class TrafficGeneratorConfig: + """The configuration of traffic generators. + + The class will be expanded when more configuration is needed. + + Attributes: + traffic_generator_type: The type of the traffic generator. 
+ """ + traffic_generator_type: TrafficGeneratorType @staticmethod - def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig": - # This looks useless now, but is designed to allow expansion to traffic - # generators that require more configuration later. + def from_dict(d: TrafficGeneratorConfigDict) -> "ScapyTrafficGeneratorConfig": + """A convenience method that produces traffic generator config of the proper type. + + Args: + d: The configuration dictionary. + + Returns: + The traffic generator configuration instance. + + Raises: + ConfigurationError: An unknown traffic generator type was encountered. + """ match TrafficGeneratorType(d["type"]): case TrafficGeneratorType.SCAPY: return ScapyTrafficGeneratorConfig( @@ -106,11 +209,31 @@ def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig": @dataclass(slots=True, frozen=True) class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig): + """Scapy traffic generator specific configuration.""" + pass @dataclass(slots=True, frozen=True) class NodeConfiguration: + r"""The configuration of :class:`~framework.testbed_model.node.Node`\s. + + Attributes: + name: The name of the :class:`~framework.testbed_model.node.Node`. + hostname: The hostname of the :class:`~framework.testbed_model.node.Node`. + Can be an IP or a domain name. + user: The name of the user used to connect to + the :class:`~framework.testbed_model.node.Node`. + password: The password of the user. The use of passwords is heavily discouraged. + Please use keys instead. + arch: The architecture of the :class:`~framework.testbed_model.node.Node`. + os: The operating system of the :class:`~framework.testbed_model.node.Node`. + lcores: A comma delimited list of logical cores to use when running DPDK. + use_first_core: If :data:`True`, the first logical core won't be used. + hugepages: An optional hugepage configuration. + ports: The ports that can be used in testing. + """ + name: str hostname: str user: str @@ -123,57 +246,91 @@ class NodeConfiguration: ports: list[PortConfig] @staticmethod - def from_dict(d: dict) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]: - hugepage_config = d.get("hugepages") - if hugepage_config: - if "force_first_numa" not in hugepage_config: - hugepage_config["force_first_numa"] = False - hugepage_config = HugepageConfiguration(**hugepage_config) - - common_config = { - "name": d["name"], - "hostname": d["hostname"], - "user": d["user"], - "password": d.get("password"), - "arch": Architecture(d["arch"]), - "os": OS(d["os"]), - "lcores": d.get("lcores", "1"), - "use_first_core": d.get("use_first_core", False), - "hugepages": hugepage_config, - "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]], - } - + def from_dict( + d: NodeConfigDict, + ) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]: + """A convenience method that processes the inputs before creating a specialized instance. + + Args: + d: The configuration dictionary. + + Returns: + Either an SUT or TG configuration instance. 
+ """ + hugepage_config = None + if "hugepages" in d: + hugepage_config_dict = d["hugepages"] + if "force_first_numa" not in hugepage_config_dict: + hugepage_config_dict["force_first_numa"] = False + hugepage_config = HugepageConfiguration(**hugepage_config_dict) + + # The calls here contain duplicated code which is here because Mypy doesn't + # properly support dictionary unpacking with TypedDicts if "traffic_generator" in d: return TGNodeConfiguration( + name=d["name"], + hostname=d["hostname"], + user=d["user"], + password=d.get("password"), + arch=Architecture(d["arch"]), + os=OS(d["os"]), + lcores=d.get("lcores", "1"), + use_first_core=d.get("use_first_core", False), + hugepages=hugepage_config, + ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]], traffic_generator=TrafficGeneratorConfig.from_dict( d["traffic_generator"] ), - **common_config, ) else: return SutNodeConfiguration( - memory_channels=d.get("memory_channels", 1), **common_config + name=d["name"], + hostname=d["hostname"], + user=d["user"], + password=d.get("password"), + arch=Architecture(d["arch"]), + os=OS(d["os"]), + lcores=d.get("lcores", "1"), + use_first_core=d.get("use_first_core", False), + hugepages=hugepage_config, + ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]], + memory_channels=d.get("memory_channels", 1), ) @dataclass(slots=True, frozen=True) class SutNodeConfiguration(NodeConfiguration): + """:class:`~framework.testbed_model.sut_node.SutNode` specific configuration. + + Attributes: + memory_channels: The number of memory channels to use when running DPDK. + """ + memory_channels: int @dataclass(slots=True, frozen=True) class TGNodeConfiguration(NodeConfiguration): + """:class:`~framework.testbed_model.tg_node.TGNode` specific configuration. + + Attributes: + traffic_generator: The configuration of the traffic generator present on the TG node. + """ + traffic_generator: ScapyTrafficGeneratorConfig @dataclass(slots=True, frozen=True) class NodeInfo: - """Class to hold important versions within the node. - - This class, unlike the NodeConfiguration class, cannot be generated at the start. - This is because we need to initialize a connection with the node before we can - collect the information needed in this class. Therefore, it cannot be a part of - the configuration class above. + """Supplemental node information. + + Attributes: + os_name: The name of the running operating system of + the :class:`~framework.testbed_model.node.Node`. + os_version: The version of the running operating system of + the :class:`~framework.testbed_model.node.Node`. + kernel_version: The kernel version of the running operating system of + the :class:`~framework.testbed_model.node.Node`. """ os_name: str @@ -183,6 +340,20 @@ class NodeInfo: @dataclass(slots=True, frozen=True) class BuildTargetConfiguration: + """DPDK build configuration. + + The configuration used for building DPDK. + + Attributes: + arch: The target architecture to build for. + os: The target os to build for. + cpu: The target CPU to build for. + compiler: The compiler executable to use. + compiler_wrapper: This string will be put in front of the compiler when + executing the build. Useful for adding wrapper commands, such as ``ccache``. + name: The name of the compiler. 
+ """ + arch: Architecture os: OS cpu: CPUType @@ -191,7 +362,18 @@ class BuildTargetConfiguration: name: str @staticmethod - def from_dict(d: dict) -> "BuildTargetConfiguration": + def from_dict(d: BuildTargetConfigDict) -> "BuildTargetConfiguration": + r"""A convenience method that processes the inputs before creating an instance. + + `arch`, `os`, `cpu` and `compiler` are converted to :class:`Enum`\s and + `name` is constructed from `arch`, `os`, `cpu` and `compiler`. + + Args: + d: The configuration dictionary. + + Returns: + The build target configuration instance. + """ return BuildTargetConfiguration( arch=Architecture(d["arch"]), os=OS(d["os"]), @@ -204,23 +386,29 @@ def from_dict(d: dict) -> "BuildTargetConfiguration": @dataclass(slots=True, frozen=True) class BuildTargetInfo: - """Class to hold important versions within the build target. + """Various versions and other information about a build target. - This is very similar to the NodeInfo class, it just instead holds information - for the build target. + Attributes: + dpdk_version: The DPDK version that was built. + compiler_version: The version of the compiler used to build DPDK. """ dpdk_version: str compiler_version: str -class TestSuiteConfigDict(TypedDict): - suite: str - cases: list[str] - - @dataclass(slots=True, frozen=True) class TestSuiteConfig: + """Test suite configuration. + + Information about a single test suite to be executed. + + Attributes: + test_suite: The name of the test suite module without the starting ``TestSuite_``. + test_cases: The names of test cases from this test suite to execute. + If empty, all test cases will be executed. + """ + test_suite: str test_cases: list[str] @@ -228,6 +416,14 @@ class TestSuiteConfig: def from_dict( entry: str | TestSuiteConfigDict, ) -> "TestSuiteConfig": + """Create an instance from two different types. + + Args: + entry: Either a suite name or a dictionary containing the config. + + Returns: + The test suite configuration instance. + """ if isinstance(entry, str): return TestSuiteConfig(test_suite=entry, test_cases=[]) elif isinstance(entry, dict): @@ -238,19 +434,49 @@ def from_dict( @dataclass(slots=True, frozen=True) class ExecutionConfiguration: + """The configuration of an execution. + + The configuration contains testbed information, what tests to execute + and with what DPDK build. + + Attributes: + build_targets: A list of DPDK builds to test. + perf: Whether to run performance tests. + func: Whether to run functional tests. + skip_smoke_tests: Whether to skip smoke tests. + test_suites: The names of test suites and/or test cases to execute. + system_under_test_node: The SUT node to use in this execution. + traffic_generator_node: The TG node to use in this execution. + vdevs: The names of virtual devices to test. + """ + build_targets: list[BuildTargetConfiguration] perf: bool func: bool + skip_smoke_tests: bool test_suites: list[TestSuiteConfig] system_under_test_node: SutNodeConfiguration traffic_generator_node: TGNodeConfiguration vdevs: list[str] - skip_smoke_tests: bool @staticmethod def from_dict( - d: dict, node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]] + d: ExecutionConfigDict, + node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]], ) -> "ExecutionConfiguration": + """A convenience method that processes the inputs before creating an instance. + + The build target and the test suite config is transformed into their respective objects. + SUT and TG configuration are taken from `node_map`. 
The other (:class:`bool`) attributes are + just stored. + + Args: + d: The configuration dictionary. + node_map: A dictionary mapping node names to their config objects. + + Returns: + The execution configuration instance. + """ build_targets: list[BuildTargetConfiguration] = list( map(BuildTargetConfiguration.from_dict, d["build_targets"]) ) @@ -291,10 +517,31 @@ def from_dict( @dataclass(slots=True, frozen=True) class Configuration: + """DTS testbed and test configuration. + + The node configuration is not stored in this object. Rather, all used node configurations + are stored inside the execution configuration where the nodes are actually used. + + Attributes: + executions: Execution configurations. + """ + executions: list[ExecutionConfiguration] @staticmethod - def from_dict(d: dict) -> "Configuration": + def from_dict(d: ConfigurationDict) -> "Configuration": + """A convenience method that processes the inputs before creating an instance. + + Build target and test suite config is transformed into their respective objects. + SUT and TG configuration are taken from `node_map`. The other (:class:`bool`) attributes are + just stored. + + Args: + d: The configuration dictionary. + + Returns: + The whole configuration instance. + """ nodes: list[Union[SutNodeConfiguration | TGNodeConfiguration]] = list( map(NodeConfiguration.from_dict, d["nodes"]) ) @@ -313,9 +560,17 @@ def from_dict(d: dict) -> "Configuration": def load_config() -> Configuration: - """ - Loads the configuration file and the configuration file schema, - validates the configuration file, and creates a configuration object. + """Load DTS test run configuration from a file. + + Load the YAML test run configuration file + and :download:`the configuration file schema `, + validate the test run configuration file, and create a test run configuration object. + + The YAML test run configuration file is specified in the :option:`--config-file` command line + argument or the :envvar:`DTS_CFG_FILE` environment variable. + + Returns: + The parsed test run configuration. """ with open(SETTINGS.config_file_path, "r") as f: config_data = yaml.safe_load(f) @@ -326,6 +581,8 @@ def load_config() -> Configuration: with open(schema_path, "r") as f: schema = json.load(f) - config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data) - config_obj: Configuration = Configuration.from_dict(dict(config)) + config = warlock.model_factory(schema, name="_Config")(config_data) + config_obj: Configuration = Configuration.from_dict( + dict(config) # type: ignore[arg-type] + ) return config_obj diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py new file mode 100644 index 0000000000..1927910d88 --- /dev/null +++ b/dts/framework/config/types.py @@ -0,0 +1,132 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 PANTHEON.tech s.r.o. + +"""Configuration dictionary contents specification. + +These type definitions serve as documentation of the configuration dictionary contents. + +The definitions use the built-in :class:`~typing.TypedDict` construct. 
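Putting the `load_config` entry point above to use is a one-liner; a sketch that walks the parsed configuration using the attribute names defined in this patch:

    from framework.config import load_config

    configuration = load_config()       # reads --config-file / DTS_CFG_FILE
    for execution in configuration.executions:
        print(execution.system_under_test_node.name)
        for build_target in execution.build_targets:
            print(build_target.name)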
+""" + +from typing import TypedDict + + +class PortConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + pci: str + #: + os_driver_for_dpdk: str + #: + os_driver: str + #: + peer_node: str + #: + peer_pci: str + + +class TrafficGeneratorConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + type: str + + +class HugepageConfigurationDict(TypedDict): + """Allowed keys and values.""" + + #: + amount: int + #: + force_first_numa: bool + + +class NodeConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + hugepages: HugepageConfigurationDict + #: + name: str + #: + hostname: str + #: + user: str + #: + password: str + #: + arch: str + #: + os: str + #: + lcores: str + #: + use_first_core: bool + #: + ports: list[PortConfigDict] + #: + memory_channels: int + #: + traffic_generator: TrafficGeneratorConfigDict + + +class BuildTargetConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + arch: str + #: + os: str + #: + cpu: str + #: + compiler: str + #: + compiler_wrapper: str + + +class TestSuiteConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + suite: str + #: + cases: list[str] + + +class ExecutionSUTConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + node_name: str + #: + vdevs: list[str] + + +class ExecutionConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + build_targets: list[BuildTargetConfigDict] + #: + perf: bool + #: + func: bool + #: + skip_smoke_tests: bool + #: + test_suites: TestSuiteConfigDict + #: + system_under_test_node: ExecutionSUTConfigDict + #: + traffic_generator_node: str + + +class ConfigurationDict(TypedDict): + """Allowed keys and values.""" + + #: + nodes: list[NodeConfigDict] + #: + executions: list[ExecutionConfigDict] From patchwork Wed Nov 8 12:53:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133982 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C5514432D6; Wed, 8 Nov 2023 13:55:01 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 921F842E4D; Wed, 8 Nov 2023 13:53:43 +0100 (CET) Received: from mail-lf1-f52.google.com (mail-lf1-f52.google.com [209.85.167.52]) by mails.dpdk.org (Postfix) with ESMTP id 9CAA842E3D for ; Wed, 8 Nov 2023 13:53:39 +0100 (CET) Received: by mail-lf1-f52.google.com with SMTP id 2adb3069b0e04-507a98517f3so8534204e87.0 for ; Wed, 08 Nov 2023 04:53:39 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448019; x=1700052819; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=t7XOoo85AsM+xPhfLGE4ILJ7NUrsoVLB8WhGDmtkhUM=; b=FJqiXkIsuKnFsjFialmmBiFhQjoErOVURhiEPQSvRddxA7NV8nSe+7+g3Ffq89NDl/ phe4ptHxtkCAZKNsk9G/lDglVLtywopT0OPMPo5YZnX5cuzqnCmNXWGywUu2ijAPi3Ou EXFiURY7ra9iGxUsAP0kUaB5smQg4eJ8iu224I+2fviDsNQk2QqamVgZ6EYJ70nZ123w gERE59WirUeY6Imlpwy8tvgmhIht6bzKjEa1iOOBVfhyzH5hoXWF7uTDtC1mMHmhza81 ygMR8cw53HCZjYpkUwVCD3tWLGwKZjoAKi/Ccylv46+2OmiO83j4yuij2bbhBOxEW1Me E16Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448019; x=1700052819; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=t7XOoo85AsM+xPhfLGE4ILJ7NUrsoVLB8WhGDmtkhUM=; b=v8Hf3x7SU+ILAv7fuZi4C5twTZwiRCZ9WzMmfuXxB+PM67/E9P2Kj3+WRpFEoRCf3O XzG0OG73tgjY925TNWhJSRuNRYVwptSpm4ubWp5j5uinxfVpIUoDiPmsPeJhpeN04t/Q gMY4aXaVDm03LHR4ugLxHVhhg2aBzXESi4FdO+C5B7ybsqHWo/RKaDhA++qpdcVNR0ds USWeXVi11JJibumpHUqz0wYDkWsCnEkeCioQZob/MIK4TQPhNDwUr2TGOP+Jy+mqrBS9 ve/IDZ6AVX9wAYSmQu65uw1NwclChxv/ZDd2TAq4C3gh2qzLOlAdXQW393CCjUgoq1Hq P+0w== X-Gm-Message-State: AOJu0YzkSLJ9xuCO+hNJe48IGni0x98s9Qswyk5Lpv18CwmSYJh70+XN j3QxVSj8NrKJPR2udwYTMyHHSw== X-Google-Smtp-Source: AGHT+IGw0egeuvOYjQJdACi+aabXcQWERYQTAAmeUpjE1jg1c/uSEL+tERBjPWNrCrbedlrfJHkUwA== X-Received: by 2002:a05:6512:31d0:b0:509:fbf:f235 with SMTP id j16-20020a05651231d000b005090fbff235mr1519857lfe.6.1699448019058; Wed, 08 Nov 2023 04:53:39 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. [81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:38 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 11/23] dts: remote session docstring update Date: Wed, 8 Nov 2023 13:53:12 +0100 Message-Id: <20231108125324.191005-11-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/remote_session/__init__.py | 39 +++++- .../remote_session/remote_session.py | 128 +++++++++++++----- dts/framework/remote_session/ssh_session.py | 16 +-- 3 files changed, 135 insertions(+), 48 deletions(-) diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py index 5e7ddb2b05..51a01d6b5e 100644 --- a/dts/framework/remote_session/__init__.py +++ b/dts/framework/remote_session/__init__.py @@ -2,12 +2,14 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire -""" -The package provides modules for managing remote connections to a remote host (node), -differentiated by OS. -The package provides a factory function, create_session, that returns the appropriate -remote connection based on the passed configuration. The differences are in the -underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux). +"""Remote interactive and non-interactive sessions. + +This package provides modules for managing remote connections to a remote host (node). + +The non-interactive sessions send commands and return their output and exit code. + +The interactive sessions open an interactive shell which is continuously open, +allowing it to send and receive data within that particular shell. 
""" # pylama:ignore=W0611 @@ -26,10 +28,35 @@ def create_remote_session( node_config: NodeConfiguration, name: str, logger: DTSLOG ) -> RemoteSession: + """Factory for non-interactive remote sessions. + + The function returns an SSH session, but will be extended if support + for other protocols is added. + + Args: + node_config: The test run configuration of the node to connect to. + name: The name of the session. + logger: The logger instance this session will use. + + Returns: + The SSH remote session. + """ return SSHSession(node_config, name, logger) def create_interactive_session( node_config: NodeConfiguration, logger: DTSLOG ) -> InteractiveRemoteSession: + """Factory for interactive remote sessions. + + The function returns an interactive SSH session, but will be extended if support + for other protocols is added. + + Args: + node_config: The test run configuration of the node to connect to. + logger: The logger instance this session will use. + + Returns: + The interactive SSH remote session. + """ return InteractiveRemoteSession(node_config, logger) diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py index 0647d93de4..629c2d7b9c 100644 --- a/dts/framework/remote_session/remote_session.py +++ b/dts/framework/remote_session/remote_session.py @@ -3,6 +3,13 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire +"""Base remote session. + +This module contains the abstract base class for remote sessions and defines +the structure of the result of a command execution. +""" + + import dataclasses from abc import ABC, abstractmethod from pathlib import PurePath @@ -15,8 +22,14 @@ @dataclasses.dataclass(slots=True, frozen=True) class CommandResult: - """ - The result of remote execution of a command. + """The result of remote execution of a command. + + Attributes: + name: The name of the session that executed the command. + command: The executed command. + stdout: The standard output the command produced. + stderr: The standard error output the command produced. + return_code: The return code the command exited with. """ name: str @@ -26,6 +39,7 @@ class CommandResult: return_code: int def __str__(self) -> str: + """Format the command outputs.""" return ( f"stdout: '{self.stdout}'\n" f"stderr: '{self.stderr}'\n" @@ -34,13 +48,24 @@ def __str__(self) -> str: class RemoteSession(ABC): - """ - The base class for defining which methods must be implemented in order to connect - to a remote host (node) and maintain a remote session. The derived classes are - supposed to implement/use some underlying transport protocol (e.g. SSH) to - implement the methods. On top of that, it provides some basic services common to - all derived classes, such as keeping history and logging what's being executed - on the remote node. + """Non-interactive remote session. + + The abstract methods must be implemented in order to connect to a remote host (node) + and maintain a remote session. + The subclasses must use (or implement) some underlying transport protocol (e.g. SSH) + to implement the methods. On top of that, it provides some basic services common to all + subclasses, such as keeping history and logging what's being executed on the remote node. + + Attributes: + name: The name of the session. + hostname: The node's hostname. Could be an IP (possibly with port, separated by a colon) + or a domain name. + ip: The IP address of the node or a domain name, whichever was used in `hostname`. 
+ port: The port of the node, if given in `hostname`. + username: The username used in the connection. + password: The password used in the connection. Most frequently empty, + as the use of passwords is discouraged. + history: The executed commands during this session. """ name: str @@ -59,6 +84,16 @@ def __init__( session_name: str, logger: DTSLOG, ): + """Connect to the node during initialization. + + Args: + node_config: The test run configuration of the node to connect to. + session_name: The name of the session. + logger: The logger instance this session will use. + + Raises: + SSHConnectionError: If the connection to the node was not successful. + """ self._node_config = node_config self.name = session_name @@ -79,8 +114,13 @@ def __init__( @abstractmethod def _connect(self) -> None: - """ - Create connection to assigned node. + """Create a connection to the node. + + The implementation must assign the established session to self.session. + + The implementation must except all exceptions and convert them to an SSHConnectionError. + + The implementation may optionally implement retry attempts. """ def send_command( @@ -90,11 +130,24 @@ def send_command( verify: bool = False, env: dict | None = None, ) -> CommandResult: - """ - Send a command to the connected node using optional env vars - and return CommandResult. - If verify is True, check the return code of the executed command - and raise a RemoteCommandExecutionError if the command failed. + """Send `command` to the connected node. + + The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT` + environment variable configure the timeout of command execution. + + Args: + command: The command to execute. + timeout: Wait at most this long in seconds to execute `command`. + verify: If :data:`True`, will check the exit code of `command`. + env: A dictionary with environment variables to be used with `command` execution. + + Raises: + SSHSessionDeadError: If the session isn't alive when sending `command`. + SSHTimeoutError: If `command` execution timed out. + RemoteCommandExecutionError: If verify is :data:`True` and `command` execution failed. + + Returns: + The output of the command along with the return code. """ self._logger.info( f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "") @@ -115,29 +168,36 @@ def send_command( def _send_command( self, command: str, timeout: float, env: dict | None ) -> CommandResult: - """ - Use the underlying protocol to execute the command using optional env vars - and return CommandResult. + """Send a command to the connected node. + + The implementation must execute the command remotely with `env` environment variables + and return the result. + + The implementation must except all exceptions and raise an SSHSessionDeadError if + the session is not alive and an SSHTimeoutError if the command execution times out. """ def close(self, force: bool = False) -> None: - """ - Close the remote session and free all used resources. + """Close the remote session and free all used resources. + + Args: + force: Force the closure of the connection. This may not clean up all resources. """ self._logger.logger_exit() self._close(force) @abstractmethod def _close(self, force: bool = False) -> None: - """ - Execute protocol specific steps needed to close the session properly. + """Protocol specific steps needed to close the session properly. + + Args: + force: Force the closure of the connection. This may not clean up all resources. 
+ This doesn't have to be implemented in the overloaded method. """ @abstractmethod def is_alive(self) -> bool: - """ - Check whether the remote session is still responding. - """ + """Check whether the remote session is still responding.""" @abstractmethod def copy_from( @@ -147,12 +207,12 @@ def copy_from( ) -> None: """Copy a file from the remote Node to the local filesystem. - Copy source_file from the remote Node associated with this remote - session to destination_file on the local filesystem. + Copy `source_file` from the remote Node associated with this remote session + to `destination_file` on the local filesystem. Args: - source_file: the file on the remote Node. - destination_file: a file or directory path on the local filesystem. + source_file: The file on the remote Node. + destination_file: A file or directory path on the local filesystem. """ @abstractmethod @@ -163,10 +223,10 @@ def copy_to( ) -> None: """Copy a file from local filesystem to the remote Node. - Copy source_file from local filesystem to destination_file - on the remote Node associated with this remote session. + Copy `source_file` from local filesystem to `destination_file` on the remote Node + associated with this remote session. Args: - source_file: the file on the local filesystem. - destination_file: a file or directory path on the remote Node. + source_file: The file on the local filesystem. + destination_file: A file or directory path on the remote Node. """ diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/ssh_session.py index cee11d14d6..7186490a9a 100644 --- a/dts/framework/remote_session/ssh_session.py +++ b/dts/framework/remote_session/ssh_session.py @@ -1,6 +1,8 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""SSH session remote session.""" + import socket import traceback from pathlib import PurePath @@ -26,13 +28,8 @@ class SSHSession(RemoteSession): """A persistent SSH connection to a remote Node. - The connection is implemented with the Fabric Python library. - - Args: - node_config: The configuration of the Node to connect to. - session_name: The name of the session. - logger: The logger used for logging. - This should be passed from the parent OSSession. + The connection is implemented with + `the Fabric Python library `_. Attributes: session: The underlying Fabric SSH connection. @@ -80,6 +77,7 @@ def _connect(self) -> None: raise SSHConnectionError(self.hostname, errors) def is_alive(self) -> bool: + """Overrides :meth:`~.remote_session.RemoteSession.is_alive`.""" return self.session.is_connected def _send_command( @@ -89,7 +87,7 @@ def _send_command( Args: command: The command to execute. - timeout: Wait at most this many seconds for the execution to complete. + timeout: Wait at most this long in seconds to execute the command. env: Extra environment variables that will be used in command execution. 
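As a rough usage sketch of the non-interactive session API documented above (the `node_config` and `logger` objects are assumed to come from an existing test run and are not defined here; the import path is an assumption based on the package described in this series)::

    from framework.remote_session import create_remote_session

    session = create_remote_session(node_config, "sut_main_session", logger)
    # verify=True raises RemoteCommandExecutionError on a non-zero exit code
    result = session.send_command("uname -a", timeout=15, verify=True)
    print(result.stdout)
    session.close()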
Raises: @@ -118,6 +116,7 @@ def copy_from( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: + """Overrides :meth:`~.remote_session.RemoteSession.copy_from`.""" self.session.get(str(destination_file), str(source_file)) def copy_to( @@ -125,6 +124,7 @@ def copy_to( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: + """Overrides :meth:`~.remote_session.RemoteSession.copy_to`.""" self.session.put(str(source_file), str(destination_file)) def _close(self, force: bool = False) -> None: From patchwork Wed Nov 8 12:53:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133983 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C7D69432D6; Wed, 8 Nov 2023 13:55:11 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5834E42E57; Wed, 8 Nov 2023 13:53:45 +0100 (CET) Received: from mail-ed1-f54.google.com (mail-ed1-f54.google.com [209.85.208.54]) by mails.dpdk.org (Postfix) with ESMTP id 76B3142E29 for ; Wed, 8 Nov 2023 13:53:40 +0100 (CET) Received: by mail-ed1-f54.google.com with SMTP id 4fb4d7f45d1cf-53e2308198eso11488556a12.1 for ; Wed, 08 Nov 2023 04:53:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448020; x=1700052820; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=DRBT1iCQ3izDtfSEhLlQAS7Wlh56tQtjJbqIRheTJP8=; b=gmaopl8qWc1WiZOIH4mVme3lDzK/OH2ukFCrxD0B2WghY2EejVYX7Nzv1J/US4V+aV 9KgF0qCPfNd5xH2pGUq8WBGUucmBLQznj+tvN5vgDSnaa2xlqUoJ07IvtsUa7NWNC8kx hlOuVDQUVXsUCG9LFGdWKtRzRzOfHr1X6MRzLd4/6pyiknnvfbvApVI6AsXoF5iap1/o qfSDv/EuoU+4U0uNK5FXJRQpwQejMBbkNti8rbTZGiSi8lnlKxJhtbt9WPTxU8zRHhUj VILeQQkfv3NBnECn5uA8CVNPHMCQu7B0H4mPeRaicaGuWRmh4imx8n3AU19e7OSx80fa 1aJw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448020; x=1700052820; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=DRBT1iCQ3izDtfSEhLlQAS7Wlh56tQtjJbqIRheTJP8=; b=MlXy1s0twrsHbQn7IZUlpuiFFpsHKicboJdQ57XiqIwZPGcG3RD613fS/tD9E1tjfy Hi/ScRVu/r05Tj5AH47sBGUwO2BwXShLltBMECx5oTh1CXOaKy2aozJVRlCi9xkpku5G VEShgv6GRX3J03Sxp/UvvOv3ePYtovoOU7o7narsUSKHJUyf6S3Fza6CZEBLAsUZ+oQP H7VcY1C7D0hpy8DZ1Iz+zrTpx/gB4ofJhOreOMhahjRVsS4HhwltIBAy5AZlueSPgK4V tvjwMoWK7prIJPPRMCIXEoUE/sdxF1tlRy6Esl598OvOb+dBO33Hddzq2v7r+gMLSlt2 VM7g== X-Gm-Message-State: AOJu0YxqVJ7IAgH9jjNgU3qVCm/jABuqOY1k7L/iy35qvWUgjPsClBbe SbtRH2Mhn83JDSdI5wHH3RaAAg== X-Google-Smtp-Source: AGHT+IGvrq//rY7BO7Rlp612LPrNQvIRBr2WUcOSIl4WHFJ4M035bOD5Cy4tLIPUfOTdZcjFXnKo7A== X-Received: by 2002:a50:cc91:0:b0:53e:81f6:8060 with SMTP id q17-20020a50cc91000000b0053e81f68060mr1376892edi.14.1699448020049; Wed, 08 Nov 2023 04:53:40 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. 
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:39 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 12/23] dts: interactive remote session docstring update Date: Wed, 8 Nov 2023 13:53:13 +0100 Message-Id: <20231108125324.191005-12-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- .../interactive_remote_session.py | 36 +++---- .../remote_session/interactive_shell.py | 99 +++++++++++-------- dts/framework/remote_session/python_shell.py | 26 ++++- dts/framework/remote_session/testpmd_shell.py | 61 +++++++++--- 4 files changed, 150 insertions(+), 72 deletions(-) diff --git a/dts/framework/remote_session/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py index 9085a668e8..c1bf30ac61 100644 --- a/dts/framework/remote_session/interactive_remote_session.py +++ b/dts/framework/remote_session/interactive_remote_session.py @@ -22,27 +22,23 @@ class InteractiveRemoteSession: """SSH connection dedicated to interactive applications. - This connection is created using paramiko and is a persistent connection to the - host. This class defines methods for connecting to the node and configures this - connection to send "keep alive" packets every 30 seconds. Because paramiko attempts - to use SSH keys to establish a connection first, providing a password is optional. - This session is utilized by InteractiveShells and cannot be interacted with - directly. - - Arguments: - node_config: Configuration class for the node you are connecting to. - _logger: Desired logger for this session to use. + The connection is created using `paramiko `_ + and is a persistent connection to the host. This class defines the methods for connecting + to the node and configures the connection to send "keep alive" packets every 30 seconds. + Because paramiko attempts to use SSH keys to establish a connection first, providing + a password is optional. This session is utilized by InteractiveShells + and cannot be interacted with directly. Attributes: - hostname: Hostname that will be used to initialize a connection to the node. - ip: A subsection of hostname that removes the port for the connection if there + hostname: The hostname that will be used to initialize a connection to the node. + ip: A subsection of `hostname` that removes the port for the connection if there is one. If there is no port, this will be the same as hostname. - port: Port to use for the ssh connection. This will be extracted from the - hostname if there is a port included, otherwise it will default to 22. + port: Port to use for the ssh connection. 
This will be extracted from `hostname` + if there is a port included, otherwise it will default to ``22``. username: User to connect to the node with. password: Password of the user connecting to the host. This will default to an empty string if a password is not provided. - session: Underlying paramiko connection. + session: The underlying paramiko connection. Raises: SSHConnectionError: There is an error creating the SSH connection. @@ -58,9 +54,15 @@ class InteractiveRemoteSession: _node_config: NodeConfiguration _transport: Transport | None - def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> None: + def __init__(self, node_config: NodeConfiguration, logger: DTSLOG) -> None: + """Connect to the node during initialization. + + Args: + node_config: The test run configuration of the node to connect to. + logger: The logger instance this session will use. + """ self._node_config = node_config - self._logger = _logger + self._logger = logger self.hostname = node_config.hostname self.username = node_config.user self.password = node_config.password if node_config.password else "" diff --git a/dts/framework/remote_session/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py index c24376b2a8..a98a822e91 100644 --- a/dts/framework/remote_session/interactive_shell.py +++ b/dts/framework/remote_session/interactive_shell.py @@ -3,18 +3,20 @@ """Common functionality for interactive shell handling. -This base class, InteractiveShell, is meant to be extended by other classes that -contain functionality specific to that shell type. These derived classes will often -modify things like the prompt to expect or the arguments to pass into the application, -but still utilize the same method for sending a command and collecting output. How -this output is handled however is often application specific. If an application needs -elevated privileges to start it is expected that the method for gaining those -privileges is provided when initializing the class. +The base class, :class:`InteractiveShell`, is meant to be extended by subclasses that contain +functionality specific to that shell type. These subclasses will often modify things like +the prompt to expect or the arguments to pass into the application, but still utilize +the same method for sending a command and collecting output. How this output is handled however +is often application specific. If an application needs elevated privileges to start it is expected +that the method for gaining those privileges is provided when initializing the class. + +The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT` +environment variable configure the timeout of getting the output from command execution. """ from abc import ABC from pathlib import PurePath -from typing import Callable +from typing import Callable, ClassVar from paramiko import Channel, SSHClient, channel # type: ignore[import] @@ -30,28 +32,6 @@ class InteractiveShell(ABC): and collecting input until reaching a certain prompt. All interactive applications will use the same SSH connection, but each will create their own channel on that session. - - Arguments: - interactive_session: The SSH session dedicated to interactive shells. - logger: Logger used for displaying information in the console. - get_privileged_command: Method for modifying a command to allow it to use - elevated privileges. If this is None, the application will not be started - with elevated privileges. 
- app_args: Command line arguments to be passed to the application on startup. - timeout: Timeout used for the SSH channel that is dedicated to this interactive - shell. This timeout is for collecting output, so if reading from the buffer - and no output is gathered within the timeout, an exception is thrown. - - Attributes - _default_prompt: Prompt to expect at the end of output when sending a command. - This is often overridden by derived classes. - _command_extra_chars: Extra characters to add to the end of every command - before sending them. This is often overridden by derived classes and is - most commonly an additional newline character. - path: Path to the executable to start the interactive application. - dpdk_app: Whether this application is a DPDK app. If it is, the build - directory for DPDK on the node will be prepended to the path to the - executable. """ _interactive_session: SSHClient @@ -61,10 +41,22 @@ class InteractiveShell(ABC): _logger: DTSLOG _timeout: float _app_args: str - _default_prompt: str = "" - _command_extra_chars: str = "" - path: PurePath - dpdk_app: bool = False + + #: Prompt to expect at the end of output when sending a command. + #: This is often overridden by subclasses. + _default_prompt: ClassVar[str] = "" + + #: Extra characters to add to the end of every command + #: before sending them. This is often overridden by subclasses and is + #: most commonly an additional newline character. + _command_extra_chars: ClassVar[str] = "" + + #: Path to the executable to start the interactive application. + path: ClassVar[PurePath] + + #: Whether this application is a DPDK app. If it is, the build directory + #: for DPDK on the node will be prepended to the path to the executable. + dpdk_app: ClassVar[bool] = False def __init__( self, @@ -74,6 +66,19 @@ def __init__( app_args: str = "", timeout: float = SETTINGS.timeout, ) -> None: + """Create an SSH channel during initialization. + + Args: + interactive_session: The SSH session dedicated to interactive shells. + logger: The logger instance this session will use. + get_privileged_command: A method for modifying a command to allow it to use + elevated privileges. If :data:`None`, the application will not be started + with elevated privileges. + app_args: The command line arguments to be passed to the application on startup. + timeout: The timeout used for the SSH channel that is dedicated to this interactive + shell. This timeout is for collecting output, so if reading from the buffer + and no output is gathered within the timeout, an exception is thrown. + """ self._interactive_session = interactive_session self._ssh_channel = self._interactive_session.invoke_shell() self._stdin = self._ssh_channel.makefile_stdin("w") @@ -92,6 +97,10 @@ def _start_application( This method is often overridden by subclasses as their process for starting may look different. + + Args: + get_privileged_command: A function (but could be any callable) that produces + the version of the command with elevated privileges. """ start_command = f"{self.path} {self._app_args}" if get_privileged_command is not None: @@ -99,16 +108,24 @@ def _start_application( self.send_command(start_command) def send_command(self, command: str, prompt: str | None = None) -> str: - """Send a command and get all output before the expected ending string. + """Send `command` and get all output before the expected ending string. Lines that expect input are not included in the stdout buffer, so they cannot - be used for expect. 
For example, if you were prompted to log into something - with a username and password, you cannot expect "username:" because it won't - yet be in the stdout buffer. A workaround for this could be consuming an - extra newline character to force the current prompt into the stdout buffer. + be used for expect. + + Example: + If you were prompted to log into something with a username and password, + you cannot expect ``username:`` because it won't yet be in the stdout buffer. + A workaround for this could be consuming an extra newline character to force + the current `prompt` into the stdout buffer. + + Args: + command: The command to send. + prompt: After sending the command, `send_command` will be expecting this string. + If :data:`None`, will use the class's default prompt. Returns: - All output in the buffer before expected string + All output in the buffer before expected string. """ self._logger.info(f"Sending: '{command}'") if prompt is None: @@ -126,8 +143,10 @@ def send_command(self, command: str, prompt: str | None = None) -> str: return out def close(self) -> None: + """Properly free all resources.""" self._stdin.close() self._ssh_channel.close() def __del__(self) -> None: + """Make sure the session is properly closed before deleting the object.""" self.close() diff --git a/dts/framework/remote_session/python_shell.py b/dts/framework/remote_session/python_shell.py index cc3ad48a68..c8e5957ef7 100644 --- a/dts/framework/remote_session/python_shell.py +++ b/dts/framework/remote_session/python_shell.py @@ -1,12 +1,32 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""Python interactive shell. + +Typical usage example in a TestSuite:: + + from framework.remote_session import PythonShell + python_shell = self.tg_node.create_interactive_shell( + PythonShell, timeout=5, privileged=True + ) + python_shell.send_command("print('Hello World')") + python_shell.close() +""" + from pathlib import PurePath +from typing import ClassVar from .interactive_shell import InteractiveShell class PythonShell(InteractiveShell): - _default_prompt: str = ">>>" - _command_extra_chars: str = "\n" - path: PurePath = PurePath("python3") + """Python interactive shell.""" + + #: Python's prompt. + _default_prompt: ClassVar[str] = ">>>" + + #: This forces the prompt to appear after sending a command. + _command_extra_chars: ClassVar[str] = "\n" + + #: The Python executable. + path: ClassVar[PurePath] = PurePath("python3") diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py index 1455b5a199..2632515d74 100644 --- a/dts/framework/remote_session/testpmd_shell.py +++ b/dts/framework/remote_session/testpmd_shell.py @@ -1,45 +1,82 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 University of New Hampshire +"""Testpmd interactive shell. + +Typical usage example in a TestSuite:: + + testpmd_shell = self.sut_node.create_interactive_shell( + TestPmdShell, privileged=True + ) + devices = testpmd_shell.get_devices() + for device in devices: + print(device) + testpmd_shell.close() +""" + from pathlib import PurePath -from typing import Callable +from typing import Callable, ClassVar from .interactive_shell import InteractiveShell class TestPmdDevice(object): + """The data of a device that testpmd can recognize. + + Attributes: + pci_address: The PCI address of the device. + """ + pci_address: str def __init__(self, pci_address_line: str): + """Initialize the device from the testpmd output line string.
+ + Args: + pci_address_line: A line of testpmd output that contains a device. + """ self.pci_address = pci_address_line.strip().split(": ")[1].strip() def __str__(self) -> str: + """The PCI address captures what the device is.""" return self.pci_address class TestPmdShell(InteractiveShell): - path: PurePath = PurePath("app", "dpdk-testpmd") - dpdk_app: bool = True - _default_prompt: str = "testpmd>" - _command_extra_chars: str = ( - "\n" # We want to append an extra newline to every command - ) + """Testpmd interactive shell. + + The testpmd shell users should never use + the :meth:`~framework.remote_session.interactive_shell.InteractiveShell.send_command` method + directly, but rather call specialized methods. If there isn't one that satisfies a need, + it should be added. + """ + + #: The path to the testpmd executable. + path: ClassVar[PurePath] = PurePath("app", "dpdk-testpmd") + + #: Flag this as a DPDK app so that it's clear this is not a system app and + #: needs to be looked in a specific path. + dpdk_app: ClassVar[bool] = True + + #: The testpmd's prompt. + _default_prompt: ClassVar[str] = "testpmd>" + + #: This forces the prompt to appear after sending a command. + _command_extra_chars: ClassVar[str] = "\n" def _start_application( self, get_privileged_command: Callable[[str], str] | None ) -> None: - """See "_start_application" in InteractiveShell.""" self._app_args += " -- -i" super()._start_application(get_privileged_command) def get_devices(self) -> list[TestPmdDevice]: - """Get a list of device names that are known to testpmd + """Get a list of device names that are known to testpmd. - Uses the device info listed in testpmd and then parses the output to - return only the names of the devices. + Uses the device info listed in testpmd and then parses the output. Returns: - A list of strings representing device names (e.g. 0000:14:00.1) + A list of devices. 
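A condensed sketch of how the returned list might be consumed in a test suite (``self.sut_node`` is borrowed from the usage example at the top of this module and is an assumption here)::

    testpmd_shell = self.sut_node.create_interactive_shell(TestPmdShell, privileged=True)
    # each item is a TestPmdDevice; its string form is the PCI address
    pci_addresses = [device.pci_address for device in testpmd_shell.get_devices()]
    testpmd_shell.close()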
""" dev_info: str = self.send_command("show device info all") dev_list: list[TestPmdDevice] = [] From patchwork Wed Nov 8 12:53:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133984 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C3A6B432D6; Wed, 8 Nov 2023 13:55:21 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id EC27942E14; Wed, 8 Nov 2023 13:53:46 +0100 (CET) Received: from mail-ej1-f47.google.com (mail-ej1-f47.google.com [209.85.218.47]) by mails.dpdk.org (Postfix) with ESMTP id C0E3042E30 for ; Wed, 8 Nov 2023 13:53:41 +0100 (CET) Received: by mail-ej1-f47.google.com with SMTP id a640c23a62f3a-991c786369cso1040182766b.1 for ; Wed, 08 Nov 2023 04:53:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448021; x=1700052821; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=nNWqU1DLwXxy+jHTYIQUWcQVb4gb9gLJSwll5Z93zLQ=; b=ag2//IqW9LHkXMRIV1/Ug9FPKMzy9UyBXBDDNdOpfWnLe0+LXlDg4yC7UvjX5xOa22 45qFWwDZkV9B0f5zKyBuqYyODBbDEsc8kIwPFDA8tgPVan+7h9nnhfAWmxL6LjPFShM4 eZ9/1crjYOLJITfUY3HII/B84ROESeEMJZIAAoXmEyXrpYOQFAAlgEKLO7E3OD6LYOE7 PPHMTAhaZqhRcZmPCsD3545k44A5V+w25Cg9Q5OU2v79pDYIE6EiF9ldiGlgcH5fix7B qMOuLMn/Wn4Xy4SfUIJUbJHb+tuXdpQReU5EQTiZjRGGzVVA5Y0lPxXe2TBO9EF1p5/w BMqg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448021; x=1700052821; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=nNWqU1DLwXxy+jHTYIQUWcQVb4gb9gLJSwll5Z93zLQ=; b=OslDgovjIpTiIzsTtRAxB6GXFNb6gL5W0wSJ9xqOGny31s+j5E1WD78YTK4+qbr+pP 5jszrbSjErPYLCRQG3pNaLHzK3mp7J1tHMy/g6yt2nR7cc1ODSnKZuwxdyEu5oB6NurE syyrS9Qf/9+JU05MtFqED46ePrWlEhY7Lf4C8Gy1qL5TZc6XP9AuHVWUGpwrQdf6WWt0 YlmWc2HPd8vK6u9aTHDUInBzySlzSG3rAcmRsMGgVqW5pFk3Cp+IK1K2ht0/bJqzj+uQ EwbajcUbUohbqEvieMGlTgow4cdPF1t5NLZTyzqdvjzfUC0RsU25jDL1UrTBxwQHjhNj Z3iw== X-Gm-Message-State: AOJu0YyK4LkRmvZTS+/XQIVHol76A5P40QBLCoBu2sDHLTRwpjhlYbId /L1XEneFLfffk3y49WhDkk15ng== X-Google-Smtp-Source: AGHT+IHgv7KuqRVhhfJf+Z6FV0tUg7Mn8fytKKbEmYHkTENbxU5MV73HxG90lV+B8kMDwmoFBiqZwA== X-Received: by 2002:a17:906:b842:b0:9e4:3b4:68f7 with SMTP id ga2-20020a170906b84200b009e403b468f7mr726325ejb.60.1699448021449; Wed, 08 Nov 2023 04:53:41 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. 
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:40 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 13/23] dts: port and virtual device docstring update Date: Wed, 8 Nov 2023 13:53:14 +0100 Message-Id: <20231108125324.191005-13-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/__init__.py | 16 ++++-- dts/framework/testbed_model/port.py | 53 +++++++++++++++---- dts/framework/testbed_model/virtual_device.py | 17 +++++- 3 files changed, 71 insertions(+), 15 deletions(-) diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py index 8ced05653b..a02be1f2d9 100644 --- a/dts/framework/testbed_model/__init__.py +++ b/dts/framework/testbed_model/__init__.py @@ -2,9 +2,19 @@ # Copyright(c) 2022-2023 University of New Hampshire # Copyright(c) 2023 PANTHEON.tech s.r.o. -""" -This package contains the classes used to model the physical traffic generator, -system under test and any other components that need to be interacted with. +"""Testbed modelling. + +This package defines the testbed elements DTS works with: + + * A system under test node: :class:`SutNode`, + * A traffic generator node: :class:`TGNode`, + * The ports of network interface cards (NICs) present on nodes: :class:`Port`, + * The logical cores of CPUs present on nodes: :class:`LogicalCore`, + * The virtual devices that can be created on nodes: :class:`VirtualDevice`, + * The operating systems running on nodes: :class:`LinuxSession` and :class:`PosixSession`. + +DTS needs to be able to connect to nodes and understand some of the hardware present on these nodes +to properly build and test DPDK. """ # pylama:ignore=W0611 diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py index 680c29bfe3..817405bea4 100644 --- a/dts/framework/testbed_model/port.py +++ b/dts/framework/testbed_model/port.py @@ -2,6 +2,13 @@ # Copyright(c) 2022 University of New Hampshire # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""NIC port model. + +Basic port information, such as location (the port are identified by their PCI address on a node), +drivers and address. +""" + + from dataclasses import dataclass from framework.config import PortConfig @@ -9,24 +16,35 @@ @dataclass(slots=True, frozen=True) class PortIdentifier: + """The port identifier. + + Attributes: + node: The node where the port resides. + pci: The PCI address of the port on `node`. + """ + node: str pci: str @dataclass(slots=True) class Port: - """ - identifier: The PCI address of the port on a node. - - os_driver: The driver used by this port when the OS is controlling it. 
- Example: i40e - os_driver_for_dpdk: The driver the device must be bound to for DPDK to use it, - Example: vfio-pci. + """Physical port on a node. - Note: os_driver and os_driver_for_dpdk may be the same thing. - Example: mlx5_core + The ports are identified by the node they're on and their PCI addresses. The port on the other + side of the connection is also captured here. + Each port is serviced by a driver, which may be different for the operating system (`os_driver`) + and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, e.g.: ``mlx5_core``. - peer: The identifier of a port this port is connected with. + Attributes: + identifier: The PCI address of the port on a node. + os_driver: The operating system driver name when the operating system controls the port, + e.g.: ``i40e``. + os_driver_for_dpdk: The operating system driver name for use with DPDK, e.g.: ``vfio-pci``. + peer: The identifier of a port this port is connected with. + The `peer` is on a different node. + mac_address: The MAC address of the port. + logical_name: The logical name of the port. Must be discovered. """ identifier: PortIdentifier @@ -37,6 +55,12 @@ class Port: logical_name: str = "" def __init__(self, node_name: str, config: PortConfig): + """Initialize the port from `node_name` and `config`. + + Args: + node_name: The name of the port's node. + config: The test run configuration of the port. + """ self.identifier = PortIdentifier( node=node_name, pci=config.pci, @@ -47,14 +71,23 @@ def __init__(self, node_name: str, config: PortConfig): @property def node(self) -> str: + """The node where the port resides.""" return self.identifier.node @property def pci(self) -> str: + """The PCI address of the port.""" return self.identifier.pci @dataclass(slots=True, frozen=True) class PortLink: + """The physical, cabled connection between the ports. + + Attributes: + sut_port: The port on the SUT node connected to `tg_port`. + tg_port: The port on the TG node connected to `sut_port`. + """ + sut_port: Port tg_port: Port diff --git a/dts/framework/testbed_model/virtual_device.py b/dts/framework/testbed_model/virtual_device.py index eb664d9f17..e9b5e9c3be 100644 --- a/dts/framework/testbed_model/virtual_device.py +++ b/dts/framework/testbed_model/virtual_device.py @@ -1,16 +1,29 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""Virtual devices model. + +Alongside support for physical hardware, DPDK can create various virtual devices. +""" + class VirtualDevice(object): - """ - Base class for virtual devices used by DPDK. + """Base class for virtual devices used by DPDK. + + Attributes: + name: The name of the virtual device. """ name: str def __init__(self, name: str): + """Initialize the virtual device. + + Args: + name: The name of the virtual device. 
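A minimal sketch of how such a device might be used when assembling EAL parameters (the ``net_null0`` name and the surrounding f-string are illustrative assumptions, not part of this patch)::

    vdev = VirtualDevice("net_null0")
    # str(vdev) yields the name DPDK expects in the --vdev EAL argument
    eal_vdev_arg = f"--vdev {vdev}"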
+ """ self.name = name def __str__(self) -> str: + """This corresponds to the name used for DPDK devices.""" return self.name From patchwork Wed Nov 8 12:53:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133985 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2692A432D6; Wed, 8 Nov 2023 13:55:28 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1A09842E67; Wed, 8 Nov 2023 13:53:48 +0100 (CET) Received: from mail-ed1-f42.google.com (mail-ed1-f42.google.com [209.85.208.42]) by mails.dpdk.org (Postfix) with ESMTP id A00E642E4C for ; Wed, 8 Nov 2023 13:53:42 +0100 (CET) Received: by mail-ed1-f42.google.com with SMTP id 4fb4d7f45d1cf-53df747cfe5so11841625a12.2 for ; Wed, 08 Nov 2023 04:53:42 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448022; x=1700052822; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=pLeNkk+BE6ZnW6umBr/glSV3roMC3ycMuTjWgCecNoQ=; b=IuGc3jXM5fjfEUxfHA6FLuWQDyoKuW7ZR2MgWXQlo9rD0YAGNck+q5d2zZkznyNpQg Y5/nKeMA/xRmxHtlpooMCzVuYwnPlST0PxoKQD7IGtDXP4VUSvCzYHIQcRsaScIgVg6k c1p3LAOpHpVLysg2a+QjnYuGAe22l1yx0DKvnboYCUCasFpZy+ULEHjo3xT6pNtFkKtt Qje4wPM8sx/XYnET7saiyBb+kQa8KwOmQdxHjNyFz/xawZjq7g9NtJCgaj6YFzo/h3OI REwm1r8p3R9s1VX2AkiJ3OewOBodVQoL6h6GwQL9RDrW7pVBWmBXnVMCTCF8WZbD/4RU 1kSQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448022; x=1700052822; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=pLeNkk+BE6ZnW6umBr/glSV3roMC3ycMuTjWgCecNoQ=; b=nzQ6W2+gxRlfC8ejQ8UyxlY0fjyavB07vCh8IfAdlGH4dvq5lozba5HOnSzJf2k10Z z4eRti/GWUDndTizOL0a/4w8kIXQ9Wh5WzAZQGRnOf1FUO54hI0VNXniOV7p1OeWaswn T/Rl5MpFVA2vekMaXfrDldAZJ4DLjyYbVwDz5Ums7geb/B8R5w+OgFoS2O05lc5RvO8m a2HcbkVBLIe1xCqNKhe/hs2gSuA2cmdaqnw6P064LmFRsGmq7Qpo9PohVQoHwsnKEsJ2 WHr1hyVHftFwAUud8Qmqx1rjFS+/CEfuyihU/RvGtbZ4JAt0jNXRW382Z7MVxdPedIh1 5shw== X-Gm-Message-State: AOJu0Yyc440tTzK1tXFFxQHYz7rbQKPn2daid2DEUzhLKTLi08qVBoku Q475JPVP8rjfgLxeCYHH8oDP1w== X-Google-Smtp-Source: AGHT+IEr5awIbNDU8YsDLDIIPSEKgv5rsCPITXfk+8yV7PwdqNvbHHQpbmI9BwJl0lBI0CsFMtjPPQ== X-Received: by 2002:a50:d642:0:b0:543:7b0d:aea9 with SMTP id c2-20020a50d642000000b005437b0daea9mr1320097edj.15.1699448022265; Wed, 08 Nov 2023 04:53:42 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. 
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.41 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:41 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 14/23] dts: cpu docstring update Date: Wed, 8 Nov 2023 13:53:15 +0100 Message-Id: <20231108125324.191005-14-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/cpu.py | 196 +++++++++++++++++++++-------- 1 file changed, 144 insertions(+), 52 deletions(-) diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py index 8fe785dfe4..4edeb4a7c2 100644 --- a/dts/framework/testbed_model/cpu.py +++ b/dts/framework/testbed_model/cpu.py @@ -1,6 +1,22 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""CPU core representation and filtering. + +This module provides a unified representation of logical CPU cores along +with filtering capabilities. + +When symmetric multiprocessing (SMP or multithreading) is enabled on a server, +the physical CPU cores are split into logical CPU cores with different IDs. + +:class:`LogicalCoreCountFilter` filters by the number of logical cores. It's possible to specify +the socket from which to filter the number of logical cores. It's also possible to not use all +logical CPU cores from each physical core (e.g. only the first logical core of each physical core). + +:class:`LogicalCoreListFilter` filters by logical core IDs. This mostly checks that +the logical cores are actually present on the server. +""" + import dataclasses from abc import ABC, abstractmethod from collections.abc import Iterable, ValuesView @@ -11,9 +27,17 @@ @dataclass(slots=True, frozen=True) class LogicalCore(object): - """ - Representation of a CPU core. A physical core is represented in OS - by multiple logical cores (lcores) if CPU multithreading is enabled. + """Representation of a logical CPU core. + + A physical core is represented in OS by multiple logical cores (lcores) + if CPU multithreading is enabled. When multithreading is disabled, their IDs are the same. + + Attributes: + lcore: The logical core ID of a CPU core. It's the same as `core` with + disabled multithreading. + core: The physical core ID of a CPU core. + socket: The physical socket ID where the CPU resides. + node: The NUMA node ID where the CPU resides. """ lcore: int @@ -22,27 +46,36 @@ class LogicalCore(object): node: int def __int__(self) -> int: + """The CPU is best represented by the logical core, as that's what we configure in EAL.""" return self.lcore class LogicalCoreList(object): - """ - Convert these options into a list of logical core ids. 
- lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores - lcore_list=[0,1,2,3] - a list of int indices - lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported - lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are supported - - The class creates a unified format used across the framework and allows - the user to use either a str representation (using str(instance) or directly - in f-strings) or a list representation (by accessing instance.lcore_list). - Empty lcore_list is allowed. + r"""A unified way to store :class:`LogicalCore`\s. + + Create a unified format used across the framework and allow the user to use + either a :class:`str` representation (using ``str(instance)`` or directly in f-strings) + or a :class:`list` representation (by accessing the `lcore_list` property, + which stores logical core IDs). """ _lcore_list: list[int] _lcore_str: str def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str): + """Process `lcore_list`, then sort. + + There are four supported logical core list formats:: + + lcore_list=[LogicalCore1, LogicalCore2] # a list of LogicalCores + lcore_list=[0,1,2,3] # a list of int indices + lcore_list=['0','1','2-3'] # a list of str indices; ranges are supported + lcore_list='0,1,2-3' # a comma delimited str of indices; ranges are supported + + Args: + lcore_list: Various ways to represent multiple logical cores. + Empty `lcore_list` is allowed. + """ self._lcore_list = [] if isinstance(lcore_list, str): lcore_list = lcore_list.split(",") @@ -60,6 +93,7 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str): @property def lcore_list(self) -> list[int]: + """The logical core IDs.""" return self._lcore_list def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]: @@ -89,28 +123,30 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]: return formatted_core_list def __str__(self) -> str: + """The consecutive ranges of logical core IDs.""" return self._lcore_str @dataclasses.dataclass(slots=True, frozen=True) class LogicalCoreCount(object): - """ - Define the number of logical cores to use. - If sockets is not None, socket_count is ignored. - """ + """Define the number of logical cores per physical cores per sockets.""" + #: Use this many logical cores per each physical core. lcores_per_core: int = 1 + #: Use this many physical cores per each socket. cores_per_socket: int = 2 + #: Use this many sockets. socket_count: int = 1 + #: Use exactly these sockets. This takes precedence over `socket_count`, + #: so when `sockets` is not :data:`None`, `socket_count` is ignored. sockets: list[int] | None = None class LogicalCoreFilter(ABC): - """ - Filter according to the input filter specifier. Each filter needs to be - implemented in a derived class. - This class only implements operations common to all filters, such as sorting - the list to be filtered beforehand. + """Common filtering class. + + Each filter needs to be implemented in a subclass. This base class sorts the list of cores + and defines the filtering method, which must be implemented by subclasses. """ _filter_specifier: LogicalCoreCount | LogicalCoreList @@ -122,6 +158,17 @@ def __init__( filter_specifier: LogicalCoreCount | LogicalCoreList, ascending: bool = True, ): + """Filter according to the input filter specifier. + + The input `lcore_list` is copied and sorted by physical core before filtering. + The list is copied so that the original is left intact. 
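For illustration, the accepted input formats above all normalize to the same logical core IDs (a sketch; the exact string rendering groups consecutive IDs into ranges)::

    lcores = LogicalCoreList("0,1,2-3")
    lcores.lcore_list  # [0, 1, 2, 3]
    str(lcores)        # a comma-delimited string with ranges, suitable for EAL arguments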
+ + Args: + lcore_list: The logical CPU cores to filter. + filter_specifier: Filter cores from `lcore_list` according to this filter. + ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`, + sort in descending order. + """ self._filter_specifier = filter_specifier # sorting by core is needed in case hyperthreading is enabled @@ -132,31 +179,45 @@ def __init__( @abstractmethod def filter(self) -> list[LogicalCore]: - """ - Use self._filter_specifier to filter self._lcores_to_filter - and return the list of filtered LogicalCores. - self._lcores_to_filter is a sorted copy of the original list, - so it may be modified. + r"""Filter the cores. + + Use `self._filter_specifier` to filter `self._lcores_to_filter` and return + the filtered :class:`LogicalCore`\s. + `self._lcores_to_filter` is a sorted copy of the original list, so it may be modified. + + Returns: + The filtered cores. """ class LogicalCoreCountFilter(LogicalCoreFilter): - """ + """Filter cores by specified counts. + Filter the input list of LogicalCores according to specified rules: - Use cores from the specified number of sockets or from the specified socket ids. - If sockets is specified, it takes precedence over socket_count. - From each of those sockets, use only cores_per_socket of cores. - And for each core, use lcores_per_core of logical cores. Hypertheading - must be enabled for this to take effect. - If ascending is True, use cores with the lowest numerical id first - and continue in ascending order. If False, start with the highest - id and continue in descending order. This ordering affects which - sockets to consider first as well. + + * The input `filter_specifier` is :class:`LogicalCoreCount`, + * Use cores from the specified number of sockets or from the specified socket ids, + * If `sockets` is specified, it takes precedence over `socket_count`, + * From each of those sockets, use only `cores_per_socket` of cores, + * And for each core, use `lcores_per_core` of logical cores. Hyperthreading + must be enabled for this to take effect. """ _filter_specifier: LogicalCoreCount def filter(self) -> list[LogicalCore]: + """Filter the cores according to :class:`LogicalCoreCount`. + + Start by filtering the allowed sockets. The cores matching the allowed socket are returned. + The cores of each socket are stored in separate lists. + + Then filter the allowed physical cores from those lists of cores per socket. When filtering + physical cores, store the desired number of logical cores per physical core which then + together constitute the final filtered list. + + Returns: + The filtered cores. + """ sockets_to_filter = self._filter_sockets(self._lcores_to_filter) filtered_lcores = [] for socket_to_filter in sockets_to_filter: @@ -166,24 +227,37 @@ def filter(self) -> list[LogicalCore]: def _filter_sockets( self, lcores_to_filter: Iterable[LogicalCore] ) -> ValuesView[list[LogicalCore]]: - """ - Remove all lcores that don't match the specified socket(s). - If self._filter_specifier.sockets is not None, keep lcores from those sockets, - otherwise keep lcores from the first - self._filter_specifier.socket_count sockets. + """Filter a list of cores per each allowed socket. + + The sockets may be specified in two ways, either a number or a specific list of sockets. + In case of a specific list, we just need to return the cores from those sockets. + If filtering a number of cores, we need to go through all cores and note which sockets + appear and only filter from the first n that appear.
+ + Args: + lcores_to_filter: The cores to filter. These must be sorted by the physical core. + + Returns: + A list of lists of logical CPU cores. Each list contains cores from one socket. """ allowed_sockets: set[int] = set() socket_count = self._filter_specifier.socket_count if self._filter_specifier.sockets: + # when sockets in filter is specified, the sockets are already set socket_count = len(self._filter_specifier.sockets) allowed_sockets = set(self._filter_specifier.sockets) + # filter socket_count sockets from all sockets by checking the socket of each CPU filtered_lcores: dict[int, list[LogicalCore]] = {} for lcore in lcores_to_filter: if not self._filter_specifier.sockets: + # this is when sockets is not set, so we do the actual filtering + # when it is set, allowed_sockets is already defined and can't be changed if len(allowed_sockets) < socket_count: + # allowed_sockets is a set, so adding an existing socket won't re-add it allowed_sockets.add(lcore.socket) if lcore.socket in allowed_sockets: + # group the lcores by socket; this makes it easier in further processing if lcore.socket in filtered_lcores: filtered_lcores[lcore.socket].append(lcore) else: @@ -200,12 +274,13 @@ def _filter_sockets( def _filter_cores_from_socket( self, lcores_to_filter: Iterable[LogicalCore] ) -> list[LogicalCore]: - """ - Keep only the first self._filter_specifier.cores_per_socket cores. - In multithreaded environments, keep only - the first self._filter_specifier.lcores_per_core lcores of those cores. - """ + """Filter a list of cores from the given socket. + + Go through the cores and note how many logical cores per physical core have been filtered. Returns: + The filtered logical CPU cores. + """ # no need to use ordered dict, from Python3.7 the dict # insertion order is preserved (LIFO). lcore_count_per_core_map: dict[int, int] = {} @@ -248,15 +323,21 @@ def _filter_cores_from_socket( class LogicalCoreListFilter(LogicalCoreFilter): - """ - Filter the input list of Logical Cores according to the input list of - lcore indices. - An empty LogicalCoreList won't filter anything. + """Filter the logical CPU cores by logical CPU core IDs. + + This is a simple filter that looks at logical CPU IDs and only filters those that match. + + The input filter is :class:`LogicalCoreList`. An empty LogicalCoreList won't filter anything. """ _filter_specifier: LogicalCoreList def filter(self) -> list[LogicalCore]: + """Filter based on logical CPU core ID. + + Returns: + The filtered logical CPU cores. + """ if not len(self._filter_specifier.lcore_list): return self._lcores_to_filter @@ -279,6 +360,17 @@ def lcore_filter( filter_specifier: LogicalCoreCount | LogicalCoreList, ascending: bool, ) -> LogicalCoreFilter: + """Factory for using the right filter with `filter_specifier`. + + Args: + core_list: The logical CPU cores to filter. + filter_specifier: The filter to use. + ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`, + sort in descending order. + + Returns: + The filter matching `filter_specifier`.
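A small sketch of how the factory might be called (``node_lcores`` stands in for the list of cores discovered on a node and is an assumption here)::

    # keep 2 physical cores from 1 socket, 1 logical core per physical core
    specifier = LogicalCoreCount(lcores_per_core=1, cores_per_socket=2, socket_count=1)
    filtered_lcores = lcore_filter(node_lcores, specifier, ascending=True).filter()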
+ """ if isinstance(filter_specifier, LogicalCoreList): return LogicalCoreListFilter(core_list, filter_specifier, ascending) elif isinstance(filter_specifier, LogicalCoreCount): From patchwork Wed Nov 8 12:53:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133986 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 13A1E432D6; Wed, 8 Nov 2023 13:55:35 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5F7F342E69; Wed, 8 Nov 2023 13:53:49 +0100 (CET) Received: from mail-ed1-f51.google.com (mail-ed1-f51.google.com [209.85.208.51]) by mails.dpdk.org (Postfix) with ESMTP id 0C6DB42E4E for ; Wed, 8 Nov 2023 13:53:44 +0100 (CET) Received: by mail-ed1-f51.google.com with SMTP id 4fb4d7f45d1cf-544455a4b56so7448221a12.1 for ; Wed, 08 Nov 2023 04:53:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448023; x=1700052823; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ITkrAxO5KuM/Qh2lLzYH/N5yf14RBiBwIGekQSrs0vE=; b=in1FAUPndMYpQ6a+JFEwqgmtPpuiSU71C1b3IhNTokFDpnfg4r5MBlCtGgVNYaNfYl 5l8EuR2eX1w7ZToWFZW9G/cIPv6bccXN2LJ57UUGBJ6pv+XmRB70QH6ukH3DsvAs+Xm1 MLAMVEJipsFeVKBjo/busMvszgbj4a+XcjXe4eq7J0E+1DzZZ+tO/DUj1puJWghNaA7q taXmSZwwwoBRWVrxOB2Udcpx79/0Sfzee4JQUstrtPzu/j38wstaeOu9zUyhqaNUQahy SWmDYUyhBl+KzTUO4CBftzPyjMvVS5aH9UnhsPnvo9ZsvwAQ85jYTBZbELpf31O+N4M6 Dfsw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448023; x=1700052823; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ITkrAxO5KuM/Qh2lLzYH/N5yf14RBiBwIGekQSrs0vE=; b=D7UOli+AoRW3mmWVL8e5SHuPWYgs+Fp/UpH/qyGnFXrf7g1FtQEnvqmEYBacHBJvv3 h1CimnVWqP+ZzbQsLt8GNgp/pTO6xPe5Qk+7BuiidpbP1vXMNJQAWUjqR7+xM+trPdPy TXjk1i3dBCt7ExMmMcrbQD1y9vzjtD0IPGGeQqDWqHw30AZZWv8j966AG6m1D1qZYZ4j 0q593KNCrLzPNMtEjU0M3agP7IyNTfCqy2YVL5msTZrenX+dO8bDJPYlL/8pk4hLmVzg aMhztWBPPjQCqDAl45EOOHYmaX2yvPBiZ2hk0isHm91fXfqGsDkSxfzUq5ajGl//RpN1 1IOA== X-Gm-Message-State: AOJu0YyXA6i7nfO5Zl9AhAi7LYMzxPY+k6PXgRzrzU2YRl2Q+qdhfHXD nxKN/qFlKMCrdC09rFTXEMMfWw== X-Google-Smtp-Source: AGHT+IEws4pj1K+jI8o0RHOBPlW/JOsBYC8p/sL08xTxNFapcC6CbyKIs0Mm23bCqeNV02MN3tnL6Q== X-Received: by 2002:a50:d702:0:b0:542:d895:762a with SMTP id t2-20020a50d702000000b00542d895762amr1191385edi.39.1699448023626; Wed, 08 Nov 2023 04:53:43 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. 
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:42 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 15/23] dts: os session docstring update Date: Wed, 8 Nov 2023 13:53:16 +0100 Message-Id: <20231108125324.191005-15-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/os_session.py | 275 ++++++++++++++++------ 1 file changed, 208 insertions(+), 67 deletions(-) diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py index 76e595a518..bad75d52e7 100644 --- a/dts/framework/testbed_model/os_session.py +++ b/dts/framework/testbed_model/os_session.py @@ -2,6 +2,29 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +"""OS-aware remote session. + +DPDK supports multiple different operating systems, meaning it can run on these different operating +systems. This module defines the common API that OS-unaware layers use and translates the API into +OS-aware calls/utility usage. + +Note: + Running commands with administrative privileges requires OS awareness. This is the only layer + that's aware of OS differences, so this is where non-privileged command get converted + to privileged commands. + +Example: + A user wishes to remove a directory on + a remote :class:`~framework.testbed_model.sut_node.SutNode`. + The :class:`~framework.testbed_model.sut_node.SutNode` object isn't aware what OS the node + is running - it delegates the OS translation logic + to :attr:`~framework.testbed_model.node.Node.main_session`. The SUT node calls + :meth:`~OSSession.remove_remote_dir` with a generic, OS-unaware path and + the :attr:`~framework.testbed_model.node.Node.main_session` translates that + to ``rm -rf`` if the node's OS is Linux and other commands for other OSs. + It also translates the path to match the underlying OS. +""" + from abc import ABC, abstractmethod from collections.abc import Iterable from ipaddress import IPv4Interface, IPv6Interface @@ -28,10 +51,16 @@ class OSSession(ABC): - """ - The OS classes create a DTS node remote session and implement OS specific + """OS-unaware to OS-aware translation API definition. + + The OSSession classes create a remote session to a DTS node and implement OS specific behavior. There a few control methods implemented by the base class, the rest need - to be implemented by derived classes. + to be implemented by subclasses. + + Attributes: + name: The name of the session. + remote_session: The remote session maintaining the connection to the node. + interactive_session: The interactive remote session maintaining the connection to the node. 
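The example from the module docstring, written out as a minimal sketch (``node`` and the path are placeholders)::

    # OS-unaware code calls the session; the session picks the OS-specific command
    node.main_session.remove_remote_dir("/tmp/dpdk_tarball", recursive=True, force=True)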
""" _config: NodeConfiguration @@ -46,6 +75,15 @@ def __init__( name: str, logger: DTSLOG, ): + """Initialize the OS-aware session. + + Connect to the node right away and also create an interactive remote session. + + Args: + node_config: The test run configuration of the node to connect to. + name: The name of the session. + logger: The logger instance this session will use. + """ self._config = node_config self.name = name self._logger = logger @@ -53,15 +91,15 @@ def __init__( self.interactive_session = create_interactive_session(node_config, logger) def close(self, force: bool = False) -> None: - """ - Close the remote session. + """Close the underlying remote session. + + Args: + force: Force the closure of the connection. """ self.remote_session.close(force) def is_alive(self) -> bool: - """ - Check whether the remote session is still responding. - """ + """Check whether the underlying remote session is still responding.""" return self.remote_session.is_alive() def send_command( @@ -72,10 +110,23 @@ def send_command( verify: bool = False, env: dict | None = None, ) -> CommandResult: - """ - An all-purpose API in case the command to be executed is already - OS-agnostic, such as when the path to the executed command has been - constructed beforehand. + """An all-purpose API for OS-agnostic commands. + + This can be used for an execution of a portable command that's executed the same way + on all operating systems, such as Python. + + The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT` + environment variable configure the timeout of command execution. + + Args: + command: The command to execute. + timeout: Wait at most this long in seconds to execute the command. + privileged: Whether to run the command with administrative privileges. + verify: If True, will check the exit code of the command. + env: A dictionary with environment variables to be used with the command execution. + + Raises: + RemoteCommandExecutionError: If verify is True and the command failed. """ if privileged: command = self._get_privileged_command(command) @@ -89,8 +140,20 @@ def create_interactive_shell( privileged: bool, app_args: str, ) -> InteractiveShellType: - """ - See "create_interactive_shell" in SutNode + """Factory for interactive session handlers. + + Instantiate `shell_cls` according to the remote OS specifics. + + Args: + shell_cls: The class of the shell. + timeout: Timeout for reading output from the SSH channel. If you are + reading from the buffer and don't receive any data within the timeout + it will throw an error. + privileged: Whether to run the shell with administrative privileges. + app_args: The arguments to be passed to the application. + + Returns: + An instance of the desired interactive application shell. """ return shell_cls( self.interactive_session.session, @@ -114,27 +177,42 @@ def _get_privileged_command(command: str) -> str: @abstractmethod def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath: - """ - Try to find DPDK remote dir in remote_dir. + """Try to find DPDK directory in `remote_dir`. + + The directory is the one which is created after the extraction of the tarball. The files + are usually extracted into a directory starting with ``dpdk-``. + + Returns: + The absolute path of the DPDK remote directory, empty path if not found. """ @abstractmethod def get_remote_tmp_dir(self) -> PurePath: - """ - Get the path of the temporary directory of the remote OS. + """Get the path of the temporary directory of the remote OS. 
+ + Returns: + The absolute path of the temporary directory. """ @abstractmethod def get_dpdk_build_env_vars(self, arch: Architecture) -> dict: - """ - Create extra environment variables needed for the target architecture. Get - information from the node if needed. + """Create extra environment variables needed for the target architecture. + + Different architectures may require different configuration, such as setting 32-bit CFLAGS. + + Returns: + A dictionary with keys as environment variables. """ @abstractmethod def join_remote_path(self, *args: str | PurePath) -> PurePath: - """ - Join path parts using the path separator that fits the remote OS. + """Join path parts using the path separator that fits the remote OS. + + Args: + args: Any number of paths to join. + + Returns: + The resulting joined path. """ @abstractmethod @@ -143,13 +221,13 @@ def copy_from( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: - """Copy a file from the remote Node to the local filesystem. + """Copy a file from the remote node to the local filesystem. - Copy source_file from the remote Node associated with this remote - session to destination_file on the local filesystem. + Copy `source_file` from the remote node associated with this remote + session to `destination_file` on the local filesystem. Args: - source_file: the file on the remote Node. + source_file: the file on the remote node. destination_file: a file or directory path on the local filesystem. """ @@ -159,14 +237,14 @@ def copy_to( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: - """Copy a file from local filesystem to the remote Node. + """Copy a file from local filesystem to the remote node. - Copy source_file from local filesystem to destination_file - on the remote Node associated with this remote session. + Copy `source_file` from local filesystem to `destination_file` + on the remote node associated with this remote session. Args: source_file: the file on the local filesystem. - destination_file: a file or directory path on the remote Node. + destination_file: a file or directory path on the remote node. """ @abstractmethod @@ -176,8 +254,12 @@ def remove_remote_dir( recursive: bool = True, force: bool = True, ) -> None: - """ - Remove remote directory, by default remove recursively and forcefully. + """Remove remote directory, by default remove recursively and forcefully. + + Args: + remote_dir_path: The path of the directory to remove. + recursive: If :data:`True`, also remove all contents inside the directory. + force: If :data:`True`, ignore all warnings and try to remove at all costs. """ @abstractmethod @@ -186,9 +268,12 @@ def extract_remote_tarball( remote_tarball_path: str | PurePath, expected_dir: str | PurePath | None = None, ) -> None: - """ - Extract remote tarball in place. If expected_dir is a non-empty string, check - whether the dir exists after extracting the archive. + """Extract remote tarball in its remote directory. + + Args: + remote_tarball_path: The path of the tarball on the remote node. + expected_dir: If non-empty, check whether `expected_dir` exists after extracting + the archive. """ @abstractmethod @@ -201,69 +286,119 @@ def build_dpdk( rebuild: bool = False, timeout: float = SETTINGS.compile_timeout, ) -> None: - """ - Build DPDK in the input dir with specified environment variables and meson - arguments. + """Build DPDK on the remote node. + + An extracted DPDK tarball must be present on the node. 
The build consists of two steps:: + + meson setup remote_dpdk_dir remote_dpdk_build_dir + ninja -C remote_dpdk_build_dir + + The :option:`--compile-timeout` command line argument and the :envvar:`DTS_COMPILE_TIMEOUT` + environment variable configure the timeout of DPDK build. + + Args: + env_vars: Use these environment variables then building DPDK. + meson_args: Use these meson arguments when building DPDK. + remote_dpdk_dir: The directory on the remote node where DPDK will be built. + remote_dpdk_build_dir: The target build directory on the remote node. + rebuild: If True, do a subsequent build with ``meson configure`` instead + of ``meson setup``. + timeout: Wait at most this long in seconds for the build to execute. """ @abstractmethod def get_dpdk_version(self, version_path: str | PurePath) -> str: - """ - Inspect DPDK version on the remote node from version_path. + """Inspect the DPDK version on the remote node. + + Args: + version_path: The path to the VERSION file containing the DPDK version. + + Returns: + The DPDK version. """ @abstractmethod def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: - """ - Compose a list of LogicalCores present on the remote node. - If use_first_core is False, the first physical core won't be used. + r"""Get the list of :class:`~framework.testbed_model.cpu.LogicalCore`\s on the remote node. + + Args: + use_first_core: If :data:`False`, the first physical core won't be used. + + Returns: + The logical cores present on the node. """ @abstractmethod def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None: - """ - Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If - dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean. + """Kill and cleanup all DPDK apps. + + Args: + dpdk_prefix_list: Kill all apps identified by `dpdk_prefix_list`. + If `dpdk_prefix_list` is empty, attempt to find running DPDK apps to kill and clean. """ @abstractmethod def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: - """ - Get the DPDK file prefix that will be used when running DPDK apps. + """Make OS-specific modification to the DPDK file prefix. + + Args: + dpdk_prefix: The OS-unaware file prefix. + + Returns: + The OS-specific file prefix. """ @abstractmethod - def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None: - """ - Get the node's Hugepage Size, configure the specified amount of hugepages + def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None: + """Configure hugepages on the node. + + Get the node's Hugepage Size, configure the specified count of hugepages if needed and mount the hugepages if needed. - If force_first_numa is True, configure hugepages just on the first socket. + + Args: + hugepage_count: Configure this many hugepages. + force_first_numa: If :data:`True`, configure hugepages just on the first socket. """ @abstractmethod def get_compiler_version(self, compiler_name: str) -> str: - """ - Get installed version of compiler used for DPDK + """Get installed version of compiler used for DPDK. + + Args: + compiler_name: The name of the compiler executable. + + Returns: + The compiler's version. """ @abstractmethod def get_node_info(self) -> NodeInfo: - """ - Collect information about the node + """Collect additional information about the node. + + Returns: + Node information. """ @abstractmethod def update_ports(self, ports: list[Port]) -> None: - """ - Get additional information about ports: - Logical name (e.g. 
enp7s0) if applicable - Mac address + """Get additional information about ports from the operating system and update them. + + The additional information is: + + * Logical name (e.g. ``enp7s0``) if applicable, + * Mac address. + + Args: + ports: The ports to update. """ @abstractmethod def configure_port_state(self, port: Port, enable: bool) -> None: - """ - Enable/disable port. + """Enable/disable `port` in the operating system. + + Args: + port: The port to configure. + enable: If :data:`True`, enable the port, otherwise shut it down. """ @abstractmethod @@ -273,12 +408,18 @@ def configure_port_ip_address( port: Port, delete: bool, ) -> None: - """ - Configure (add or delete) an IP address of the input port. + """Configure an IP address on `port` in the operating system. + + Args: + address: The address to configure. + port: The port to configure. + delete: If :data:`True`, remove the IP address, otherwise configure it. """ @abstractmethod def configure_ipv4_forwarding(self, enable: bool) -> None: - """ - Enable IPv4 forwarding in the underlying OS. + """Enable IPv4 forwarding in the operating system. + + Args: + enable: If :data:`True`, enable the forwarding, otherwise disable it. """ From patchwork Wed Nov 8 12:53:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133987 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 293BE432D6; Wed, 8 Nov 2023 13:55:41 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 851FC42E6F; Wed, 8 Nov 2023 13:53:50 +0100 (CET) Received: from mail-ed1-f54.google.com (mail-ed1-f54.google.com [209.85.208.54]) by mails.dpdk.org (Postfix) with ESMTP id D70EF42E50 for ; Wed, 8 Nov 2023 13:53:44 +0100 (CET) Received: by mail-ed1-f54.google.com with SMTP id 4fb4d7f45d1cf-53dd3f169d8so11428397a12.3 for ; Wed, 08 Nov 2023 04:53:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448024; x=1700052824; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=HtpE/O7NYRazzLGD2fslpp4IFCJwcEtAC0VMwjolv2s=; b=nXTLJ425CrQteSORPRouqnrpY79SdOwk567XkkLFAB79W2vnLMLVJnjxgHakO8sYi3 a1n4qGXdtjia+eLJlgWonpXoMCFRJIssj2/pXGdD2dz58LvQtDJN8YZxdC+KEx0dqf5V EjDveL3Nx7558RtejJaCPzzW/m1mPxgz4c+IAoyKedhP6v0joiiQzc1V2vfHH0LT5+LA 0YmbdkKyV1SXiT4XrejRftc/XLzZO3uTqAZEw658EV8K+6DdG56Dozoi6gfkd9vDRAWG AN4WsijBX/OTyY6V960pIqpx6f9mJQi6jPpn2NDji06bfZgtVQpo/HPCLU+Ybs9y1tMh xR+Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448024; x=1700052824; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=HtpE/O7NYRazzLGD2fslpp4IFCJwcEtAC0VMwjolv2s=; b=XM2cmyULS5U/lv9kBCHNiKTJTAl/YpxlrbLBqSqceggeOCjjoK8swPKoiC5sXUN3JL jW6wKZkKflL+4yH2fBPZaVD2XhSViHbyy7POVq9ljAMEjR+BusZme9J0xKfZ1ZdH1NyF gRGuSNn1RJs72HdoxohuqozFEFlcXYPnJJoRbiDWSXGUwhkq/PRmu24YV3OBxDTG3Mkm orf5xVig/J1uQAYfX41a6HF+Ea0Tfu2uzYqGUSis8QQIVceYdXoUO59gw9OnefyAg0bE J7XoJGT9NmfVdOOpCc00lpUujjPg4B5lxXAmSLSvc0uvTqLk+RUIhmiCLtn4deRoU0pR 5HiQ== X-Gm-Message-State: 
AOJu0YwN7E6nAKryFrbnG/9ZLpu7b4CJX8a2QnBAiL+0tT0rJRrNwm+9 UIHytSMdoevncrfAsrtcBW3g6Q== X-Google-Smtp-Source: AGHT+IHmk3p3i2gjG1wRdIA5H541lGM4LIcZK/t+BuLOdOfGX1ISqNSL3kHMW0Y9RycZ/w6wgf8aPQ== X-Received: by 2002:a50:9fcc:0:b0:541:875:53d8 with SMTP id c70-20020a509fcc000000b00541087553d8mr1520247edf.23.1699448024574; Wed, 08 Nov 2023 04:53:44 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. [81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:44 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 16/23] dts: posix and linux sessions docstring update Date: Wed, 8 Nov 2023 13:53:17 +0100 Message-Id: <20231108125324.191005-16-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/linux_session.py | 63 ++++++++++----- dts/framework/testbed_model/posix_session.py | 81 +++++++++++++++++--- 2 files changed, 113 insertions(+), 31 deletions(-) diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py index f472bb8f0f..279954ff63 100644 --- a/dts/framework/testbed_model/linux_session.py +++ b/dts/framework/testbed_model/linux_session.py @@ -2,6 +2,13 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +"""Linux OS translator. + +Translate OS-unaware calls into Linux calls/utilities. Most of Linux distributions are mostly +compliant with POSIX standards, so this module only implements the parts that aren't. +This intermediate module implements the common parts of mostly POSIX compliant distributions. +""" + import json from ipaddress import IPv4Interface, IPv6Interface from typing import TypedDict, Union @@ -17,43 +24,51 @@ class LshwConfigurationOutput(TypedDict): + """The relevant parts of ``lshw``'s ``configuration`` section.""" + + #: link: str class LshwOutput(TypedDict): - """ - A model of the relevant information from json lshw output, e.g.: - { - ... - "businfo" : "pci@0000:08:00.0", - "logicalname" : "enp8s0", - "version" : "00", - "serial" : "52:54:00:59:e1:ac", - ... - "configuration" : { - ... - "link" : "yes", - ... - }, - ... + """A model of the relevant information from ``lshw``'s json output. + + e.g.:: + + { + ... + "businfo" : "pci@0000:08:00.0", + "logicalname" : "enp8s0", + "version" : "00", + "serial" : "52:54:00:59:e1:ac", + ... + "configuration" : { + ... + "link" : "yes", + ... + }, + ... 
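Since the TypedDict above only models the fields the framework reads, a short standalone sketch of consuming such a record with the standard library may help; the JSON literal below is invented to match the shape shown in the docstring:

    import json

    device = json.loads(
        '{"businfo": "pci@0000:08:00.0", "logicalname": "enp8s0",'
        ' "serial": "52:54:00:59:e1:ac", "configuration": {"link": "yes"}}'
    )

    logical_name = device.get("logicalname", "")                        # "enp8s0"
    mac_address = device.get("serial", "")                              # "52:54:00:59:e1:ac"
    link_is_up = device.get("configuration", {}).get("link") == "yes"   # True
    print(logical_name, mac_address, link_is_up)
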
""" + #: businfo: str + #: logicalname: NotRequired[str] + #: serial: NotRequired[str] + #: configuration: LshwConfigurationOutput class LinuxSession(PosixSession): - """ - The implementation of non-Posix compliant parts of Linux remote sessions. - """ + """The implementation of non-Posix compliant parts of Linux.""" @staticmethod def _get_privileged_command(command: str) -> str: return f"sudo -- sh -c '{command}'" def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: + """Overrides :meth:`~.os_session.OSSession.get_remote_cpus`.""" cpu_info = self.send_command("lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#").stdout lcores = [] for cpu_line in cpu_info.splitlines(): @@ -65,18 +80,20 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: return lcores def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: + """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`.""" return dpdk_prefix - def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None: + def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None: + """Overrides :meth:`~.os_session.OSSession.setup_hugepages`.""" self._logger.info("Getting Hugepage information.") hugepage_size = self._get_hugepage_size() hugepages_total = self._get_hugepages_total() self._numa_nodes = self._get_numa_nodes() - if force_first_numa or hugepages_total != hugepage_amount: + if force_first_numa or hugepages_total != hugepage_count: # when forcing numa, we need to clear existing hugepages regardless # of size, so they can be moved to the first numa node - self._configure_huge_pages(hugepage_amount, hugepage_size, force_first_numa) + self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa) else: self._logger.info("Hugepages already configured.") self._mount_huge_pages() @@ -140,6 +157,7 @@ def _configure_huge_pages( ) def update_ports(self, ports: list[Port]) -> None: + """Overrides :meth:`~.os_session.OSSession.update_ports`.""" self._logger.debug("Gathering port info.") for port in ports: assert ( @@ -178,6 +196,7 @@ def _update_port_attr( ) def configure_port_state(self, port: Port, enable: bool) -> None: + """Overrides :meth:`~.os_session.OSSession.configure_port_state`.""" state = "up" if enable else "down" self.send_command( f"ip link set dev {port.logical_name} {state}", privileged=True @@ -189,6 +208,7 @@ def configure_port_ip_address( port: Port, delete: bool, ) -> None: + """Overrides :meth:`~.os_session.OSSession.configure_port_ip_address`.""" command = "del" if delete else "add" self.send_command( f"ip address {command} {address} dev {port.logical_name}", @@ -197,5 +217,6 @@ def configure_port_ip_address( ) def configure_ipv4_forwarding(self, enable: bool) -> None: + """Overrides :meth:`~.os_session.OSSession.configure_ipv4_forwarding`.""" state = 1 if enable else 0 self.send_command(f"sysctl -w net.ipv4.ip_forward={state}", privileged=True) diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py index 1d1d5b1b26..a4824aa274 100644 --- a/dts/framework/testbed_model/posix_session.py +++ b/dts/framework/testbed_model/posix_session.py @@ -2,6 +2,15 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +"""POSIX compliant OS translator. + +Translates OS-unaware calls into POSIX compliant calls/utilities. 
POSIX is a set of standards +for portability between Unix operating systems which not all Linux distributions +(or the tools most frequently bundled with said distributions) adhere to. Most of Linux +distributions are mostly compliant though. +This intermediate module implements the common parts of mostly POSIX compliant distributions. +""" + import re from collections.abc import Iterable from pathlib import PurePath, PurePosixPath @@ -15,13 +24,21 @@ class PosixSession(OSSession): - """ - An intermediary class implementing the Posix compliant parts of - Linux and other OS remote sessions. - """ + """An intermediary class implementing the POSIX standard.""" @staticmethod def combine_short_options(**opts: bool) -> str: + """Combine shell options into one argument. + + These are options such as ``-x``, ``-v``, ``-f`` which are combined into ``-xvf``. + + Args: + opts: The keys are option names (usually one letter) and the bool values indicate + whether to include the option in the resulting argument. + + Returns: + The options combined into one argument. + """ ret_opts = "" for opt, include in opts.items(): if include: @@ -33,17 +50,19 @@ def combine_short_options(**opts: bool) -> str: return ret_opts def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath: + """Overrides :meth:`~.os_session.OSSession.guess_dpdk_remote_dir`.""" remote_guess = self.join_remote_path(remote_dir, "dpdk-*") result = self.send_command(f"ls -d {remote_guess} | tail -1") return PurePosixPath(result.stdout) def get_remote_tmp_dir(self) -> PurePosixPath: + """Overrides :meth:`~.os_session.OSSession.get_remote_tmp_dir`.""" return PurePosixPath("/tmp") def get_dpdk_build_env_vars(self, arch: Architecture) -> dict: - """ - Create extra environment variables needed for i686 arch build. Get information - from the node if needed. + """Overrides :meth:`~.os_session.OSSession.get_dpdk_build_env_vars`. + + Supported architecture: ``i686``. 
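As a concrete illustration of the short-option helper documented earlier in this file's diff, here is a standalone re-creation inferred from its docstring (the ``rm`` usage mirrors how ``remove_remote_dir`` assembles its command):

    def combine_short_options(**opts: bool) -> str:
        # Enabled one-letter flags are merged into a single " -xyz" argument,
        # or "" when no flag is enabled.
        merged = "".join(opt for opt, include in opts.items() if include)
        return f" -{merged}" if merged else ""


    options = combine_short_options(r=True, f=True, v=False)
    print(f"rm{options} /var/run/dpdk/rte")  # rm -rf /var/run/dpdk/rte
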
""" env_vars = {} if arch == Architecture.i686: @@ -63,6 +82,7 @@ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict: return env_vars def join_remote_path(self, *args: str | PurePath) -> PurePosixPath: + """Overrides :meth:`~.os_session.OSSession.join_remote_path`.""" return PurePosixPath(*args) def copy_from( @@ -70,6 +90,7 @@ def copy_from( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: + """Overrides :meth:`~.os_session.OSSession.copy_from`.""" self.remote_session.copy_from(source_file, destination_file) def copy_to( @@ -77,6 +98,7 @@ def copy_to( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: + """Overrides :meth:`~.os_session.OSSession.copy_to`.""" self.remote_session.copy_to(source_file, destination_file) def remove_remote_dir( @@ -85,6 +107,7 @@ def remove_remote_dir( recursive: bool = True, force: bool = True, ) -> None: + """Overrides :meth:`~.os_session.OSSession.remove_remote_dir`.""" opts = PosixSession.combine_short_options(r=recursive, f=force) self.send_command(f"rm{opts} {remote_dir_path}") @@ -93,6 +116,7 @@ def extract_remote_tarball( remote_tarball_path: str | PurePath, expected_dir: str | PurePath | None = None, ) -> None: + """Overrides :meth:`~.os_session.OSSession.extract_remote_tarball`.""" self.send_command( f"tar xfm {remote_tarball_path} " f"-C {PurePosixPath(remote_tarball_path).parent}", @@ -110,6 +134,7 @@ def build_dpdk( rebuild: bool = False, timeout: float = SETTINGS.compile_timeout, ) -> None: + """Overrides :meth:`~.os_session.OSSession.build_dpdk`.""" try: if rebuild: # reconfigure, then build @@ -140,12 +165,14 @@ def build_dpdk( raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.") def get_dpdk_version(self, build_dir: str | PurePath) -> str: + """Overrides :meth:`~.os_session.OSSession.get_dpdk_version`.""" out = self.send_command( f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True ) return out.stdout def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None: + """Overrides :meth:`~.os_session.OSSession.kill_cleanup_dpdk_apps`.""" self._logger.info("Cleaning up DPDK apps.") dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list) if dpdk_runtime_dirs: @@ -159,6 +186,14 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None: def _get_dpdk_runtime_dirs( self, dpdk_prefix_list: Iterable[str] ) -> list[PurePosixPath]: + """Find runtime directories DPDK apps are currently using. + + Args: + dpdk_prefix_list: The prefixes DPDK apps were started with. + + Returns: + The paths of DPDK apps' runtime dirs. + """ prefix = PurePosixPath("/var", "run", "dpdk") if not dpdk_prefix_list: remote_prefixes = self._list_remote_dirs(prefix) @@ -170,9 +205,13 @@ def _get_dpdk_runtime_dirs( return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list] def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None: - """ - Return a list of directories of the remote_dir. - If remote_path doesn't exist, return None. + """Contents of remote_path. + + Args: + remote_path: List the contents of this path. + + Returns: + The contents of remote_path. If remote_path doesn't exist, return None. """ out = self.send_command( f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'" @@ -183,6 +222,17 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None: return out.splitlines() def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]: + """Find PIDs of running DPDK apps. 
+ + Look at each "config" file found in dpdk_runtime_dirs and find the PIDs of processes + that opened those file. + + Args: + dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs. + + Returns: + The PIDs of running DPDK apps. + """ pids = [] pid_regex = r"p(\d+)" for dpdk_runtime_dir in dpdk_runtime_dirs: @@ -203,6 +253,14 @@ def _remote_files_exists(self, remote_path: PurePath) -> bool: def _check_dpdk_hugepages( self, dpdk_runtime_dirs: Iterable[str | PurePath] ) -> None: + """Check there aren't any leftover hugepages. + + If any hugegapes are found, emit a warning. The hugepages are investigated in the + "hugepage_info" file of dpdk_runtime_dirs. + + Args: + dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs. + """ for dpdk_runtime_dir in dpdk_runtime_dirs: hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info") if self._remote_files_exists(hugepage_info): @@ -220,9 +278,11 @@ def _remove_dpdk_runtime_dirs( self.remove_remote_dir(dpdk_runtime_dir) def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: + """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`.""" return "" def get_compiler_version(self, compiler_name: str) -> str: + """Overrides :meth:`~.os_session.OSSession.get_compiler_version`.""" match compiler_name: case "gcc": return self.send_command( @@ -240,6 +300,7 @@ def get_compiler_version(self, compiler_name: str) -> str: raise ValueError(f"Unknown compiler {compiler_name}") def get_node_info(self) -> NodeInfo: + """Overrides :meth:`~.os_session.OSSession.get_node_info`.""" os_release_info = self.send_command( "awk -F= '$1 ~ /^NAME$|^VERSION$/ {print $2}' /etc/os-release", SETTINGS.timeout, From patchwork Wed Nov 8 12:53:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133988 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AE86A432D6; Wed, 8 Nov 2023 13:55:49 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0144442E79; Wed, 8 Nov 2023 13:53:52 +0100 (CET) Received: from mail-ed1-f41.google.com (mail-ed1-f41.google.com [209.85.208.41]) by mails.dpdk.org (Postfix) with ESMTP id 2831D42E5A for ; Wed, 8 Nov 2023 13:53:46 +0100 (CET) Received: by mail-ed1-f41.google.com with SMTP id 4fb4d7f45d1cf-5437d60fb7aso11497689a12.3 for ; Wed, 08 Nov 2023 04:53:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448026; x=1700052826; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=NUm7xMiT/BRHjWTmLVv9u7rEe8Sqswv+nKyJJdnT3gQ=; b=BiFocC5x7pSjCD5W2fnAlEqQyt7YcRXkHUbTasmZc7bpaSVOl49bOtFJWP0ScCxFD0 ETp1Itl0aChLsDTvU6Z/ehgSOOWrR5OpyYr5BgcohMawum+b+YJ682ri++CwrAwROFNP 0Iwn1KOb9Tiu/rkOxYw0N2xLX44ALxfjSdTuos8Ih8EKPKpuJJUhd5+Gv2vMmYMmcSGO mRkyatuoC5csxlMABDbhl1VuUJnXRZ4vOGMfb6DJdyZDMEt2G5QO/fWYQDxQy8Y88Qgx wpmsPShietd9R5I/G141ndxxLrebFXXVmb5YO+IF8GFZ2jQZkACQT75pDofIolO3/i2p z6Ig== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448026; x=1700052826; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc 
:subject:date:message-id:reply-to; bh=NUm7xMiT/BRHjWTmLVv9u7rEe8Sqswv+nKyJJdnT3gQ=; b=GCP/trN1mYaf8isdpPOqpal8PMPQTaL+wqwkoM4ZjflConsaRWmwBYb0EMrM0Jh/3C 9zxuAvC0zvTw+ZU6Txnxo7HLWCOSueE9dJcm0436CegoOT8fLzTwPmGPvfmHErMtQWCf +UZgmO9AS4aCKhY1Hw6OGlo4GrtM3hHY/BhE+Ov+Gg/suiuVMUNeC7csU4+Cx3t5JH9Q mebM0ZD2HKUsuQpnb9mGX4cbRc00MU2Nb8ZZmPHIkU4jt0cbvueSCzVpFtflhKF8PXSy I2JY2+B/LQUYEDIpbHshdpD8bZYItNvWKthSoGGDdi/yGk7Ncokp2AH3+DcRBLiivbMV hcwA== X-Gm-Message-State: AOJu0YwR/Eo3Et219O3z9JIZFvcFyfxM6FcA8dAsU/biW/fsFJT5QsS1 4yJG8nosp+HKHJjtPi9hByh/6A== X-Google-Smtp-Source: AGHT+IGOoDv0lUZ4dOzRXkYcgg87eMjA7zg6wuJ3vZiHU17dL1l5F0gjAjjRatYH4orvk1pI9TATGw== X-Received: by 2002:a50:c357:0:b0:542:d2c4:b423 with SMTP id q23-20020a50c357000000b00542d2c4b423mr1411959edb.30.1699448025821; Wed, 08 Nov 2023 04:53:45 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. [81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:45 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 17/23] dts: node docstring update Date: Wed, 8 Nov 2023 13:53:18 +0100 Message-Id: <20231108125324.191005-17-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/node.py | 191 +++++++++++++++++++--------- 1 file changed, 131 insertions(+), 60 deletions(-) diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py index 7571e7b98d..abf86793a7 100644 --- a/dts/framework/testbed_model/node.py +++ b/dts/framework/testbed_model/node.py @@ -3,8 +3,13 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire -""" -A node is a generic host that DTS connects to and manages. +"""Common functionality for node management. + +A node is any host/server DTS connects to. + +The base class, :class:`Node`, provides functionality common to all nodes and is supposed +to be extended by subclasses with functionality specific to each node type. +The decorator :func:`Node.skip_setup` can be used without subclassing. """ from abc import ABC @@ -35,10 +40,22 @@ class Node(ABC): - """ - Basic class for node management. This class implements methods that - manage a node, such as information gathering (of CPU/PCI/NIC) and - environment setup. + """The base class for node management. + + It shouldn't be instantiated, but rather subclassed. + It implements common methods to manage any node: + + * Connection to the node, + * Hugepages setup. + + Attributes: + main_session: The primary OS-aware remote session used to communicate with the node. + config: The node configuration. + name: The name of the node. 
+ lcores: The list of logical cores that DTS can use on the node. + It's derived from logical cores present on the node and the test run configuration. + ports: The ports of this node specified in the test run configuration. + virtual_devices: The virtual devices used on the node. """ main_session: OSSession @@ -52,6 +69,17 @@ class Node(ABC): virtual_devices: list[VirtualDevice] def __init__(self, node_config: NodeConfiguration): + """Connect to the node and gather info during initialization. + + Extra gathered information: + + * The list of available logical CPUs. This is then filtered by + the ``lcores`` configuration in the YAML test run configuration file, + * Information about ports from the YAML test run configuration file. + + Args: + node_config: The node's test run configuration. + """ self.config = node_config self.name = node_config.name self._logger = getLogger(self.name) @@ -60,7 +88,7 @@ def __init__(self, node_config: NodeConfiguration): self._logger.info(f"Connected to node: {self.name}") self._get_remote_cpus() - # filter the node lcores according to user config + # filter the node lcores according to the test run configuration self.lcores = LogicalCoreListFilter( self.lcores, LogicalCoreList(self.config.lcores) ).filter() @@ -77,9 +105,14 @@ def _init_ports(self) -> None: self.configure_port_state(port) def set_up_execution(self, execution_config: ExecutionConfiguration) -> None: - """ - Perform the execution setup that will be done for each execution - this node is part of. + """Execution setup steps. + + Configure hugepages and call :meth:`_set_up_execution` where + the rest of the configuration steps (if any) are implemented. + + Args: + execution_config: The execution test run configuration according to which + the setup steps will be taken. """ self._setup_hugepages() self._set_up_execution(execution_config) @@ -88,58 +121,74 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None: self.virtual_devices.append(VirtualDevice(vdev)) def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Optional additional execution setup steps for subclasses. + + Subclasses should override this if they need to add additional execution setup steps. """ def tear_down_execution(self) -> None: - """ - Perform the execution teardown that will be done after each execution - this node is part of concludes. + """Execution teardown steps. + + There are currently no common execution teardown steps common to all DTS node types. """ self.virtual_devices = [] self._tear_down_execution() def _tear_down_execution(self) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Optional additional execution teardown steps for subclasses. + + Subclasses should override this if they need to add additional execution teardown steps. """ def set_up_build_target( self, build_target_config: BuildTargetConfiguration ) -> None: - """ - Perform the build target setup that will be done for each build target - tested on this node. + """Build target setup steps. + + There are currently no common build target setup steps common to all DTS node types. + + Args: + build_target_config: The build target test run configuration according to which + the setup steps will be taken. 
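The paired public/underscored setup and teardown methods in this hunk follow a template-method shape; a minimal standalone mirror of that idea (the class and configuration names below are invented for illustration):

    class BaseNode:
        def set_up_execution(self, config: dict) -> None:
            self._setup_hugepages()          # common step for every node type
            self._set_up_execution(config)   # optional, subclass-specific step

        def _setup_hugepages(self) -> None:
            print("configuring hugepages")

        def _set_up_execution(self, config: dict) -> None:
            """Subclasses override this; the base class intentionally does nothing."""


    class SutLikeNode(BaseNode):
        def _set_up_execution(self, config: dict) -> None:
            print(f"extra SUT-only setup for {config.get('name', 'execution')}")


    SutLikeNode().set_up_execution({"name": "smoke tests"})
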
""" self._set_up_build_target(build_target_config) def _set_up_build_target( self, build_target_config: BuildTargetConfiguration ) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Optional additional build target setup steps for subclasses. + + Subclasses should override this if they need to add additional build target setup steps. """ def tear_down_build_target(self) -> None: - """ - Perform the build target teardown that will be done after each build target - tested on this node. + """Build target teardown steps. + + There are currently no common build target teardown steps common to all DTS node types. """ self._tear_down_build_target() def _tear_down_build_target(self) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Optional additional build target teardown steps for subclasses. + + Subclasses should override this if they need to add additional build target teardown steps. """ def create_session(self, name: str) -> OSSession: - """ - Create and return a new OSSession tailored to the remote OS. + """Create and return a new OS-aware remote session. + + The returned session won't be used by the node creating it. The session must be used by + the caller. The session will be maintained for the entire lifecycle of the node object, + at the end of which the session will be cleaned up automatically. + + Note: + Any number of these supplementary sessions may be created. + + Args: + name: The name of the session. + + Returns: + A new OS-aware remote session. """ session_name = f"{self.name} {name}" connection = create_session( @@ -157,19 +206,19 @@ def create_interactive_shell( privileged: bool = False, app_args: str = "", ) -> InteractiveShellType: - """Create a handler for an interactive session. + """Factory for interactive session handlers. - Instantiate shell_cls according to the remote OS specifics. + Instantiate `shell_cls` according to the remote OS specifics. Args: shell_cls: The class of the shell. - timeout: Timeout for reading output from the SSH channel. If you are - reading from the buffer and don't receive any data within the timeout - it will throw an error. + timeout: Timeout for reading output from the SSH channel. If you are reading from + the buffer and don't receive any data within the timeout it will throw an error. privileged: Whether to run the shell with administrative privileges. app_args: The arguments to be passed to the application. + Returns: - Instance of the desired interactive application. + An instance of the desired interactive application shell. """ if not shell_cls.dpdk_app: shell_cls.path = self.main_session.join_remote_path(shell_cls.path) @@ -186,14 +235,22 @@ def filter_lcores( filter_specifier: LogicalCoreCount | LogicalCoreList, ascending: bool = True, ) -> list[LogicalCore]: - """ - Filter the LogicalCores found on the Node according to - a LogicalCoreCount or a LogicalCoreList. + """Filter the node's logical cores that DTS can use. + + Logical cores that DTS can use are the ones that are present on the node, but filtered + according to the test run configuration. The `filter_specifier` will filter cores from + those logical cores. 
+ + Args: + filter_specifier: Two different filters can be used, one that specifies the number + of logical cores per core, cores per socket and the number of sockets, + and another one that specifies a logical core list. + ascending: If :data:`True`, use cores with the lowest numerical id first and continue + in ascending order. If :data:`False`, start with the highest id and continue + in descending order. This ordering affects which sockets to consider first as well. - If ascending is True, use cores with the lowest numerical id first - and continue in ascending order. If False, start with the highest - id and continue in descending order. This ordering affects which - sockets to consider first as well. + Returns: + The filtered logical cores. """ self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.") return lcore_filter( @@ -203,17 +260,14 @@ def filter_lcores( ).filter() def _get_remote_cpus(self) -> None: - """ - Scan CPUs in the remote OS and store a list of LogicalCores. - """ + """Scan CPUs in the remote OS and store a list of LogicalCores.""" self._logger.info("Getting CPU information.") self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core) def _setup_hugepages(self) -> None: - """ - Setup hugepages on the Node. Different architectures can supply different - amounts of memory for hugepages and numa-based hugepage allocation may need - to be considered. + """Setup hugepages on the node. + + Configure the hugepages only if they're specified in the node's test run configuration. """ if self.config.hugepages: self.main_session.setup_hugepages( @@ -221,8 +275,11 @@ def _setup_hugepages(self) -> None: ) def configure_port_state(self, port: Port, enable: bool = True) -> None: - """ - Enable/disable port. + """Enable/disable `port`. + + Args: + port: The port to enable/disable. + enable: :data:`True` to enable, :data:`False` to disable. """ self.main_session.configure_port_state(port, enable) @@ -232,15 +289,17 @@ def configure_port_ip_address( port: Port, delete: bool = False, ) -> None: - """ - Configure the IP address of a port on this node. + """Add an IP address to `port` on this node. + + Args: + address: The IP address with mask in CIDR format. Can be either IPv4 or IPv6. + port: The port to which to add the address. + delete: If :data:`True`, will delete the address from the port instead of adding it. """ self.main_session.configure_port_ip_address(address, port, delete) def close(self) -> None: - """ - Close all connections and free other resources. - """ + """Close all connections and free other resources.""" if self.main_session: self.main_session.close() for session in self._other_sessions: @@ -249,6 +308,11 @@ def close(self) -> None: @staticmethod def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: + """Skip the decorated function. + + The :option:`--skip-setup` command line argument and the :envvar:`DTS_SKIP_SETUP` + environment variable enable the decorator. + """ if SETTINGS.skip_setup: return lambda *args: None else: @@ -258,6 +322,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: def create_session( node_config: NodeConfiguration, name: str, logger: DTSLOG ) -> OSSession: + """Factory for OS-aware sessions. + + Args: + node_config: The test run configuration of the node to connect to. + name: The name of the session. + logger: The logger instance this session will use. 
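The ``skip_setup`` decorator shown a little above can be pictured with a self-contained toy version; ``SKIP_SETUP`` here merely stands in for the real ``SETTINGS.skip_setup`` / ``DTS_SKIP_SETUP`` lookup:

    from typing import Any, Callable

    SKIP_SETUP = True  # stand-in for the value parsed from the environment/CLI


    def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
        # Swap the decorated function for a no-op when setup steps are skipped.
        return (lambda *args: None) if SKIP_SETUP else func


    @skip_setup
    def copy_dpdk_tarball() -> None:
        print("copying the DPDK tarball")


    copy_dpdk_tarball()  # prints nothing while SKIP_SETUP is True
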
+ """ match node_config.os: case OS.linux: return LinuxSession(node_config, name, logger) From patchwork Wed Nov 8 12:53:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133989 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 001AB432D6; Wed, 8 Nov 2023 13:55:58 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7F0B142E86; Wed, 8 Nov 2023 13:53:53 +0100 (CET) Received: from mail-ed1-f47.google.com (mail-ed1-f47.google.com [209.85.208.47]) by mails.dpdk.org (Postfix) with ESMTP id B411F42E66 for ; Wed, 8 Nov 2023 13:53:47 +0100 (CET) Received: by mail-ed1-f47.google.com with SMTP id 4fb4d7f45d1cf-52bd9ddb741so11577742a12.0 for ; Wed, 08 Nov 2023 04:53:47 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448027; x=1700052827; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=l2+XtX+mO5JQiK3rWOnIzn/DkDWxuXrREag6topOsA4=; b=IDuJt13NNshZR1YXmk6kQGdQnp+J38ovBrxo6hCNpHpx3q4z7t5eT5V2CitwbMdarA lcuuGK3nm3D8qACeL5fv/77/WYHRbGe1DSOZecrqK6N22mLF4/gtwmtaaiDdqLwXruxX 1pnR+4fvy5ugA916XclLhYJb/JU2mGPRrZ4QTGM51hZNtSudpElEmL5S7NuMBAvhehlQ rBQeouOx71vJg4klTowNXyPNwkuyJRiwCT/UGdpjSu4rJls0/8lrj68a4symy1LlKLK8 3wE6LGNCzpq6UqrsTAnmT0ijW+uYRRfCQiCb58Se//iUkw287DQICjIe6UInUkqcBnxN ywPg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448027; x=1700052827; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=l2+XtX+mO5JQiK3rWOnIzn/DkDWxuXrREag6topOsA4=; b=m518W6MpsqA8jv3yGpYtaMIUvpeeEqWUUT4TAFjXKAbGmM549FOlVYJ238aZFFn12p 36eVx/w7/oI5KEhiDMc2bLtoNLmFsrVNKBVO6Eb3EG1qO1msrCzrxrl2dCT5ufardSSY zSkvamvTEBzPrG7Fr1sAbvkZRdpqPLvF2Q5CtqpPUzytMgtSeJ6P6s61GlNFHoWaAe6e HUzfbxgkHQx+8qy/rcvYY2NojhCHEYmYcwnS+V4p5QC0IV6borWRzlhsfAZJ+BbhcWsG cGkGsWPLnMrt2ymTdiQ2W6zQgrgSMPPq60hnDo1fIuxWEizwL7l/g+7OV+7I53sHC9FL Aa1w== X-Gm-Message-State: AOJu0YzWUfB7va7EdAJUKG7IElNLrPP+sb2/o/g6V4xgrqH9ZiJlPzRI 6xAZln+nymLVs7n0P51RlCy+tA== X-Google-Smtp-Source: AGHT+IE+o4vifbZjDUH8cjsKPMEmFtO3pHdEsR2NyW0bfeE65aXBZ+TlrRGxyQ2N7lEsOpN3RfvZzg== X-Received: by 2002:a50:d742:0:b0:540:3a46:cdcd with SMTP id i2-20020a50d742000000b005403a46cdcdmr1386402edj.29.1699448027089; Wed, 08 Nov 2023 04:53:47 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. 
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:46 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 18/23] dts: sut and tg nodes docstring update Date: Wed, 8 Nov 2023 13:53:19 +0100 Message-Id: <20231108125324.191005-18-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/sut_node.py | 219 ++++++++++++++++-------- dts/framework/testbed_model/tg_node.py | 42 +++-- 2 files changed, 170 insertions(+), 91 deletions(-) diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py index 4e33cf02ea..b57d48fd31 100644 --- a/dts/framework/testbed_model/sut_node.py +++ b/dts/framework/testbed_model/sut_node.py @@ -3,6 +3,14 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +"""System under test (DPDK + hardware) node. + +A system under test (SUT) is the combination of DPDK +and the hardware we're testing with DPDK (NICs, crypto and other devices). +An SUT node is where this SUT runs. +""" + + import os import tarfile import time @@ -26,6 +34,11 @@ class EalParameters(object): + """The environment abstraction layer parameters. + + The string representation can be created by converting the instance to a string. + """ + def __init__( self, lcore_list: LogicalCoreList, @@ -35,21 +48,23 @@ def __init__( vdevs: list[VirtualDevice], other_eal_param: str, ): - """ - Generate eal parameters character string; - :param lcore_list: the list of logical cores to use. - :param memory_channels: the number of memory channels to use. - :param prefix: set file prefix string, eg: - prefix='vf' - :param no_pci: switch of disable PCI bus eg: - no_pci=True - :param vdevs: virtual device list, eg: - vdevs=[ - VirtualDevice('net_ring0'), - VirtualDevice('net_ring1') - ] - :param other_eal_param: user defined DPDK eal parameters, eg: - other_eal_param='--single-file-segments' + """Initialize the parameters according to inputs. + + Process the parameters into the format used on the command line. + + Args: + lcore_list: The list of logical cores to use. + memory_channels: The number of memory channels to use. + prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``. + no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``. 
+ vdevs: Virtual devices, e.g.:: + + vdevs=[ + VirtualDevice('net_ring0'), + VirtualDevice('net_ring1') + ] + other_eal_param: user defined DPDK EAL parameters, e.g.: + ``other_eal_param='--single-file-segments'`` """ self._lcore_list = f"-l {lcore_list}" self._memory_channels = f"-n {memory_channels}" @@ -61,6 +76,7 @@ def __init__( self._other_eal_param = other_eal_param def __str__(self) -> str: + """Create the EAL string.""" return ( f"{self._lcore_list} " f"{self._memory_channels} " @@ -72,11 +88,21 @@ def __str__(self) -> str: class SutNode(Node): - """ - A class for managing connections to the System under Test, providing - methods that retrieve the necessary information about the node (such as - CPU, memory and NIC details) and configuration capabilities. - Another key capability is building DPDK according to given build target. + """The system under test node. + + The SUT node extends :class:`Node` with DPDK specific features: + + * DPDK build, + * Gathering of DPDK build info, + * The running of DPDK apps, interactively or one-time execution, + * DPDK apps cleanup. + + The :option:`--tarball` command line argument and the :envvar:`DTS_DPDK_TARBALL` + environment variable configure the path to the DPDK tarball + or the git commit ID, tag ID or tree ID to test. + + Attributes: + config: The SUT node configuration """ config: SutNodeConfiguration @@ -93,6 +119,11 @@ class SutNode(Node): _compiler_version: str | None def __init__(self, node_config: SutNodeConfiguration): + """Extend the constructor with SUT node specifics. + + Args: + node_config: The SUT node's test run configuration. + """ super(SutNode, self).__init__(node_config) self._dpdk_prefix_list = [] self._build_target_config = None @@ -111,6 +142,12 @@ def __init__(self, node_config: SutNodeConfiguration): @property def _remote_dpdk_dir(self) -> PurePath: + """The remote DPDK dir. + + This internal property should be set after extracting the DPDK tarball. If it's not set, + that implies the DPDK setup step has been skipped, in which case we can guess where + a previous build was located. + """ if self.__remote_dpdk_dir is None: self.__remote_dpdk_dir = self._guess_dpdk_remote_dir() return self.__remote_dpdk_dir @@ -121,6 +158,11 @@ def _remote_dpdk_dir(self, value: PurePath) -> None: @property def remote_dpdk_build_dir(self) -> PurePath: + """The remote DPDK build directory. + + This is the directory where DPDK was built. + We assume it was built in a subdirectory of the extracted tarball. + """ if self._build_target_config: return self.main_session.join_remote_path( self._remote_dpdk_dir, self._build_target_config.name @@ -130,6 +172,7 @@ def remote_dpdk_build_dir(self) -> PurePath: @property def dpdk_version(self) -> str: + """Last built DPDK version.""" if self._dpdk_version is None: self._dpdk_version = self.main_session.get_dpdk_version( self._remote_dpdk_dir @@ -138,12 +181,14 @@ def dpdk_version(self) -> str: @property def node_info(self) -> NodeInfo: + """Additional node information.""" if self._node_info is None: self._node_info = self.main_session.get_node_info() return self._node_info @property def compiler_version(self) -> str: + """The node's compiler version.""" if self._compiler_version is None: if self._build_target_config is not None: self._compiler_version = self.main_session.get_compiler_version( @@ -158,6 +203,11 @@ def compiler_version(self) -> str: return self._compiler_version def get_build_target_info(self) -> BuildTargetInfo: + """Get additional build target information. 
+ + Returns: + The build target information, + """ return BuildTargetInfo( dpdk_version=self.dpdk_version, compiler_version=self.compiler_version ) @@ -168,8 +218,9 @@ def _guess_dpdk_remote_dir(self) -> PurePath: def _set_up_build_target( self, build_target_config: BuildTargetConfiguration ) -> None: - """ - Setup DPDK on the SUT node. + """Setup DPDK on the SUT node. + + Additional build target setup steps on top of those in :class:`Node`. """ # we want to ensure that dpdk_version and compiler_version is reset for new # build targets @@ -182,9 +233,7 @@ def _set_up_build_target( def _configure_build_target( self, build_target_config: BuildTargetConfiguration ) -> None: - """ - Populate common environment variables and set build target config. - """ + """Populate common environment variables and set build target config.""" self._env_vars = {} self._build_target_config = build_target_config self._env_vars.update( @@ -199,9 +248,7 @@ def _configure_build_target( @Node.skip_setup def _copy_dpdk_tarball(self) -> None: - """ - Copy to and extract DPDK tarball on the SUT node. - """ + """Copy to and extract DPDK tarball on the SUT node.""" self._logger.info("Copying DPDK tarball to SUT.") self.main_session.copy_to(SETTINGS.dpdk_tarball_path, self._remote_tmp_dir) @@ -232,8 +279,9 @@ def _copy_dpdk_tarball(self) -> None: @Node.skip_setup def _build_dpdk(self) -> None: - """ - Build DPDK. Uses the already configured target. Assumes that the tarball has + """Build DPDK. + + Uses the already configured target. Assumes that the tarball has already been copied to and extracted on the SUT node. """ self.main_session.build_dpdk( @@ -244,15 +292,19 @@ def _build_dpdk(self) -> None: ) def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePath: - """ - Build one or all DPDK apps. Requires DPDK to be already built on the SUT node. - When app_name is 'all', build all example apps. - When app_name is any other string, tries to build that example app. - Return the directory path of the built app. If building all apps, return - the path to the examples directory (where all apps reside). - The meson_dpdk_args are keyword arguments - found in meson_option.txt in root DPDK directory. Do not use -D with them, - for example: enable_kmods=True. + """Build one or all DPDK apps. + + Requires DPDK to be already built on the SUT node. + + Args: + app_name: The name of the DPDK app to build. + When `app_name` is ``all``, build all example apps. + meson_dpdk_args: The arguments found in ``meson_options.txt`` in root DPDK directory. + Do not use ``-D`` with them. + + Returns: + The directory path of the built app. If building all apps, return + the path to the examples directory (where all apps reside). """ self.main_session.build_dpdk( self._env_vars, @@ -273,9 +325,7 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa ) def kill_cleanup_dpdk_apps(self) -> None: - """ - Kill all dpdk applications on the SUT. Cleanup hugepages. 
- """ + """Kill all dpdk applications on the SUT, then clean up hugepages.""" if self._dpdk_kill_session and self._dpdk_kill_session.is_alive(): # we can use the session if it exists and responds self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list) @@ -294,33 +344,34 @@ def create_eal_parameters( vdevs: list[VirtualDevice] | None = None, other_eal_param: str = "", ) -> "EalParameters": - """ - Generate eal parameters character string; - :param lcore_filter_specifier: a number of lcores/cores/sockets to use - or a list of lcore ids to use. - The default will select one lcore for each of two cores - on one socket, in ascending order of core ids. - :param ascending_cores: True, use cores with the lowest numerical id first - and continue in ascending order. If False, start with the - highest id and continue in descending order. This ordering - affects which sockets to consider first as well. - :param prefix: set file prefix string, eg: - prefix='vf' - :param append_prefix_timestamp: if True, will append a timestamp to - DPDK file prefix. - :param no_pci: switch of disable PCI bus eg: - no_pci=True - :param vdevs: virtual device list, eg: - vdevs=[ - VirtualDevice('net_ring0'), - VirtualDevice('net_ring1') - ] - :param other_eal_param: user defined DPDK eal parameters, eg: - other_eal_param='--single-file-segments' - :return: eal param string, eg: - '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420'; - """ + """Compose the EAL parameters. + + Process the list of cores and the DPDK prefix and pass that along with + the rest of the arguments. + Args: + lcore_filter_specifier: A number of lcores/cores/sockets to use + or a list of lcore ids to use. + The default will select one lcore for each of two cores + on one socket, in ascending order of core ids. + ascending_cores: Sort cores in ascending order (lowest to highest IDs). + If :data:`False`, sort in descending order. + prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``. + append_prefix_timestamp: If :data:`True`, will append a timestamp to DPDK file prefix. + no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``. + vdevs: Virtual devices, e.g.:: + + vdevs=[ + VirtualDevice('net_ring0'), + VirtualDevice('net_ring1') + ] + other_eal_param: user defined DPDK EAL parameters, e.g.: + ``other_eal_param='--single-file-segments'``. + + Returns: + An EAL param string, such as + ``-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420``. + """ lcore_list = LogicalCoreList( self.filter_lcores(lcore_filter_specifier, ascending_cores) ) @@ -346,14 +397,29 @@ def create_eal_parameters( def run_dpdk_app( self, app_path: PurePath, eal_args: "EalParameters", timeout: float = 30 ) -> CommandResult: - """ - Run DPDK application on the remote node. + """Run DPDK application on the remote node. + + The application is not run interactively - the command that starts the application + is executed and then the call waits for it to finish execution. + + Args: + app_path: The remote path to the DPDK application. + eal_args: EAL parameters to run the DPDK application with. + timeout: Wait at most this long in seconds to execute the command. + + Returns: + The result of the DPDK app execution. """ return self.main_session.send_command( f"{app_path} {eal_args}", timeout, privileged=True, verify=True ) def configure_ipv4_forwarding(self, enable: bool) -> None: + """Enable/disable IPv4 forwarding on the node. + + Args: + enable: If :data:`True`, enable the forwarding, otherwise disable it. 
+ """ self.main_session.configure_ipv4_forwarding(enable) def create_interactive_shell( @@ -363,9 +429,13 @@ def create_interactive_shell( privileged: bool = False, eal_parameters: EalParameters | str | None = None, ) -> InteractiveShellType: - """Factory method for creating a handler for an interactive session. + """Extend the factory for interactive session handlers. + + The extensions are SUT node specific: - Instantiate shell_cls according to the remote OS specifics. + * The default for `eal_parameters`, + * The interactive shell path `shell_cls.path` is prepended with path to the remote + DPDK build directory for DPDK apps. Args: shell_cls: The class of the shell. @@ -375,9 +445,10 @@ def create_interactive_shell( privileged: Whether to run the shell with administrative privileges. eal_parameters: List of EAL parameters to use to launch the app. If this isn't provided or an empty string is passed, it will default to calling - create_eal_parameters(). + :meth:`create_eal_parameters`. + Returns: - Instance of the desired interactive application. + An instance of the desired interactive application shell. """ if not eal_parameters: eal_parameters = self.create_eal_parameters() diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py index 166eb8430e..69eb33ccb1 100644 --- a/dts/framework/testbed_model/tg_node.py +++ b/dts/framework/testbed_model/tg_node.py @@ -5,13 +5,8 @@ """Traffic generator node. -This is the node where the traffic generator resides. -The distinction between a node and a traffic generator is as follows: -A node is a host that DTS connects to. It could be a baremetal server, -a VM or a container. -A traffic generator is software running on the node. -A traffic generator node is a node running a traffic generator. -A node can be a traffic generator node as well as system under test node. +A traffic generator (TG) generates traffic that's sent towards the SUT node. +A TG node is where the TG runs. """ from scapy.packet import Packet # type: ignore[import] @@ -24,13 +19,16 @@ class TGNode(Node): - """Manage connections to a node with a traffic generator. + """The traffic generator node. - Apart from basic node management capabilities, the Traffic Generator node has - specialized methods for handling the traffic generator running on it. + The TG node extends :class:`Node` with TG specific features: - Arguments: - node_config: The user configuration of the traffic generator node. + * Traffic generator initialization, + * The sending of traffic and receiving packets, + * The sending of traffic without receiving packets. + + Not all traffic generators are capable of capturing traffic, which is why there + must be a way to send traffic without that. Attributes: traffic_generator: The traffic generator running on the node. @@ -39,6 +37,13 @@ class TGNode(Node): traffic_generator: CapturingTrafficGenerator def __init__(self, node_config: TGNodeConfiguration): + """Extend the constructor with TG node specifics. + + Initialize the traffic generator on the TG node. + + Args: + node_config: The TG node's test run configuration. + """ super(TGNode, self).__init__(node_config) self.traffic_generator = create_traffic_generator( self, node_config.traffic_generator @@ -52,17 +57,17 @@ def send_packet_and_capture( receive_port: Port, duration: float = 1, ) -> list[Packet]: - """Send a packet, return received traffic. + """Send `packet`, return received traffic. 
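# For illustration only: a hedged sketch of the extended factory described above, assuming
# an initialized SutNode `sut_node`. `MyAppShell` stands in for an InteractiveShell subclass
# (a hypothetical name, not part of this patch). With `eal_parameters` left out, the method
# falls back to create_eal_parameters() and prepends the remote DPDK build directory to the
# shell's path.
shell = sut_node.create_interactive_shell(MyAppShell, privileged=True)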
- Send a packet on the send_port and then return all traffic captured - on the receive_port for the given duration. Also record the captured traffic + Send `packet` on `send_port` and then return all traffic captured + on `receive_port` for the given duration. Also record the captured traffic in a pcap file. Args: packet: The packet to send. send_port: The egress port on the TG node. receive_port: The ingress port in the TG node. - duration: Capture traffic for this amount of time after sending the packet. + duration: Capture traffic for this amount of time after sending `packet`. Returns: A list of received packets. May be empty if no packets are captured. @@ -72,6 +77,9 @@ def send_packet_and_capture( ) def close(self) -> None: - """Free all resources used by the node""" + """Free all resources used by the node. + + This extends the superclass method with TG cleanup. + """ self.traffic_generator.close() super(TGNode, self).close() From patchwork Wed Nov 8 12:53:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133990 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6B251432D6; Wed, 8 Nov 2023 13:56:05 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A421E42E89; Wed, 8 Nov 2023 13:53:54 +0100 (CET) Received: from mail-ed1-f47.google.com (mail-ed1-f47.google.com [209.85.208.47]) by mails.dpdk.org (Postfix) with ESMTP id 898D042E21 for ; Wed, 8 Nov 2023 13:53:48 +0100 (CET) Received: by mail-ed1-f47.google.com with SMTP id 4fb4d7f45d1cf-5401bab7525so11718433a12.2 for ; Wed, 08 Nov 2023 04:53:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448028; x=1700052828; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=g4hWHvKq7wmrNScwYf5rL6W0ga2jchBf0GSHuvS0F2E=; b=l3wnscgFXroUvDMcTeNr6ERNs2IGU37y997mKFPdP/u9yLXncBKvHKqhC4G9fhRGlD GEh3DZir/NRPNA53aFm5mijcSZyXP80S932vDBdQ4wqQFnPPsw8p1wf3sUs+q/Jrfz+I 7pbJWEYJc/zwdj0dkQ56no793teYHsuuDBBShRO1cWhLvdpA1QzCramTh1/kYdPi0UFS h+D6/Mm99+LFKyMXyHwlTaCH+cKf+Iw9G7aPvV10WyJVcASWPnfyDviWG9dSKgCwx3gj 2Q7j2UzWfYBN0NScTqJ3OEmlhBVHdD0WFW4N3eZiWnvp2JLPOjCLHsrrVLQcRKvHgja5 Nz7g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448028; x=1700052828; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=g4hWHvKq7wmrNScwYf5rL6W0ga2jchBf0GSHuvS0F2E=; b=bm1Yjs1mgtep3BQ9Fzuh2HV20AHzC6DfXBhHhtvEqQndR37fRkJDKOvVvVQaOSXrU6 6orT6ugDBdSNuHGSJPrrx9In1doqJ9DgePhWmH1v5o+oYEtCXnlU7DjJ2YZqRrbahG6E QBRn5i6EIa2N3kKIgEWIZMuLyR6pp4UCAr3RwXddiFJYK3pJniLRgFYG9vw8Z6Z4gKU3 qN7KZTw7Jiofqk53GUpfhJ3wBuwrrnn/PL0M9CDOUTJSZC9au8bF3oME9OatzvkRFEfo GYETfX5fCk5GFovfGu/UxRdmMl9z0/I0YsumskGgj9W/z1+E85ZEuNMWlSlXi8IPWRbS uH2Q== X-Gm-Message-State: AOJu0YxfW3LKEWIQAX8c9IFMIoc507uYYKIoayWmCKxXsSj2+KMqPxry v1MAaVjaOR+9OKqH4SBbk+Oicw== X-Google-Smtp-Source: AGHT+IFJd+8WdTSO4LM7d4I4ABpVK5A7ttTek+YZeJ3cuz0wn3E+ss0f9i8fiK8Idnp8LHN/VvAhGg== X-Received: by 2002:a50:d6d4:0:b0:543:783:e8c5 with SMTP id 
l20-20020a50d6d4000000b005430783e8c5mr1313595edj.18.1699448028139; Wed, 08 Nov 2023 04:53:48 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. [81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:47 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 19/23] dts: base traffic generators docstring update Date: Wed, 8 Nov 2023 13:53:20 +0100 Message-Id: <20231108125324.191005-19-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- .../traffic_generator/__init__.py | 22 ++++++++- .../capturing_traffic_generator.py | 46 +++++++++++-------- .../traffic_generator/traffic_generator.py | 33 +++++++------ 3 files changed, 68 insertions(+), 33 deletions(-) diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py index 11bfa1ee0f..51cca77da4 100644 --- a/dts/framework/testbed_model/traffic_generator/__init__.py +++ b/dts/framework/testbed_model/traffic_generator/__init__.py @@ -1,6 +1,19 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""DTS traffic generators. + +A traffic generator is capable of generating traffic and then monitor returning traffic. +A traffic generator may just count the number of received packets +and it may additionally capture individual packets. + +A traffic generator may be software running on generic hardware or it could be specialized hardware. + +The traffic generators that only count the number of received packets are suitable only for +performance testing. In functional testing, we need to be able to dissect each arrived packet +and a capturing traffic generator is required. +""" + from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType from framework.exception import ConfigurationError from framework.testbed_model.node import Node @@ -12,8 +25,15 @@ def create_traffic_generator( tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig ) -> CapturingTrafficGenerator: - """A factory function for creating traffic generator object from user config.""" + """The factory function for creating traffic generator objects from the test run configuration. + + Args: + tg_node: The traffic generator node where the created traffic generator will be running. + traffic_generator_config: The traffic generator config. + Returns: + A traffic generator capable of capturing received packets. 
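# For illustration only: a hedged sketch of calling the factory documented above by hand,
# assuming a connected TG node `tg_node` and its ScapyTrafficGeneratorConfig `tg_config`
# (both names are assumptions). In the framework itself, TGNode.__init__ makes this call.
traffic_generator = create_traffic_generator(tg_node, tg_config)
assert traffic_generator.is_capturing  # the factory returns a capturing traffic generator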
+ """ match traffic_generator_config.traffic_generator_type: case TrafficGeneratorType.SCAPY: return ScapyTrafficGenerator(tg_node, traffic_generator_config) diff --git a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py index e521211ef0..b0a43ad003 100644 --- a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py @@ -23,19 +23,22 @@ def _get_default_capture_name() -> str: - """ - This is the function used for the default implementation of capture names. - """ return str(uuid.uuid4()) class CapturingTrafficGenerator(TrafficGenerator): """Capture packets after sending traffic. - A mixin interface which enables a packet generator to declare that it can capture + The intermediary interface which enables a packet generator to declare that it can capture packets and return them to the user. + Similarly to + :class:`~framework.testbed_model.traffic_generator.traffic_generator.TrafficGenerator`, + this class exposes the public methods specific to capturing traffic generators and defines + a private method that must implement the traffic generation and capturing logic in subclasses. + The methods of capturing traffic generators obey the following workflow: + 1. send packets 2. capture packets 3. write the capture to a .pcap file @@ -44,6 +47,7 @@ class CapturingTrafficGenerator(TrafficGenerator): @property def is_capturing(self) -> bool: + """This traffic generator can capture traffic.""" return True def send_packet_and_capture( @@ -54,11 +58,12 @@ def send_packet_and_capture( duration: float, capture_name: str = _get_default_capture_name(), ) -> list[Packet]: - """Send a packet, return received traffic. + """Send `packet` and capture received traffic. + + Send `packet` on `send_port` and then return all traffic captured + on `receive_port` for the given `duration`. - Send a packet on the send_port and then return all traffic captured - on the receive_port for the given duration. Also record the captured traffic - in a pcap file. + The captured traffic is recorded in the `capture_name`.pcap file. Args: packet: The packet to send. @@ -68,7 +73,7 @@ def send_packet_and_capture( capture_name: The name of the .pcap file where to store the capture. Returns: - A list of received packets. May be empty if no packets are captured. + The received packets. May be empty if no packets are captured. """ return self.send_packets_and_capture( [packet], send_port, receive_port, duration, capture_name @@ -82,11 +87,14 @@ def send_packets_and_capture( duration: float, capture_name: str = _get_default_capture_name(), ) -> list[Packet]: - """Send packets, return received traffic. + """Send `packets` and capture received traffic. - Send packets on the send_port and then return all traffic captured - on the receive_port for the given duration. Also record the captured traffic - in a pcap file. + Send `packets` on `send_port` and then return all traffic captured + on `receive_port` for the given `duration`. + + The captured traffic is recorded in the `capture_name`.pcap file. The target directory + can be configured with the :option:`--output-dir` command line argument or + the :envvar:`DTS_OUTPUT_DIR` environment variable. Args: packets: The packets to send. @@ -96,7 +104,7 @@ def send_packets_and_capture( capture_name: The name of the .pcap file where to store the capture. 
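# For illustration only: a hedged sketch of the capturing workflow documented above
# (send, capture on the receive port for `duration`, record to a `capture_name`.pcap file),
# assuming a CapturingTrafficGenerator `tg` and two Port objects `egress`/`ingress`
# (all three names are assumptions, not part of this patch).
from scapy.layers.inet import IP, UDP
from scapy.layers.l2 import Ether

received = tg.send_packet_and_capture(
    Ether() / IP() / UDP(),
    send_port=egress,
    receive_port=ingress,
    duration=1,
    capture_name="smoke_udp",
)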
Returns: - A list of received packets. May be empty if no packets are captured. + The received packets. May be empty if no packets are captured. """ self._logger.debug(get_packet_summaries(packets)) self._logger.debug( @@ -124,10 +132,12 @@ def _send_packets_and_capture( receive_port: Port, duration: float, ) -> list[Packet]: - """ - The extended classes must implement this method which - sends packets on send_port and receives packets on the receive_port - for the specified duration. It must be able to handle no received packets. + """The implementation of :method:`send_packets_and_capture`. + + The subclasses must implement this method which sends `packets` on `send_port` + and receives packets on `receive_port` for the specified `duration`. + + It must be able to handle no received packets. """ def _write_capture_from_packets( diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py index ea7c3963da..ed396c6a2f 100644 --- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py @@ -22,7 +22,8 @@ class TrafficGenerator(ABC): """The base traffic generator. - Defines the few basic methods that each traffic generator must implement. + Exposes the common public methods of all traffic generators and defines private methods + that must implement the traffic generation logic in subclasses. """ _config: TrafficGeneratorConfig @@ -30,6 +31,12 @@ class TrafficGenerator(ABC): _logger: DTSLOG def __init__(self, tg_node: Node, config: TrafficGeneratorConfig): + """Initialize the traffic generator. + + Args: + tg_node: The traffic generator node where the created traffic generator will be running. + config: The traffic generator's test run configuration. + """ self._config = config self._tg_node = tg_node self._logger = getLogger( @@ -37,9 +44,9 @@ def __init__(self, tg_node: Node, config: TrafficGeneratorConfig): ) def send_packet(self, packet: Packet, port: Port) -> None: - """Send a packet and block until it is fully sent. + """Send `packet` and block until it is fully sent. - What fully sent means is defined by the traffic generator. + Send `packet` on `port`, then wait until `packet` is fully sent. Args: packet: The packet to send. @@ -48,9 +55,9 @@ def send_packet(self, packet: Packet, port: Port) -> None: self.send_packets([packet], port) def send_packets(self, packets: list[Packet], port: Port) -> None: - """Send packets and block until they are fully sent. + """Send `packets` and block until they are fully sent. - What fully sent means is defined by the traffic generator. + Send `packets` on `port`, then wait until `packets` are fully sent. Args: packets: The packets to send. @@ -62,19 +69,17 @@ def send_packets(self, packets: list[Packet], port: Port) -> None: @abstractmethod def _send_packets(self, packets: list[Packet], port: Port) -> None: - """ - The extended classes must implement this method which - sends packets on send_port. The method should block until all packets - are fully sent. + """The implementation of :method:`send_packets`. + + The subclasses must implement this method which sends `packets` on `port`. + The method should block until all `packets` are fully sent. + + What full sent means is defined by the traffic generator. """ @property def is_capturing(self) -> bool: - """Whether this traffic generator can capture traffic. 
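# For illustration only: a hedged sketch of the subclassing contract described above.
# The public send_packet()/send_packets() methods stay in the base class; a subclass only
# fills in the private _send_packets() hook (plus the remaining abstract members, e.g. close()).
# The Port import path and the logging-only behaviour are assumptions, not part of this patch.
from scapy.packet import Packet  # type: ignore[import]

from framework.testbed_model.port import Port  # assumed location of Port
from framework.testbed_model.traffic_generator.traffic_generator import TrafficGenerator


class LoggingOnlyTrafficGenerator(TrafficGenerator):
    """A sketch subclass that logs instead of putting anything on the wire."""

    def _send_packets(self, packets: list[Packet], port: Port) -> None:
        # A real implementation would send the packets; this one only logs them.
        self._logger.info(f"Would send {len(packets)} packet(s) on port {port}.")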
- - Returns: - True if the traffic generator can capture traffic, False otherwise. - """ + """This traffic generator can't capture traffic.""" return False @abstractmethod From patchwork Wed Nov 8 12:53:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133991 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 131F8432D6; Wed, 8 Nov 2023 13:56:14 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 424D642E94; Wed, 8 Nov 2023 13:53:56 +0100 (CET) Received: from mail-ed1-f49.google.com (mail-ed1-f49.google.com [209.85.208.49]) by mails.dpdk.org (Postfix) with ESMTP id 13EB342E6C for ; Wed, 8 Nov 2023 13:53:50 +0100 (CET) Received: by mail-ed1-f49.google.com with SMTP id 4fb4d7f45d1cf-5435336ab0bso11846808a12.1 for ; Wed, 08 Nov 2023 04:53:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448030; x=1700052830; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ic2HSjaaUtpFow78DherjJZEUbGpTzSWTYcIUK3zrxE=; b=u7KMTKAbdwvGvJ3VrvRP+xOQ1jd5nKo2JX+9ldUjlDOny5o+VPt0EtmK9EPwCQApvB QHpAD+XSOqbNNj4E0625pqTkEIqtWqs563SZD16R8JaigtLrbirtJyc8/3/irj5lIOaI eeilWJsKCdUDGnMgcm/EYk7f46HJFgTBQ5w1fF+QFWG/btjC56GsqVxxWw/s3a4Cel6E /Ov5iq4O7Y02uPEnDSrd2ENM1F66B+/+ywrStY8pEKYi38asnZUfEAMBK6UK3L+kkUsI 0i2qmQmPSBN+uJB5C2Tv4dTCXxZtiByNwJEhyjR9QEgob3sKYMSs93/0SrHAU7raK/QJ z+JA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448030; x=1700052830; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ic2HSjaaUtpFow78DherjJZEUbGpTzSWTYcIUK3zrxE=; b=iAkdB1xje6YfaAXJ5xQlomrLhG+COjWRo0EJ4fZnmk89zdx+DF8o2uz0NNnWwWuu7u G/14wiQgVDjFvDuknGwT4gxsX3lrYRJRrNZjIKY6m+Zn979Rzh6wOJcFCRgQ5wpjggM9 s6BtiV6uZseGMCUxJgPVAd+kTMDWmnKmuTyxhJSRaBowWBlIcmJxZLEIeQuZRylm6xzg 0qLnSnyKRaE/fNSoxJZmYDGLq4ie1yKdVf8ZqyqONgqg8EF6BkfhgK7PmiY/eCF80q8G nlu/sMn86iCo5v4I5dCICYZhcJlsL2PEdUlgrEPm+ID7x352xbawTrpsDJPxyTFGUHkl 4yIw== X-Gm-Message-State: AOJu0YwKqVMFp3quSBGSkcigmejYRcOHHQYyVxztU9/HDNMIh2vhEcA2 pWssQigjnc7AzKE/9WbOUmIvfg== X-Google-Smtp-Source: AGHT+IEON4/LGi/f3eNQWq6Zd+dqj0gC44AFscI7dbHduZFMCHGqoOQ7Iluy7qJo6if3qiulJ5Bkvw== X-Received: by 2002:a50:cc99:0:b0:543:512c:49b8 with SMTP id q25-20020a50cc99000000b00543512c49b8mr1380867edi.1.1699448029390; Wed, 08 Nov 2023 04:53:49 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. 
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:48 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 20/23] dts: scapy tg docstring update Date: Wed, 8 Nov 2023 13:53:21 +0100 Message-Id: <20231108125324.191005-20-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- .../testbed_model/traffic_generator/scapy.py | 91 +++++++++++-------- 1 file changed, 54 insertions(+), 37 deletions(-) diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py index 51864b6e6b..d0fe03055a 100644 --- a/dts/framework/testbed_model/traffic_generator/scapy.py +++ b/dts/framework/testbed_model/traffic_generator/scapy.py @@ -2,14 +2,15 @@ # Copyright(c) 2022 University of New Hampshire # Copyright(c) 2023 PANTHEON.tech s.r.o. -"""Scapy traffic generator. +"""The Scapy traffic generator. -Traffic generator used for functional testing, implemented using the Scapy library. +A traffic generator used for functional testing, implemented with +`the Scapy library `_. The traffic generator uses an XML-RPC server to run Scapy on the remote TG node. -The XML-RPC server runs in an interactive remote SSH session running Python console, -where we start the server. The communication with the server is facilitated with -a local server proxy. +The traffic generator uses the :mod:`xmlrpc.server` module to run an XML-RPC server +in an interactive remote Python SSH session. The communication with the server is facilitated +with a local server proxy from the :mod:`xmlrpc.client` module. """ import inspect @@ -69,20 +70,20 @@ def scapy_send_packets_and_capture( recv_iface: str, duration: float, ) -> list[bytes]: - """RPC function to send and capture packets. + """The RPC function to send and capture packets. - The function is meant to be executed on the remote TG node. + The function is meant to be executed on the remote TG node via the server proxy. Args: xmlrpc_packets: The packets to send. These need to be converted to - xmlrpc.client.Binary before sending to the remote server. + :class:`~xmlrpc.client.Binary` objects before sending to the remote server. send_iface: The logical name of the egress interface. recv_iface: The logical name of the ingress interface. duration: Capture for this amount of time, in seconds. Returns: A list of bytes. Each item in the list represents one packet, which needs - to be converted back upon transfer from the remote node. + to be converted back upon transfer from the remote node. 
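# For illustration only: two self-contained sketches of the RPC plumbing described in this
# patch. Neither involves a real server; they only show the data conversions. The first
# mirrors the xmlrpc.client.Binary round trip of packets described above, the second the
# marshal-based function transfer that QuittableXMLRPCServer.add_rpc_function() (later in
# this patch) builds on.
import marshal
import types
import xmlrpc.client

from scapy.layers.l2 import Ether

# 1) Packets travel as Binary objects and come back as raw bytes to rebuild into Scapy packets.
packet = Ether(dst="ff:ff:ff:ff:ff:ff", src="02:00:00:00:00:01") / b"payload"
outbound = xmlrpc.client.Binary(bytes(packet))  # what the local proxy sends to the server
rebuilt_packet = Ether(outbound.data)           # what the client does with the returned bytes
assert bytes(rebuilt_packet) == bytes(packet)


# 2) Functions travel as marshalled code objects and are rebuilt into callables on the server.
def greet(name: str) -> str:
    return f"hello, {name}"


function_bytes = xmlrpc.client.Binary(marshal.dumps(greet.__code__))  # client side
rebuilt_greet = types.FunctionType(marshal.loads(function_bytes.data), globals(), "greet")
assert rebuilt_greet("DTS") == "hello, DTS"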
""" scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets] sniffer = scapy.all.AsyncSniffer( @@ -98,19 +99,15 @@ def scapy_send_packets_and_capture( def scapy_send_packets( xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str ) -> None: - """RPC function to send packets. + """The RPC function to send packets. - The function is meant to be executed on the remote TG node. - It doesn't return anything, only sends packets. + The function is meant to be executed on the remote TG node via the server proxy. + It only sends `xmlrpc_packets`, without capturing them. Args: xmlrpc_packets: The packets to send. These need to be converted to - xmlrpc.client.Binary before sending to the remote server. + :class:`~xmlrpc.client.Binary` objects before sending to the remote server. send_iface: The logical name of the egress interface. - - Returns: - A list of bytes. Each item in the list represents one packet, which needs - to be converted back upon transfer from the remote node. """ scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets] scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True, verbose=True) @@ -130,11 +127,19 @@ def scapy_send_packets( class QuittableXMLRPCServer(SimpleXMLRPCServer): - """Basic XML-RPC server that may be extended - by functions serializable by the marshal module. + r"""Basic XML-RPC server. + + The server may be augmented by functions serializable by the :mod:`marshal` module. """ def __init__(self, *args, **kwargs): + """Extend the XML-RPC server initialization. + + Args: + args: The positional arguments that will be passed to the superclass's constructor. + kwargs: The keyword arguments that will be passed to the superclass's constructor. + The `allow_none` argument will be set to ``True``. + """ kwargs["allow_none"] = True super().__init__(*args, **kwargs) self.register_introspection_functions() @@ -142,13 +147,12 @@ def __init__(self, *args, **kwargs): self.register_function(self.add_rpc_function) def quit(self) -> None: + """Quit the server.""" self._BaseServer__shutdown_request = True return None def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None: - """Add a function to the server. - - This is meant to be executed remotely. + """Add a function to the server from the local server proxy. Args: name: The name of the function. @@ -159,6 +163,11 @@ def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> N self.register_function(function) def serve_forever(self, poll_interval: float = 0.5) -> None: + """Extend the superclass method with an additional print. + + Once executed in the local server proxy, the print gives us a clear string to expect + when starting the server. The print means the function was executed on the XML-RPC server. + """ print("XMLRPC OK") super().serve_forever(poll_interval) @@ -166,19 +175,12 @@ def serve_forever(self, poll_interval: float = 0.5) -> None: class ScapyTrafficGenerator(CapturingTrafficGenerator): """Provides access to scapy functions via an RPC interface. - The traffic generator first starts an XML-RPC on the remote TG node. - Then it populates the server with functions which use the Scapy library - to send/receive traffic. - - Any packets sent to the remote server are first converted to bytes. - They are received as xmlrpc.client.Binary objects on the server side. 
- When the server sends the packets back, they are also received as - xmlrpc.client.Binary object on the client side, are converted back to Scapy - packets and only then returned from the methods. + The class extends the base with remote execution of scapy functions. - Arguments: - tg_node: The node where the traffic generator resides. - config: The user configuration of the traffic generator. + Any packets sent to the remote server are first converted to bytes. They are received as + :class:`~xmlrpc.client.Binary` objects on the server side. When the server sends the packets + back, they are also received as :class:`~xmlrpc.client.Binary` objects on the client side, are + converted back to :class:`scapy.packet.Packet` objects and only then returned from the methods. Attributes: session: The exclusive interactive remote session created by the Scapy @@ -192,6 +194,22 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator): _config: ScapyTrafficGeneratorConfig def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig): + """Extend the constructor with Scapy TG specifics. + + The traffic generator first starts an XML-RPC on the remote `tg_node`. + Then it populates the server with functions which use the Scapy library + to send/receive traffic: + + * :func:`scapy_send_packets_and_capture` + * :func:`scapy_send_packets` + + To enable verbose logging from the xmlrpc client, use the :option:`--verbose` + command line argument or the :envvar:`DTS_VERBOSE` environment variable. + + Args: + tg_node: The node where the traffic generator resides. + config: The traffic generator's test run configuration. + """ super().__init__(tg_node, config) assert ( @@ -237,10 +255,8 @@ def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None: [line for line in src.splitlines() if not line.isspace() and line != ""] ) - spacing = "\n" * 4 - # execute it in the python terminal - self.session.send_command(spacing + src + spacing) + self.session.send_command(src + "\n") self.session.send_command( f"server = QuittableXMLRPCServer(('0.0.0.0', {listen_port}));" f"server.serve_forever()", @@ -274,6 +290,7 @@ def _send_packets_and_capture( return scapy_packets def close(self) -> None: + """Close the traffic generator.""" try: self.rpc_server_proxy.quit() except ConnectionRefusedError: From patchwork Wed Nov 8 12:53:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133992 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9BEFF432D6; Wed, 8 Nov 2023 13:56:21 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5C12F42E97; Wed, 8 Nov 2023 13:53:57 +0100 (CET) Received: from mail-ej1-f53.google.com (mail-ej1-f53.google.com [209.85.218.53]) by mails.dpdk.org (Postfix) with ESMTP id DC23F42E6D for ; Wed, 8 Nov 2023 13:53:50 +0100 (CET) Received: by mail-ej1-f53.google.com with SMTP id a640c23a62f3a-9dbb3d12aefso1026602166b.0 for ; Wed, 08 Nov 2023 04:53:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448030; x=1700052830; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; 
bh=YtryVehocq8tW4ezjxt3Dsa07CT6OR4IVjMNFTmf+JQ=; b=TTFiKRsQe4jeZir5taWxjRcAdYBOcuWjTP6GFvdnRaf7wr6ZWy09MUS5jsacH9V3ym Gls+/10tzBmbqdzfeib6YOeV5a+mLk6AbPNIb+R+b+u8MOLv9ZWHzfNLwc/JQ7xuweEp YN9/trpk5bU/PlYRwMbW4A0GkJldYZAvWvs09PCQpQFr2Jchek3a+mSrnvONgLBbAwPK mJlVh51yhToqofh4VPEDqdlHOyEtfuK+kXucSzdp77VkSPAepegeK30x2atpdqNDOn/B AMBctV7OrXBk0gcw/YPxjFd3LvntiBrqY+RpyFAB2VULgmfG84Bqi/+raaZR6KIYM/yg k8nA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448030; x=1700052830; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=YtryVehocq8tW4ezjxt3Dsa07CT6OR4IVjMNFTmf+JQ=; b=a1WeGcvnLmLWgna0Mb+l/AQV0nCEIAu5sbAZOfujXiT68+vqMfHGKiJmCKc9KQqV13 TCUqiOVC4l2hwfvSZ2Q8iBn8WWLNhpK26FIE2duPyBbUGkKNBSOrrlJwV8KiCMzaKsjT hfYoUqIylgQANMu3Vf4OLqUlRwXOWV9U34VevXRSluzV2Thr4+vB812NlNGJTwah9/UZ V6B1SzXQHl5dmYTq328nHi5JbITPQhBEH4AHS7RrheRgI43UKVisiDO7Y2yhODLMqsC5 /cBME7/HOk6/NMFchZmn5Fr3eff9o/H/XgXwPRtPiRGp571t5lNx8i+BBhsiwy5jztxI agUw== X-Gm-Message-State: AOJu0YxRdlSR68aqCGMRY8zbfdjt2LJ5VkBBSCFGOU0CBdkZYphMI5cY Pxt4Xx3a3P60i0HStrX0YuHk+g== X-Google-Smtp-Source: AGHT+IFcLZl3NYgHEKJjS3s8rbGiJDpDaPt3EoGunoNluVOPi7COMJRgszJ/q14VAbVrH1SGwMdmug== X-Received: by 2002:a17:907:c303:b0:9da:b2ca:6c50 with SMTP id tl3-20020a170907c30300b009dab2ca6c50mr1465783ejc.72.1699448030565; Wed, 08 Nov 2023 04:53:50 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. [81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:49 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 21/23] dts: test suites docstring update Date: Wed, 8 Nov 2023 13:53:22 +0100 Message-Id: <20231108125324.191005-21-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/tests/TestSuite_hello_world.py | 16 +++++---- dts/tests/TestSuite_os_udp.py | 16 +++++---- dts/tests/TestSuite_smoke_tests.py | 53 +++++++++++++++++++++++++++--- 3 files changed, 68 insertions(+), 17 deletions(-) diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py index 7e3d95c0cf..662a8f8726 100644 --- a/dts/tests/TestSuite_hello_world.py +++ b/dts/tests/TestSuite_hello_world.py @@ -1,7 +1,8 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2010-2014 Intel Corporation -""" +"""The DPDK hello world app test suite. + Run the helloworld example app and verify it prints a message for each used core. No other EAL parameters apart from cores are used. 
""" @@ -15,22 +16,25 @@ class TestHelloWorld(TestSuite): + """DPDK hello world app test suite.""" + def set_up_suite(self) -> None: - """ + """Set up the test suite. + Setup: Build the app we're about to test - helloworld. """ self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld") def test_hello_world_single_core(self) -> None: - """ + """Single core test case. + Steps: Run the helloworld app on the first usable logical core. Verify: The app prints a message from the used core: "hello from core " """ - # get the first usable core lcore_amount = LogicalCoreCount(1, 1, 1) lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter() @@ -44,14 +48,14 @@ def test_hello_world_single_core(self) -> None: ) def test_hello_world_all_cores(self) -> None: - """ + """All cores test case. + Steps: Run the helloworld app on all usable logical cores. Verify: The app prints a message from all used cores: "hello from core " """ - # get the maximum logical core number eal_para = self.sut_node.create_eal_parameters( lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores) diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py index 9b5f39711d..f99c4d76e3 100644 --- a/dts/tests/TestSuite_os_udp.py +++ b/dts/tests/TestSuite_os_udp.py @@ -1,7 +1,8 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. -""" +"""Basic IPv4 OS routing test suite. + Configure SUT node to route traffic from if1 to if2. Send a packet to the SUT node, verify it comes back on the second port on the TG node. """ @@ -13,22 +14,24 @@ class TestOSUdp(TestSuite): + """IPv4 UDP OS routing test suite.""" + def set_up_suite(self) -> None: - """ + """Set up the test suite. + Setup: Configure SUT ports and SUT to route traffic from if1 to if2. """ - self.configure_testbed_ipv4() def test_os_udp(self) -> None: - """ + """Basic UDP IPv4 traffic test case. + Steps: Send a UDP packet. Verify: The packet with proper addresses arrives at the other TG port. """ - packet = Ether() / IP() / UDP() received_packets = self.send_packet_and_capture(packet) @@ -38,7 +41,8 @@ def test_os_udp(self) -> None: self.verify_packets(expected_packet, received_packets) def tear_down_suite(self) -> None: - """ + """Tear down the test suite. + Teardown: Remove the SUT port configuration configured in setup. """ diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py index 4a269df75b..36ff10a862 100644 --- a/dts/tests/TestSuite_smoke_tests.py +++ b/dts/tests/TestSuite_smoke_tests.py @@ -1,6 +1,17 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 University of New Hampshire +"""Smoke test suite. + +Smoke tests are a class of tests which are used for validating a minimal set of important features. +These are the most important features without which (or when they're faulty) the software wouldn't +work properly. Thus, if any failure occurs while testing these features, +there isn't that much of a reason to continue testing, as the software is fundamentally broken. + +These tests don't have to include only DPDK tests, as the reason for failures could be +in the infrastructure (a faulty link between NICs or a misconfiguration). +""" + import re from framework.config import PortConfig @@ -11,13 +22,25 @@ class SmokeTests(TestSuite): + """DPDK and infrastructure smoke test suite. + + The test cases validate the most basic DPDK functionality needed for all other test suites. 
+ The infrastructure also needs to be tested, as that is also used by all other test suites. + + Attributes: + is_blocking: This test suite will block the execution of all other test suites + in the build target after it. + nics_in_node: The NICs present on the SUT node. + """ + is_blocking = True # dicts in this list are expected to have two keys: # "pci_address" and "current_driver" nics_in_node: list[PortConfig] = [] def set_up_suite(self) -> None: - """ + """Set up the test suite. + Setup: Set the build directory path and generate a list of NICs in the SUT node. """ @@ -25,7 +48,13 @@ def set_up_suite(self) -> None: self.nics_in_node = self.sut_node.config.ports def test_unit_tests(self) -> None: - """ + """DPDK meson fast-tests unit tests. + + The DPDK unit tests are basic tests that indicate regressions and other critical failures. + These need to be addressed before other testing. + + The fast-tests unit tests are a subset with only the most basic tests. + Test: Run the fast-test unit-test suite through meson. """ @@ -37,7 +66,14 @@ def test_unit_tests(self) -> None: ) def test_driver_tests(self) -> None: - """ + """DPDK meson driver-tests unit tests. + + The DPDK unit tests are basic tests that indicate regressions and other critical failures. + These need to be addressed before other testing. + + The driver-tests unit tests are a subset that test only drivers. These may be run + with virtual devices as well. + Test: Run the driver-test unit-test suite through meson. """ @@ -63,7 +99,10 @@ def test_driver_tests(self) -> None: ) def test_devices_listed_in_testpmd(self) -> None: - """ + """Testpmd device discovery. + + If the configured devices can't be found in testpmd, they can't be tested. + Test: Uses testpmd driver to verify that devices have been found by testpmd. """ @@ -79,7 +118,11 @@ def test_devices_listed_in_testpmd(self) -> None: ) def test_device_bound_to_driver(self) -> None: - """ + """Device driver in OS. + + The devices must be bound to the proper driver, otherwise they can't be used by DPDK + or the traffic generators. + Test: Ensure that all drivers listed in the config are bound to the correct driver. 
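The three suites updated in this patch share the same shape: a module docstring, a TestSuite subclass with a class docstring, optional set_up_suite()/tear_down_suite() hooks, and test_* test cases that end in a verification. The following is a minimal, hedged sketch of that shape; the suite name, the framework.test_suite import path and the verify() call are assumptions made for illustration, not something these patches add.

    # SPDX-License-Identifier: BSD-3-Clause

    """A minimal example test suite (illustrative sketch only)."""

    from scapy.layers.inet import IP, UDP
    from scapy.layers.l2 import Ether

    from framework.test_suite import TestSuite  # assumed import path


    class TestMinimalExample(TestSuite):
        """Send one UDP packet and verify that traffic comes back."""

        def set_up_suite(self) -> None:
            """Set up the test suite.

            Setup:
                Configure SUT ports and SUT to route traffic between the TG ports.
            """
            self.configure_testbed_ipv4()

        def test_udp_comes_back(self) -> None:
            """Basic traffic test case.

            Steps:
                Send a UDP packet.
            Verify:
                At least one packet arrives back at the TG node.
            """
            received = self.send_packet_and_capture(Ether() / IP() / UDP())
            self.verify(len(received) > 0, "No packets were received on the TG node.")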
From patchwork Wed Nov 8 12:53:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133993 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id DA969432D6; Wed, 8 Nov 2023 13:56:27 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8116342E9A; Wed, 8 Nov 2023 13:53:58 +0100 (CET) Received: from mail-lf1-f53.google.com (mail-lf1-f53.google.com [209.85.167.53]) by mails.dpdk.org (Postfix) with ESMTP id 6A35142E7D for ; Wed, 8 Nov 2023 13:53:52 +0100 (CET) Received: by mail-lf1-f53.google.com with SMTP id 2adb3069b0e04-507b96095abso8690406e87.3 for ; Wed, 08 Nov 2023 04:53:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448032; x=1700052832; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=hEvxFxlgxDa00tG1Wz9M7pXUcDNhOdX9XJZljbqSvZg=; b=Ys5P6Ogdai+XEFmHzj6M8BwRSTIhBdbm6eyMFBsUQwUlIk+C7PNWEQpBdP9D1Ov/fg wv1F+AbZz2eXna6KIoTXoJYFy0tleZSUZB1/+dFfO/callQDpvrT423PIRXEi2EjumS2 Uw6PqqOaFnrqx3sqtJSWiSIPxaheYTewdN0WpFZhXL8ZR9TY6pYlneLUSQPJY/s2jaM3 mwV16zxWs8fB+fmM/rZ9Pzu9oEPClbww7rgjfdEOumzjo8aW7CB0WhH6rGha0o2mPW9M 7CiKMpwwCeOligG4IoDo5tqoRU3s+42dXJpmFIqkcvSQBSda6jp940VgcNNdn9E9wLj/ 1Nig== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448032; x=1700052832; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=hEvxFxlgxDa00tG1Wz9M7pXUcDNhOdX9XJZljbqSvZg=; b=Wp3aUSzHcvMmjtW/5LTHVXiJa7DdsJtZ6VAt0FfvlrYkvFSILO7b9XkmmvT8W5KGlL x+GA761TZQzB+lIBqx84bEOx8I+Fc31Ha3O+MccVMZbohD66YaUMYjemgZ2qek7TPCG3 So1y5gsGnwAEmxsY9izTe+Yfm3HqxKXcPX1zCmgYv/8PDnRFQ8LdndlGo8ssomlOcKcu tUUmmkDjeCZUv/A2mkAkZB8nwFZm7P5onfbeyWWGef4B+cf5w3CAzOK9QooV+N3rSU7x 4CthKGbPJeDutQCzZ4f/YmncwD8so2FFjU2fFQe1nr+NeQCR6GLInhkKL1dfrujKtarn 2jkA== X-Gm-Message-State: AOJu0YyQNf629EF0vvuVfm+CGBeAPajSSa1Y2ckewgNgeZLt/o5qR0tT Kw6U43J5mtmVlrADjy0hjCSoGA== X-Google-Smtp-Source: AGHT+IE7amK56QO1CZkQsdhgtnqukILAolNGoEDQpNt8pN3kc+iQmxOOiAtlw2QiJQF+B2eJOIMvKA== X-Received: by 2002:a05:6512:3122:b0:509:4864:1b3a with SMTP id p2-20020a056512312200b0050948641b3amr1173274lfd.64.1699448031708; Wed, 08 Nov 2023 04:53:51 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. 
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:51 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 22/23] dts: add doc generation dependencies Date: Wed, 8 Nov 2023 13:53:23 +0100 Message-Id: <20231108125324.191005-22-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sphinx imports every Python module when generating documentation from docstrings, meaning all dts dependencies, including Python version, must be satisfied. By adding Sphinx to dts dependencies we make sure that the proper Python version and dependencies are used when Sphinx is executed. Signed-off-by: Juraj Linkeš --- dts/poetry.lock | 499 ++++++++++++++++++++++++++++++++++++++++++++- dts/pyproject.toml | 7 + 2 files changed, 505 insertions(+), 1 deletion(-) diff --git a/dts/poetry.lock b/dts/poetry.lock index a734fa71f0..dea98f6913 100644 --- a/dts/poetry.lock +++ b/dts/poetry.lock @@ -1,5 +1,16 @@ # This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand. +[[package]] +name = "alabaster" +version = "0.7.13" +description = "A configurable sidebar-enabled Sphinx theme" +optional = false +python-versions = ">=3.6" +files = [ + {file = "alabaster-0.7.13-py3-none-any.whl", hash = "sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"}, + {file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"}, +] + [[package]] name = "attrs" version = "23.1.0" @@ -18,6 +29,23 @@ docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib- tests = ["attrs[tests-no-zope]", "zope-interface"] tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] +[[package]] +name = "babel" +version = "2.13.1" +description = "Internationalization utilities" +optional = false +python-versions = ">=3.7" +files = [ + {file = "Babel-2.13.1-py3-none-any.whl", hash = "sha256:7077a4984b02b6727ac10f1f7294484f737443d7e2e66c5e4380e41a3ae0b4ed"}, + {file = "Babel-2.13.1.tar.gz", hash = "sha256:33e0952d7dd6374af8dbf6768cc4ddf3ccfefc244f9986d4074704f2fbd18900"}, +] + +[package.dependencies] +setuptools = {version = "*", markers = "python_version >= \"3.12\""} + +[package.extras] +dev = ["freezegun (>=1.0,<2.0)", "pytest (>=6.0)", "pytest-cov"] + [[package]] name = "bcrypt" version = "4.0.1" @@ -86,6 +114,17 @@ d = ["aiohttp (>=3.7.4)"] jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"] uvloop = ["uvloop (>=0.15.2)"] +[[package]] +name = "certifi" +version = "2023.7.22" +description = "Python package for providing Mozilla's CA Bundle." 
+optional = false +python-versions = ">=3.6" +files = [ + {file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"}, + {file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"}, +] + [[package]] name = "cffi" version = "1.15.1" @@ -162,6 +201,105 @@ files = [ [package.dependencies] pycparser = "*" +[[package]] +name = "charset-normalizer" +version = "3.3.2" +description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." +optional = false +python-versions = ">=3.7.0" +files = [ + {file = "charset-normalizer-3.3.2.tar.gz", hash = "sha256:f30c3cb33b24454a82faecaf01b19c18562b1e89558fb6c56de4d9118a032fd5"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:25baf083bf6f6b341f4121c2f3c548875ee6f5339300e08be3f2b2ba1721cdd3"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:06435b539f889b1f6f4ac1758871aae42dc3a8c0e24ac9e60c2384973ad73027"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9063e24fdb1e498ab71cb7419e24622516c4a04476b17a2dab57e8baa30d6e03"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6897af51655e3691ff853668779c7bad41579facacf5fd7253b0133308cf000d"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d3193f4a680c64b4b6a9115943538edb896edc190f0b222e73761716519268e"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cd70574b12bb8a4d2aaa0094515df2463cb429d8536cfb6c7ce983246983e5a6"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8465322196c8b4d7ab6d1e049e4c5cb460d0394da4a27d23cc242fbf0034b6b5"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a9a8e9031d613fd2009c182b69c7b2c1ef8239a0efb1df3f7c8da66d5dd3d537"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:beb58fe5cdb101e3a055192ac291b7a21e3b7ef4f67fa1d74e331a7f2124341c"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e06ed3eb3218bc64786f7db41917d4e686cc4856944f53d5bdf83a6884432e12"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:2e81c7b9c8979ce92ed306c249d46894776a909505d8f5a4ba55b14206e3222f"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:572c3763a264ba47b3cf708a44ce965d98555f618ca42c926a9c1616d8f34269"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:fd1abc0d89e30cc4e02e4064dc67fcc51bd941eb395c502aac3ec19fab46b519"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-win32.whl", hash = "sha256:3d47fa203a7bd9c5b6cee4736ee84ca03b8ef23193c0d1ca99b5089f72645c73"}, + {file = "charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:10955842570876604d404661fbccbc9c7e684caf432c09c715ec38fbae45ae09"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:802fe99cca7457642125a8a88a084cef28ff0cf9407060f7b93dca5aa25480db"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = 
"sha256:573f6eac48f4769d667c4442081b1794f52919e7edada77495aaed9236d13a96"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:549a3a73da901d5bc3ce8d24e0600d1fa85524c10287f6004fbab87672bf3e1e"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f27273b60488abe721a075bcca6d7f3964f9f6f067c8c4c605743023d7d3944f"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ceae2f17a9c33cb48e3263960dc5fc8005351ee19db217e9b1bb15d28c02574"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:65f6f63034100ead094b8744b3b97965785388f308a64cf8d7c34f2f2e5be0c4"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:753f10e867343b4511128c6ed8c82f7bec3bd026875576dfd88483c5c73b2fd8"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4a78b2b446bd7c934f5dcedc588903fb2f5eec172f3d29e52a9096a43722adfc"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e537484df0d8f426ce2afb2d0f8e1c3d0b114b83f8850e5f2fbea0e797bd82ae"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:eb6904c354526e758fda7167b33005998fb68c46fbc10e013ca97f21ca5c8887"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:deb6be0ac38ece9ba87dea880e438f25ca3eddfac8b002a2ec3d9183a454e8ae"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:4ab2fe47fae9e0f9dee8c04187ce5d09f48eabe611be8259444906793ab7cbce"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:80402cd6ee291dcb72644d6eac93785fe2c8b9cb30893c1af5b8fdd753b9d40f"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-win32.whl", hash = "sha256:7cd13a2e3ddeed6913a65e66e94b51d80a041145a026c27e6bb76c31a853c6ab"}, + {file = "charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:663946639d296df6a2bb2aa51b60a2454ca1cb29835324c640dafb5ff2131a77"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:0b2b64d2bb6d3fb9112bafa732def486049e63de9618b5843bcdd081d8144cd8"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:ddbb2551d7e0102e7252db79ba445cdab71b26640817ab1e3e3648dad515003b"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:55086ee1064215781fff39a1af09518bc9255b50d6333f2e4c74ca09fac6a8f6"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f4a014bc36d3c57402e2977dada34f9c12300af536839dc38c0beab8878f38a"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a10af20b82360ab00827f916a6058451b723b4e65030c5a18577c8b2de5b3389"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8d756e44e94489e49571086ef83b2bb8ce311e730092d2c34ca8f7d925cb20aa"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90d558489962fd4918143277a773316e56c72da56ec7aa3dc3dbbe20fdfed15b"}, + {file = 
"charset_normalizer-3.3.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6ac7ffc7ad6d040517be39eb591cac5ff87416c2537df6ba3cba3bae290c0fed"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:7ed9e526742851e8d5cc9e6cf41427dfc6068d4f5a3bb03659444b4cabf6bc26"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:8bdb58ff7ba23002a4c5808d608e4e6c687175724f54a5dade5fa8c67b604e4d"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:6b3251890fff30ee142c44144871185dbe13b11bab478a88887a639655be1068"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:b4a23f61ce87adf89be746c8a8974fe1c823c891d8f86eb218bb957c924bb143"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:efcb3f6676480691518c177e3b465bcddf57cea040302f9f4e6e191af91174d4"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-win32.whl", hash = "sha256:d965bba47ddeec8cd560687584e88cf699fd28f192ceb452d1d7ee807c5597b7"}, + {file = "charset_normalizer-3.3.2-cp312-cp312-win_amd64.whl", hash = "sha256:96b02a3dc4381e5494fad39be677abcb5e6634bf7b4fa83a6dd3112607547001"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:95f2a5796329323b8f0512e09dbb7a1860c46a39da62ecb2324f116fa8fdc85c"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c002b4ffc0be611f0d9da932eb0f704fe2602a9a949d1f738e4c34c75b0863d5"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a981a536974bbc7a512cf44ed14938cf01030a99e9b3a06dd59578882f06f985"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3287761bc4ee9e33561a7e058c72ac0938c4f57fe49a09eae428fd88aafe7bb6"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:42cb296636fcc8b0644486d15c12376cb9fa75443e00fb25de0b8602e64c1714"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0a55554a2fa0d408816b3b5cedf0045f4b8e1a6065aec45849de2d6f3f8e9786"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:c083af607d2515612056a31f0a8d9e0fcb5876b7bfc0abad3ecd275bc4ebc2d5"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:87d1351268731db79e0f8e745d92493ee2841c974128ef629dc518b937d9194c"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:bd8f7df7d12c2db9fab40bdd87a7c09b1530128315d047a086fa3ae3435cb3a8"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:c180f51afb394e165eafe4ac2936a14bee3eb10debc9d9e4db8958fe36afe711"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8c622a5fe39a48f78944a87d4fb8a53ee07344641b0562c540d840748571b811"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-win32.whl", hash = "sha256:db364eca23f876da6f9e16c9da0df51aa4f104a972735574842618b8c6d999d4"}, + {file = "charset_normalizer-3.3.2-cp37-cp37m-win_amd64.whl", hash = "sha256:86216b5cee4b06df986d214f664305142d9c76df9b6512be2738aa72a2048f99"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_universal2.whl", hash = 
"sha256:6463effa3186ea09411d50efc7d85360b38d5f09b870c48e4600f63af490e56a"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6c4caeef8fa63d06bd437cd4bdcf3ffefe6738fb1b25951440d80dc7df8c03ac"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:37e55c8e51c236f95b033f6fb391d7d7970ba5fe7ff453dad675e88cf303377a"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fb69256e180cb6c8a894fee62b3afebae785babc1ee98b81cdf68bbca1987f33"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ae5f4161f18c61806f411a13b0310bea87f987c7d2ecdbdaad0e94eb2e404238"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b2b0a0c0517616b6869869f8c581d4eb2dd83a4d79e0ebcb7d373ef9956aeb0a"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:45485e01ff4d3630ec0d9617310448a8702f70e9c01906b0d0118bdf9d124cf2"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb00ed941194665c332bf8e078baf037d6c35d7c4f3102ea2d4f16ca94a26dc8"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:2127566c664442652f024c837091890cb1942c30937add288223dc895793f898"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a50aebfa173e157099939b17f18600f72f84eed3049e743b68ad15bd69b6bf99"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:4d0d1650369165a14e14e1e47b372cfcb31d6ab44e6e33cb2d4e57265290044d"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:923c0c831b7cfcb071580d3f46c4baf50f174be571576556269530f4bbd79d04"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:06a81e93cd441c56a9b65d8e1d043daeb97a3d0856d177d5c90ba85acb3db087"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-win32.whl", hash = "sha256:6ef1d82a3af9d3eecdba2321dc1b3c238245d890843e040e41e470ffa64c3e25"}, + {file = "charset_normalizer-3.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:eb8821e09e916165e160797a6c17edda0679379a4be5c716c260e836e122f54b"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:c235ebd9baae02f1b77bcea61bce332cb4331dc3617d254df3323aa01ab47bd4"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5b4c145409bef602a690e7cfad0a15a55c13320ff7a3ad7ca59c13bb8ba4d45d"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:68d1f8a9e9e37c1223b656399be5d6b448dea850bed7d0f87a8311f1ff3dabb0"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22afcb9f253dac0696b5a4be4a1c0f8762f8239e21b99680099abd9b2b1b2269"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e27ad930a842b4c5eb8ac0016b0a54f5aebbe679340c26101df33424142c143c"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1f79682fbe303db92bc2b1136016a38a42e835d932bab5b3b1bfcfbf0640e519"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b261ccdec7821281dade748d088bb6e9b69e6d15b30652b74cbbac25e280b796"}, + 
{file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:122c7fa62b130ed55f8f285bfd56d5f4b4a5b503609d181f9ad85e55c89f4185"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:d0eccceffcb53201b5bfebb52600a5fb483a20b61da9dbc885f8b103cbe7598c"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9f96df6923e21816da7e0ad3fd47dd8f94b2a5ce594e00677c0013018b813458"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:7f04c839ed0b6b98b1a7501a002144b76c18fb1c1850c8b98d458ac269e26ed2"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:34d1c8da1e78d2e001f363791c98a272bb734000fcef47a491c1e3b0505657a8"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ff8fa367d09b717b2a17a052544193ad76cd49979c805768879cb63d9ca50561"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-win32.whl", hash = "sha256:aed38f6e4fb3f5d6bf81bfa990a07806be9d83cf7bacef998ab1a9bd660a581f"}, + {file = "charset_normalizer-3.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:b01b88d45a6fcb69667cd6d2f7a9aeb4bf53760d7fc536bf679ec94fe9f3ff3d"}, + {file = "charset_normalizer-3.3.2-py3-none-any.whl", hash = "sha256:3e4d1f6587322d2788836a99c69062fbb091331ec940e02d12d179c1d53e25fc"}, +] + [[package]] name = "click" version = "8.1.6" @@ -232,6 +370,17 @@ ssh = ["bcrypt (>=3.1.5)"] test = ["pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"] test-randomorder = ["pytest-randomly"] +[[package]] +name = "docutils" +version = "0.18.1" +description = "Docutils -- Python Documentation Utilities" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" +files = [ + {file = "docutils-0.18.1-py2.py3-none-any.whl", hash = "sha256:23010f129180089fbcd3bc08cfefccb3b890b0050e1ca00c867036e9d161b98c"}, + {file = "docutils-0.18.1.tar.gz", hash = "sha256:679987caf361a7539d76e584cbeddc311e3aee937877c87346f31debc63e9d06"}, +] + [[package]] name = "fabric" version = "2.7.1" @@ -252,6 +401,28 @@ pathlib2 = "*" pytest = ["mock (>=2.0.0,<3.0)", "pytest (>=3.2.5,<4.0)"] testing = ["mock (>=2.0.0,<3.0)"] +[[package]] +name = "idna" +version = "3.4" +description = "Internationalized Domain Names in Applications (IDNA)" +optional = false +python-versions = ">=3.5" +files = [ + {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"}, + {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"}, +] + +[[package]] +name = "imagesize" +version = "1.4.1" +description = "Getting image size from png/jpeg/jpeg2000/gif file" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +files = [ + {file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"}, + {file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"}, +] + [[package]] name = "invoke" version = "1.7.3" @@ -280,6 +451,23 @@ pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib" plugins = ["setuptools"] requirements-deprecated-finder = ["pip-api", "pipreqs"] +[[package]] +name = "jinja2" +version = "3.1.2" +description = "A very fast and expressive template engine." 
+optional = false +python-versions = ">=3.7" +files = [ + {file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"}, + {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"}, +] + +[package.dependencies] +MarkupSafe = ">=2.0" + +[package.extras] +i18n = ["Babel (>=2.7)"] + [[package]] name = "jsonpatch" version = "1.33" @@ -340,6 +528,65 @@ files = [ [package.dependencies] referencing = ">=0.28.0" +[[package]] +name = "markupsafe" +version = "2.1.3" +description = "Safely add untrusted strings to HTML/XML markup." +optional = false +python-versions = ">=3.7" +files = [ + {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:cd0f502fe016460680cd20aaa5a76d241d6f35a1c3350c474bac1273803893fa"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e09031c87a1e51556fdcb46e5bd4f59dfb743061cf93c4d6831bf894f125eb57"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68e78619a61ecf91e76aa3e6e8e33fc4894a2bebe93410754bd28fce0a8a4f9f"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:65c1a9bcdadc6c28eecee2c119465aebff8f7a584dd719facdd9e825ec61ab52"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:525808b8019e36eb524b8c68acdd63a37e75714eac50e988180b169d64480a00"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:962f82a3086483f5e5f64dbad880d31038b698494799b097bc59c2edf392fce6"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:aa7bd130efab1c280bed0f45501b7c8795f9fdbeb02e965371bbef3523627779"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c9c804664ebe8f83a211cace637506669e7890fec1b4195b505c214e50dd4eb7"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-win32.whl", hash = "sha256:10bbfe99883db80bdbaff2dcf681dfc6533a614f700da1287707e8a5d78a8431"}, + {file = "MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl", hash = "sha256:1577735524cdad32f9f694208aa75e422adba74f1baee7551620e43a3141f559"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ad9e82fb8f09ade1c3e1b996a6337afac2b8b9e365f926f5a61aacc71adc5b3c"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3c0fae6c3be832a0a0473ac912810b2877c8cb9d76ca48de1ed31e1c68386575"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b076b6226fb84157e3f7c971a47ff3a679d837cf338547532ab866c57930dbee"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bfce63a9e7834b12b87c64d6b155fdd9b3b96191b6bd334bf37db7ff1fe457f2"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:338ae27d6b8745585f87218a3f23f1512dbf52c26c28e322dbe54bcede54ccb9"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e4dd52d80b8c83fdce44e12478ad2e85c64ea965e75d66dbeafb0a3e77308fcc"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:df0be2b576a7abbf737b1575f048c23fb1d769f267ec4358296f31c2479db8f9"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = 
"sha256:5bbe06f8eeafd38e5d0a4894ffec89378b6c6a625ff57e3028921f8ff59318ac"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-win32.whl", hash = "sha256:dd15ff04ffd7e05ffcb7fe79f1b98041b8ea30ae9234aed2a9168b5797c3effb"}, + {file = "MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl", hash = "sha256:134da1eca9ec0ae528110ccc9e48041e0828d79f24121a1a146161103c76e686"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8e254ae696c88d98da6555f5ace2279cf7cd5b3f52be2b5cf97feafe883b58d2"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb0932dc158471523c9637e807d9bfb93e06a95cbf010f1a38b98623b929ef2b"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9402b03f1a1b4dc4c19845e5c749e3ab82d5078d16a2a4c2cd2df62d57bb0707"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca379055a47383d02a5400cb0d110cef0a776fc644cda797db0c5696cfd7e18e"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b7ff0f54cb4ff66dd38bebd335a38e2c22c41a8ee45aa608efc890ac3e3931bc"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:c011a4149cfbcf9f03994ec2edffcb8b1dc2d2aede7ca243746df97a5d41ce48"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:56d9f2ecac662ca1611d183feb03a3fa4406469dafe241673d521dd5ae92a155"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-win32.whl", hash = "sha256:8758846a7e80910096950b67071243da3e5a20ed2546e6392603c096778d48e0"}, + {file = "MarkupSafe-2.1.3-cp37-cp37m-win_amd64.whl", hash = "sha256:787003c0ddb00500e49a10f2844fac87aa6ce977b90b0feaaf9de23c22508b24"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:2ef12179d3a291be237280175b542c07a36e7f60718296278d8593d21ca937d4"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2c1b19b3aaacc6e57b7e25710ff571c24d6c3613a45e905b1fde04d691b98ee0"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8afafd99945ead6e075b973fefa56379c5b5c53fd8937dad92c662da5d8fd5ee"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8c41976a29d078bb235fea9b2ecd3da465df42a562910f9022f1a03107bd02be"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d080e0a5eb2529460b30190fcfcc4199bd7f827663f858a226a81bc27beaa97e"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:69c0f17e9f5a7afdf2cc9fb2d1ce6aabdb3bafb7f38017c0b77862bcec2bbad8"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:504b320cd4b7eff6f968eddf81127112db685e81f7e36e75f9f84f0df46041c3"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:42de32b22b6b804f42c5d98be4f7e5e977ecdd9ee9b660fda1a3edf03b11792d"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-win32.whl", hash = "sha256:ceb01949af7121f9fc39f7d27f91be8546f3fb112c608bc4029aef0bab86a2a5"}, + {file = "MarkupSafe-2.1.3-cp38-cp38-win_amd64.whl", hash = "sha256:1b40069d487e7edb2676d3fbdb2b0829ffa2cd63a2ec26c4938b2d34391b4ecc"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8023faf4e01efadfa183e863fefde0046de576c6f14659e8782065bcece22198"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = 
"sha256:6b2b56950d93e41f33b4223ead100ea0fe11f8e6ee5f641eb753ce4b77a7042b"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9dcdfd0eaf283af041973bff14a2e143b8bd64e069f4c383416ecd79a81aab58"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:05fb21170423db021895e1ea1e1f3ab3adb85d1c2333cbc2310f2a26bc77272e"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:282c2cb35b5b673bbcadb33a585408104df04f14b2d9b01d4c345a3b92861c2c"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ab4a0df41e7c16a1392727727e7998a467472d0ad65f3ad5e6e765015df08636"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:7ef3cb2ebbf91e330e3bb937efada0edd9003683db6b57bb108c4001f37a02ea"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0a4e4a1aff6c7ac4cd55792abf96c915634c2b97e3cc1c7129578aa68ebd754e"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-win32.whl", hash = "sha256:fec21693218efe39aa7f8599346e90c705afa52c5b31ae019b2e57e8f6542bb2"}, + {file = "MarkupSafe-2.1.3-cp39-cp39-win_amd64.whl", hash = "sha256:3fd4abcb888d15a94f32b75d8fd18ee162ca0c064f35b11134be77050296d6ba"}, + {file = "MarkupSafe-2.1.3.tar.gz", hash = "sha256:af598ed32d6ae86f1b747b82783958b1a4ab8f617b06fe68795c7f026abbdcad"}, +] + [[package]] name = "mccabe" version = "0.7.0" @@ -404,6 +651,17 @@ files = [ {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"}, ] +[[package]] +name = "packaging" +version = "23.2" +description = "Core utilities for Python packages" +optional = false +python-versions = ">=3.7" +files = [ + {file = "packaging-23.2-py3-none-any.whl", hash = "sha256:8c491190033a9af7e1d931d0b5dacc2ef47509b34dd0de67ed209b5203fc88c7"}, + {file = "packaging-23.2.tar.gz", hash = "sha256:048fb0e9405036518eaaf48a55953c750c11e1a1b68e0dd1a9d62ed0c092cfc5"}, +] + [[package]] name = "paramiko" version = "3.2.0" @@ -515,6 +773,20 @@ files = [ {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"}, ] +[[package]] +name = "pygments" +version = "2.16.1" +description = "Pygments is a syntax highlighting package written in Python." +optional = false +python-versions = ">=3.7" +files = [ + {file = "Pygments-2.16.1-py3-none-any.whl", hash = "sha256:13fc09fa63bc8d8671a6d247e1eb303c4b343eaee81d861f3404db2935653692"}, + {file = "Pygments-2.16.1.tar.gz", hash = "sha256:1daff0494820c69bc8941e407aa20f577374ee88364ee10a98fdbe0aece96e29"}, +] + +[package.extras] +plugins = ["importlib-metadata"] + [[package]] name = "pylama" version = "8.4.1" @@ -632,6 +904,27 @@ files = [ attrs = ">=22.2.0" rpds-py = ">=0.7.0" +[[package]] +name = "requests" +version = "2.31.0" +description = "Python HTTP for Humans." 
+optional = false +python-versions = ">=3.7" +files = [ + {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"}, + {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"}, +] + +[package.dependencies] +certifi = ">=2017.4.17" +charset-normalizer = ">=2,<4" +idna = ">=2.5,<4" +urllib3 = ">=1.21.1,<3" + +[package.extras] +socks = ["PySocks (>=1.5.6,!=1.5.7)"] +use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"] + [[package]] name = "rpds-py" version = "0.9.2" @@ -753,6 +1046,22 @@ basic = ["ipython"] complete = ["cryptography (>=2.0)", "ipython", "matplotlib", "pyx"] docs = ["sphinx (>=3.0.0)", "sphinx_rtd_theme (>=0.4.3)", "tox (>=3.0.0)"] +[[package]] +name = "setuptools" +version = "68.2.2" +description = "Easily download, build, install, upgrade, and uninstall Python packages" +optional = false +python-versions = ">=3.8" +files = [ + {file = "setuptools-68.2.2-py3-none-any.whl", hash = "sha256:b454a35605876da60632df1a60f736524eb73cc47bbc9f3f1ef1b644de74fd2a"}, + {file = "setuptools-68.2.2.tar.gz", hash = "sha256:4ac1475276d2f1c48684874089fefcd83bd7162ddaafb81fac866ba0db282a87"}, +] + +[package.extras] +docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-hoverxref (<2)", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"] +testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-ruff", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"] +testing-integration = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "packaging (>=23.1)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"] + [[package]] name = "six" version = "1.16.0" @@ -775,6 +1084,177 @@ files = [ {file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"}, ] +[[package]] +name = "sphinx" +version = "6.2.1" +description = "Python documentation generator" +optional = false +python-versions = ">=3.8" +files = [ + {file = "Sphinx-6.2.1.tar.gz", hash = "sha256:6d56a34697bb749ffa0152feafc4b19836c755d90a7c59b72bc7dfd371b9cc6b"}, + {file = "sphinx-6.2.1-py3-none-any.whl", hash = "sha256:97787ff1fa3256a3eef9eda523a63dbf299f7b47e053cfcf684a1c2a8380c912"}, +] + +[package.dependencies] +alabaster = ">=0.7,<0.8" +babel = ">=2.9" +colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""} +docutils = ">=0.18.1,<0.20" +imagesize = ">=1.3" +Jinja2 = ">=3.0" +packaging = ">=21.0" +Pygments = ">=2.13" +requests = ">=2.25.0" +snowballstemmer = ">=2.0" +sphinxcontrib-applehelp = "*" +sphinxcontrib-devhelp = "*" +sphinxcontrib-htmlhelp = ">=2.0.0" +sphinxcontrib-jsmath = "*" +sphinxcontrib-qthelp = "*" +sphinxcontrib-serializinghtml = ">=1.1.5" + +[package.extras] +docs = ["sphinxcontrib-websupport"] +lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"] +test = ["cython", "filelock", 
"html5lib", "pytest (>=4.6)"] + +[[package]] +name = "sphinx-rtd-theme" +version = "1.2.2" +description = "Read the Docs theme for Sphinx" +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7" +files = [ + {file = "sphinx_rtd_theme-1.2.2-py2.py3-none-any.whl", hash = "sha256:6a7e7d8af34eb8fc57d52a09c6b6b9c46ff44aea5951bc831eeb9245378f3689"}, + {file = "sphinx_rtd_theme-1.2.2.tar.gz", hash = "sha256:01c5c5a72e2d025bd23d1f06c59a4831b06e6ce6c01fdd5ebfe9986c0a880fc7"}, +] + +[package.dependencies] +docutils = "<0.19" +sphinx = ">=1.6,<7" +sphinxcontrib-jquery = ">=4,<5" + +[package.extras] +dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"] + +[[package]] +name = "sphinxcontrib-applehelp" +version = "1.0.7" +description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books" +optional = false +python-versions = ">=3.9" +files = [ + {file = "sphinxcontrib_applehelp-1.0.7-py3-none-any.whl", hash = "sha256:094c4d56209d1734e7d252f6e0b3ccc090bd52ee56807a5d9315b19c122ab15d"}, + {file = "sphinxcontrib_applehelp-1.0.7.tar.gz", hash = "sha256:39fdc8d762d33b01a7d8f026a3b7d71563ea3b72787d5f00ad8465bd9d6dfbfa"}, +] + +[package.dependencies] +Sphinx = ">=5" + +[package.extras] +lint = ["docutils-stubs", "flake8", "mypy"] +test = ["pytest"] + +[[package]] +name = "sphinxcontrib-devhelp" +version = "1.0.5" +description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp documents" +optional = false +python-versions = ">=3.9" +files = [ + {file = "sphinxcontrib_devhelp-1.0.5-py3-none-any.whl", hash = "sha256:fe8009aed765188f08fcaadbb3ea0d90ce8ae2d76710b7e29ea7d047177dae2f"}, + {file = "sphinxcontrib_devhelp-1.0.5.tar.gz", hash = "sha256:63b41e0d38207ca40ebbeabcf4d8e51f76c03e78cd61abe118cf4435c73d4212"}, +] + +[package.dependencies] +Sphinx = ">=5" + +[package.extras] +lint = ["docutils-stubs", "flake8", "mypy"] +test = ["pytest"] + +[[package]] +name = "sphinxcontrib-htmlhelp" +version = "2.0.4" +description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files" +optional = false +python-versions = ">=3.9" +files = [ + {file = "sphinxcontrib_htmlhelp-2.0.4-py3-none-any.whl", hash = "sha256:8001661c077a73c29beaf4a79968d0726103c5605e27db92b9ebed8bab1359e9"}, + {file = "sphinxcontrib_htmlhelp-2.0.4.tar.gz", hash = "sha256:6c26a118a05b76000738429b724a0568dbde5b72391a688577da08f11891092a"}, +] + +[package.dependencies] +Sphinx = ">=5" + +[package.extras] +lint = ["docutils-stubs", "flake8", "mypy"] +test = ["html5lib", "pytest"] + +[[package]] +name = "sphinxcontrib-jquery" +version = "4.1" +description = "Extension to include jQuery on newer Sphinx releases" +optional = false +python-versions = ">=2.7" +files = [ + {file = "sphinxcontrib-jquery-4.1.tar.gz", hash = "sha256:1620739f04e36a2c779f1a131a2dfd49b2fd07351bf1968ced074365933abc7a"}, + {file = "sphinxcontrib_jquery-4.1-py2.py3-none-any.whl", hash = "sha256:f936030d7d0147dd026a4f2b5a57343d233f1fc7b363f68b3d4f1cb0993878ae"}, +] + +[package.dependencies] +Sphinx = ">=1.8" + +[[package]] +name = "sphinxcontrib-jsmath" +version = "1.0.1" +description = "A sphinx extension which renders display math in HTML via JavaScript" +optional = false +python-versions = ">=3.5" +files = [ + {file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"}, + {file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = 
"sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"}, +] + +[package.extras] +test = ["flake8", "mypy", "pytest"] + +[[package]] +name = "sphinxcontrib-qthelp" +version = "1.0.6" +description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp documents" +optional = false +python-versions = ">=3.9" +files = [ + {file = "sphinxcontrib_qthelp-1.0.6-py3-none-any.whl", hash = "sha256:bf76886ee7470b934e363da7a954ea2825650013d367728588732c7350f49ea4"}, + {file = "sphinxcontrib_qthelp-1.0.6.tar.gz", hash = "sha256:62b9d1a186ab7f5ee3356d906f648cacb7a6bdb94d201ee7adf26db55092982d"}, +] + +[package.dependencies] +Sphinx = ">=5" + +[package.extras] +lint = ["docutils-stubs", "flake8", "mypy"] +test = ["pytest"] + +[[package]] +name = "sphinxcontrib-serializinghtml" +version = "1.1.9" +description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)" +optional = false +python-versions = ">=3.9" +files = [ + {file = "sphinxcontrib_serializinghtml-1.1.9-py3-none-any.whl", hash = "sha256:9b36e503703ff04f20e9675771df105e58aa029cfcbc23b8ed716019b7416ae1"}, + {file = "sphinxcontrib_serializinghtml-1.1.9.tar.gz", hash = "sha256:0c64ff898339e1fac29abd2bf5f11078f3ec413cfe9c046d3120d7ca65530b54"}, +] + +[package.dependencies] +Sphinx = ">=5" + +[package.extras] +lint = ["docutils-stubs", "flake8", "mypy"] +test = ["pytest"] + [[package]] name = "toml" version = "0.10.2" @@ -819,6 +1299,23 @@ files = [ {file = "typing_extensions-4.7.1.tar.gz", hash = "sha256:b75ddc264f0ba5615db7ba217daeb99701ad295353c45f9e95963337ceeeffb2"}, ] +[[package]] +name = "urllib3" +version = "2.0.7" +description = "HTTP library with thread-safe connection pooling, file post, and more." +optional = false +python-versions = ">=3.7" +files = [ + {file = "urllib3-2.0.7-py3-none-any.whl", hash = "sha256:fdb6d215c776278489906c2f8916e6e7d4f5a9b602ccbcfdf7f016fc8da0596e"}, + {file = "urllib3-2.0.7.tar.gz", hash = "sha256:c97dfde1f7bd43a71c8d2a58e369e9b2bf692d1334ea9f9cae55add7d0dd0f84"}, +] + +[package.extras] +brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"] +secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"] +socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"] +zstd = ["zstandard (>=0.18.0)"] + [[package]] name = "warlock" version = "2.0.1" @@ -837,4 +1334,4 @@ jsonschema = ">=4,<5" [metadata] lock-version = "2.0" python-versions = "^3.10" -content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b" +content-hash = "5faad2e53833e9b8a353ad3554c58de991801a9ebe8f9712fc9c839b35e7a789" diff --git a/dts/pyproject.toml b/dts/pyproject.toml index 3943c87c87..98df431b3b 100644 --- a/dts/pyproject.toml +++ b/dts/pyproject.toml @@ -35,6 +35,13 @@ pylama = "^8.4.1" pyflakes = "^2.5.0" toml = "^0.10.2" +[tool.poetry.group.docs] +optional = true + +[tool.poetry.group.docs.dependencies] +sphinx = "<7" +sphinx-rtd-theme = "^1.2.2" + [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" From patchwork Wed Nov 8 12:53:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 133994 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4E1B5432D6; Wed, 8 Nov 
2023 13:56:35 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id AC9B742E9F; Wed, 8 Nov 2023 13:53:59 +0100 (CET) Received: from mail-ed1-f47.google.com (mail-ed1-f47.google.com [209.85.208.47]) by mails.dpdk.org (Postfix) with ESMTP id D9E3A42E7B for ; Wed, 8 Nov 2023 13:53:52 +0100 (CET) Received: by mail-ed1-f47.google.com with SMTP id 4fb4d7f45d1cf-53e08b60febso11653941a12.1 for ; Wed, 08 Nov 2023 04:53:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1699448032; x=1700052832; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=+LHz84H0EkadZQ/wIw//k0BqY43oSfZrmWYCgAwyTHI=; b=XzXJ2bpfiqbB5tF8WCOjicgaxwO4Z+Re4buhXOrcLFXcZNBuVcguIJwnbrNxWv/5Fw +rWFohnw6OB8DEXVYge2JpYZ77EKsdW8Rt3diuQypkjxISf5z2UE06kfoYxPC2s/+EBM 6BYjtPf/7lFXSZZzasjjOR4qckr8PMg65ZGp37sFTA9Kjnd6jCX1MoBrJZjggenw3ltK l/VFreH7hSFf6safvjsSV8r3M0ozB/por0ouIWMHEZBoFOruuFNP3Ic3Z2vgFo7U5lLr KCRe+PRgGHv4ClOSj/mbg22WFE/Ksmoi5L1EInLgCbRpc3tCB/DL1nPfDEE1ufjbe+jY TbNA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699448032; x=1700052832; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=+LHz84H0EkadZQ/wIw//k0BqY43oSfZrmWYCgAwyTHI=; b=jM+fe/3E65P0aAUz6me8sppbeSXcIq7D7dFNGWJAZ5L1nvTfFEBUrCM8V4vzbvnYFs usrZ8oY0hko+fDXwM/PHOUb9/75y1L5jp0iintn0ZlzJavcgL79qBQ2huWN9Z4INntN0 HPoGu+PeqVEdfV4ld1zAJGlKbiK0ypiao6AxEg2z5jMCU6CJ26ewajeJ5qUG5e8BaiBN 2EawdHW8sZnCxN7os75rE7cQ3KxU6qoJRgoqlAPF/NdOVL4s9/dtQ+6NlhJ3ek5nzCKh frmiQuHSKT/YRVnp3U8G9yBeQ1H6JDylUtQuTWQoeOXdd82LVKYb39X4LNKpTlhmu7S2 PmUQ== X-Gm-Message-State: AOJu0Yy/XofKW048b8y4eMTeq+ErsZhmrGgSIaFFqq26PqveZ8W9xCWk ZhuRNgSuejlDQldQIYK6RYncvw== X-Google-Smtp-Source: AGHT+IEoytNS0zD3p1JZHFaYPUKl7Vg2PHqapIBTURr+ncthHmH8DzqRL/f8PpFDkGEk1pN0eF7kpQ== X-Received: by 2002:a50:8a81:0:b0:53e:1b:15f5 with SMTP id j1-20020a508a81000000b0053e001b15f5mr1447655edj.39.1699448032564; Wed, 08 Nov 2023 04:53:52 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. [81.89.53.154]) by smtp.gmail.com with ESMTPSA id v10-20020aa7dbca000000b0052ff9bae873sm6589289edt.5.2023.11.08.04.53.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Nov 2023 04:53:52 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, bruce.richardson@intel.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 23/23] dts: add doc generation Date: Wed, 8 Nov 2023 13:53:24 +0100 Message-Id: <20231108125324.191005-23-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231108125324.191005-1-juraj.linkes@pantheon.tech> References: <20231106171601.160749-1-juraj.linkes@pantheon.tech> <20231108125324.191005-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The tool used to generate developer docs is Sphinx, which is already used in DPDK. 
The same configuration is used to preserve style, but it's been augmented with doc-generating configuration and a change to how the sidebar displays the content hierarchy. Sphinx generates the documentation from Python docstrings. The docstring format is the Google format [0], which requires the sphinx.ext.napoleon extension. The other extension, sphinx.ext.intersphinx, enables linking to objects in external documentation, such as the Python documentation. There are two requirements for building DTS docs: * The same Python version as DTS or higher, because Sphinx imports the code. * The same Python packages as DTS, for the same reason. [0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings Signed-off-by: Juraj Linkeš --- buildtools/call-sphinx-build.py | 29 ++++++++++------ doc/api/meson.build | 1 + doc/guides/conf.py | 34 ++++++++++++++++--- doc/guides/meson.build | 1 + doc/guides/tools/dts.rst | 32 +++++++++++++++++- dts/doc/conf_yaml_schema.json | 1 + dts/doc/index.rst | 17 ++++++++++ dts/doc/meson.build | 60 +++++++++++++++++++++++++++++++++ dts/meson.build | 16 +++++++++ meson.build | 1 + 10 files changed, 176 insertions(+), 16 deletions(-) create mode 120000 dts/doc/conf_yaml_schema.json create mode 100644 dts/doc/index.rst create mode 100644 dts/doc/meson.build create mode 100644 dts/meson.build diff --git a/buildtools/call-sphinx-build.py b/buildtools/call-sphinx-build.py index 39a60d09fa..c2f3acfb1d 100755 --- a/buildtools/call-sphinx-build.py +++ b/buildtools/call-sphinx-build.py @@ -3,37 +3,46 @@ # Copyright(c) 2019 Intel Corporation # +import argparse import sys import os from os.path import join from subprocess import run, PIPE, STDOUT from packaging.version import Version -# assign parameters to variables -(sphinx, version, src, dst, *extra_args) = sys.argv[1:] +parser = argparse.ArgumentParser() +parser.add_argument('sphinx') +parser.add_argument('version') +parser.add_argument('src') +parser.add_argument('dst') +parser.add_argument('--dts-root', default='.') +args, extra_args = parser.parse_known_args() # set the version in environment for sphinx to pick up -os.environ['DPDK_VERSION'] = version +os.environ['DPDK_VERSION'] = args.version +os.environ['DTS_ROOT'] = args.dts_root # for sphinx version >= 1.7 add parallelism using "-j auto" -ver = run([sphinx, '--version'], stdout=PIPE, +ver = run([args.sphinx, '--version'], stdout=PIPE, stderr=STDOUT).stdout.decode().split()[-1] -sphinx_cmd = [sphinx] + extra_args +sphinx_cmd = [args.sphinx] + extra_args if Version(ver) >= Version('1.7'): sphinx_cmd += ['-j', 'auto'] # find all the files sphinx will process so we can write them as dependencies srcfiles = [] -for root, dirs, files in os.walk(src): +for root, dirs, files in os.walk(args.src): srcfiles.extend([join(root, f) for f in files]) # run sphinx, putting the html output in a "html" directory -with open(join(dst, 'sphinx_html.out'), 'w') as out: - process = run(sphinx_cmd + ['-b', 'html', src, join(dst, 'html')], - stdout=out) +with open(join(args.dst, 'sphinx_html.out'), 'w') as out: + process = run( + sphinx_cmd + ['-b', 'html', args.src, join(args.dst, 'html')], + stdout=out + ) # create a gcc format .d file giving all the dependencies of this doc build -with open(join(dst, '.html.d'), 'w') as d: +with open(join(args.dst, '.html.d'), 'w') as d: d.write('html: ' + ' '.join(srcfiles) + '\n') sys.exit(process.returncode) diff --git a/doc/api/meson.build b/doc/api/meson.build index 5b50692df9..92fe10d9e7 100644 --- a/doc/api/meson.build +++ 
b/doc/api/meson.build @@ -1,6 +1,7 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2018 Luca Boccassi +doc_api_build_dir = meson.current_build_dir() doxygen = find_program('doxygen', required: get_option('enable_docs')) if not doxygen.found() diff --git a/doc/guides/conf.py b/doc/guides/conf.py index 0f7ff5282d..169b1d24bc 100644 --- a/doc/guides/conf.py +++ b/doc/guides/conf.py @@ -7,10 +7,9 @@ from sphinx import __version__ as sphinx_version from os import listdir from os import environ -from os.path import basename -from os.path import dirname +from os.path import basename, dirname from os.path import join as path_join -from sys import argv, stderr +from sys import argv, stderr, path import configparser @@ -24,6 +23,31 @@ file=stderr) pass +extensions = ['sphinx.ext.napoleon', 'sphinx.ext.intersphinx'] + +# Python docstring options +autodoc_default_options = { + 'members': True, + 'member-order': 'bysource', + 'show-inheritance': True, +} +autodoc_class_signature = 'separated' +autodoc_typehints = 'both' +autodoc_typehints_format = 'short' +autodoc_typehints_description_target = 'documented' +napoleon_numpy_docstring = False +napoleon_attr_annotations = True +napoleon_preprocess_types = True +add_module_names = False +toc_object_entries = False +intersphinx_mapping = {'python': ('https://docs.python.org/3', None)} + +# Sidebar config +html_theme_options = { + 'collapse_navigation': False, + 'navigation_depth': -1, +} + stop_on_error = ('-W' in argv) project = 'Data Plane Development Kit' @@ -35,8 +59,8 @@ html_show_copyright = False highlight_language = 'none' -release = environ.setdefault('DPDK_VERSION', "None") -version = release +path.append(environ.get('DTS_ROOT')) +version = environ.setdefault('DPDK_VERSION', "None") master_doc = 'index' diff --git a/doc/guides/meson.build b/doc/guides/meson.build index 51f81da2e3..8933d75f6b 100644 --- a/doc/guides/meson.build +++ b/doc/guides/meson.build @@ -1,6 +1,7 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2018 Intel Corporation +doc_guides_source_dir = meson.current_source_dir() sphinx = find_program('sphinx-build', required: get_option('enable_docs')) if not sphinx.found() diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst index cd771a428c..77d9434c1c 100644 --- a/doc/guides/tools/dts.rst +++ b/doc/guides/tools/dts.rst @@ -283,7 +283,10 @@ When adding code to the DTS framework, pay attention to the rest of the code and try not to divert much from it. The :ref:`DTS developer tools ` will issue warnings when some of the basics are not met. -The code must be properly documented with docstrings. The style must conform to +The API documentation, which is a helpful reference when developing, may be accessed +in the code directly or generated with the `API docs build steps `_. + +Speaking of which, the code must be properly documented with docstrings. The style must conform to the `Google style `_. See an example of the style `here `_. @@ -408,3 +411,30 @@ There are three tools used in DTS to help with code checking, style and formatti These three tools are all used in ``devtools/dts-check-format.sh``, the DTS code check and format script. Refer to the script for usage: ``devtools/dts-check-format.sh -h``. + + +.. _building_api_docs: + +Building DTS API docs +--------------------- + +To build DTS API docs, install the dependencies with Poetry, then enter its shell: + +.. code-block:: console + + poetry install --with docs + poetry shell + +The documentation is built using the standard DPDK build system. 
After executing the meson command +and entering Poetry's shell, build the documentation with: + +.. code-block:: console + + ninja -C build dts-doc + +The output is generated in ``build/doc/api/dts/html``. + +.. Note:: + + Make sure to fix any Sphinx warnings when adding or updating docstrings. Also make sure to run + the ``devtools/dts-check-format.sh`` script and address any issues it finds. diff --git a/dts/doc/conf_yaml_schema.json b/dts/doc/conf_yaml_schema.json new file mode 120000 index 0000000000..d89eb81b72 --- /dev/null +++ b/dts/doc/conf_yaml_schema.json @@ -0,0 +1 @@ +../framework/config/conf_yaml_schema.json \ No newline at end of file diff --git a/dts/doc/index.rst b/dts/doc/index.rst new file mode 100644 index 0000000000..f5dcd553f2 --- /dev/null +++ b/dts/doc/index.rst @@ -0,0 +1,17 @@ +.. DPDK Test Suite documentation. + +Welcome to DPDK Test Suite's documentation! +=========================================== + +.. toctree:: + :titlesonly: + :caption: Contents: + + framework + +Indices and tables +================== + +* :ref:`genindex` +* :ref:`modindex` +* :ref:`search` diff --git a/dts/doc/meson.build b/dts/doc/meson.build new file mode 100644 index 0000000000..e11ab83843 --- /dev/null +++ b/dts/doc/meson.build @@ -0,0 +1,60 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 PANTHEON.tech s.r.o. + +sphinx = find_program('sphinx-build', required: false) +sphinx_apidoc = find_program('sphinx-apidoc', required: false) + +if not sphinx.found() or not sphinx_apidoc.found() + subdir_done() +endif + +dts_api_framework_dir = join_paths(dts_dir, 'framework') +dts_api_build_dir = join_paths(doc_api_build_dir, 'dts') +if meson.version().version_compare('>=0.57.0') + dts_api_src = custom_target('dts_api_src', + output: 'modules.rst', + env: {'SPHINX_APIDOC_OPTIONS': 'members,show-inheritance'}, + command: [sphinx_apidoc, '--append-syspath', '--force', + '--module-first', '--separate', '-V', meson.project_version(), + '--output-dir', dts_api_build_dir, '--no-toc', '--implicit-namespaces', + dts_api_framework_dir], + build_by_default: false) +else + dts_api_src = custom_target('dts_api_src', + output: 'modules.rst', + command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance', + sphinx_apidoc, '--append-syspath', '--force', + '--module-first', '--separate', '-V', meson.project_version(), + '--output-dir', dts_api_build_dir, '--no-toc', '--implicit-namespaces', + dts_api_framework_dir], + build_by_default: false) +endif +doc_targets += dts_api_src +doc_target_names += 'DTS_API_sphinx_sources' + +cp = find_program('cp') +cp_index = custom_target('cp_index', + input: ['index.rst', 'conf_yaml_schema.json'], + output: 'index.rst', + depends: dts_api_src, + command: [cp, '--dereference', '@INPUT@', dts_api_build_dir], + build_by_default: false) +doc_targets += cp_index +doc_target_names += 'DTS_API_sphinx_index' + +extra_sphinx_args = ['-E', '-c', doc_guides_source_dir, '--dts-root', dts_dir] +if get_option('werror') + extra_sphinx_args += '-W' +endif + +htmldir = join_paths(get_option('datadir'), 'doc', 'dpdk') +dts_api_html = custom_target('dts_api_html', + output: 'html', + depends: cp_index, + command: [sphinx_wrapper, sphinx, meson.project_version(), + dts_api_build_dir, dts_api_build_dir, extra_sphinx_args], + build_by_default: false, + install: false, + install_dir: htmldir) +doc_targets += dts_api_html +doc_target_names += 'DTS_API_HTML' diff --git a/dts/meson.build b/dts/meson.build new file mode 100644 index 0000000000..e8ce0f06ac --- /dev/null +++ b/dts/meson.build 
@@ -0,0 +1,16 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 PANTHEON.tech s.r.o. + +doc_targets = [] +doc_target_names = [] +dts_dir = meson.current_source_dir() + +subdir('doc') + +if doc_targets.length() == 0 + message = 'No docs targets found' +else + message = 'Built docs:' +endif +run_target('dts-doc', command: [echo, message, doc_target_names], + depends: doc_targets) diff --git a/meson.build b/meson.build index 2e6e546d20..c391bf8c71 100644 --- a/meson.build +++ b/meson.build @@ -87,6 +87,7 @@ subdir('app') # build docs subdir('doc') +subdir('dts') # build any examples explicitly requested - useful for developers - and # install any example code into the appropriate install path
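
For reference, a minimal sketch of the Google-style docstring format that the sphinx.ext.napoleon configuration above renders is shown below; the function and every name in it are hypothetical illustrations, not part of the DTS framework or of this patch:

# Hypothetical example of a Google-style docstring parsed by sphinx.ext.napoleon.
# None of these names exist in the DTS framework; they only illustrate the format.

def filter_lcores(lcores: list[int], allowed: set[int] | None = None) -> list[int]:
    """Filter logical core IDs against an optional allow-list.

    Args:
        lcores: The logical core IDs to filter.
        allowed: IDs that may be used. If :data:`None`, every ID is kept.

    Returns:
        The IDs from ``lcores`` that are also allowed, in their original order.

    Raises:
        ValueError: If ``lcores`` is empty.
    """
    if not lcores:
        raise ValueError("lcores must not be empty")
    if allowed is None:
        return list(lcores)
    return [lcore for lcore in lcores if lcore in allowed]

When sphinx-apidoc and autodoc import a module documented like this, napoleon turns the Args/Returns/Raises sections into the field lists that appear in the generated HTML.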