From patchwork Fri Aug 11 20:00:15 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Adam Hassick
X-Patchwork-Id: 130211
From: Adam Hassick
To: ci@dpdk.org
Cc: aconole@redhat.com, alialnu@nvidia.com, Owen Hilyard, Adam Hassick
Subject: [PATCH v9 3/6] containers/builder: Dockerfile creation script
Date: Fri, 11 Aug 2023 16:00:15 -0400
Message-ID: <20230811200018.5650-4-ahassick@iol.unh.edu>
In-Reply-To: <20230811200018.5650-1-ahassick@iol.unh.edu>
References: <20230811200018.5650-1-ahassick@iol.unh.edu>
List-Id: DPDK CI discussions

From: Owen Hilyard

This script templates out all of the Dockerfiles based on the
definitions provided in the inventory, using the jinja2 templating
library.
Signed-off-by: Owen Hilyard
Signed-off-by: Adam Hassick
---
 containers/template_engine/make_dockerfile.py | 371 ++++++++++++++++++
 1 file changed, 371 insertions(+)
 create mode 100755 containers/template_engine/make_dockerfile.py

diff --git a/containers/template_engine/make_dockerfile.py b/containers/template_engine/make_dockerfile.py
new file mode 100755
index 0000000..9f5fc7f
--- /dev/null
+++ b/containers/template_engine/make_dockerfile.py
@@ -0,0 +1,371 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright (c) 2023 University of New Hampshire
+import argparse
+import json
+import logging
+import os
+import platform
+import re
+from dataclasses import dataclass
+from datetime import datetime
+from typing import Any, Dict, List, Optional
+
+import jsonschema
+import yaml
+from jinja2 import Environment, FileSystemLoader
+
+
+@dataclass(frozen=True)
+class Options:
+    on_rhel: bool
+    fail_on_unbuildable: bool
+    has_coverity: bool
+    build_libabigail: bool
+    build_abi: bool
+    output_dir: str
+    registry_hostname: str
+    host_arch_only: bool
+    omit_latest: bool
+    is_builder: bool
+    date_override: Optional[str]
+    ninja_workers: Optional[int]
+
+
+def _get_arg_parser() -> argparse.ArgumentParser:
+    parser = argparse.ArgumentParser(description="Makes the dockerfiles")
+    parser.add_argument("--output-dir", required=True)
+    parser.add_argument(
+        "--rhel",
+        action="store_true",
+        help="Override the check for running on RHEL.",
+        default=False,
+    )
+    parser.add_argument(
+        "--fail-on-unbuildable",
+        action="store_true",
+        help="If any container would not be possible to build, fail and exit with a non-zero exit code.",
+        default=False,
+    )
+    parser.add_argument(
+        "--build-abi",
+        action="store_true",
+        help="Whether to build the ABI references into the image. Disabled by default due to producing 10+ GB images. Implies '--build-libabigail'.",
+    )
+    parser.add_argument(
+        "--build-libabigail",
+        action="store_true",
+        help="Whether to build libabigail from source for distros that do not package it. Implied by '--build-abi'.",
+    )
+    parser.add_argument(
+        "--host-arch-only",
+        action="store_true",
+        help="Only build containers for the architecture of the host system.",
+    )
+    parser.add_argument(
+        "--omit-latest",
+        action="store_true",
+        help='Omit the "latest" tag from the generated makefile.',
+    )
+    parser.add_argument(
+        "--builder-mode",
+        action="store_true",
+        help='Specifies that the makefile is being templated for a builder. This implicitly sets "--host-arch-only" and disables making the manifests.',
+        default=False,
+    )
+    parser.add_argument(
+        "--date",
+        type=str,
+        help="Overrides generation of the timestamp and uses the provided string instead.",
+    )
+    parser.add_argument(
+        "--ninja-workers",
+        type=int,
+        help="Limits builds to the given number of ninja workers. Uses the ninja default when not given.",
+    )
+    parser.add_argument(
+        "--coverity",
+        action="store_true",
+        help="Whether the Coverity Scan binaries are available for building the Coverity containers.",
+        default=False,
+    )
+    return parser
+
+
+def parse_args() -> Options:
+    parser = _get_arg_parser()
+    args = parser.parse_args()
+
+    registry_hostname = (
+        os.environ.get("DPDK_CI_CONTAINERS_REGISTRY_HOSTNAME") or "localhost"
+    )
+
+    # In order to build the ABIs, libabigail must be built from source on
+    # some platforms.
+    build_libabigail: bool = args.build_libabigail or args.build_abi
+
+    opts = Options(
+        on_rhel=args.rhel,
+        fail_on_unbuildable=args.fail_on_unbuildable,
+        build_libabigail=build_libabigail,
+        build_abi=args.build_abi,
+        output_dir=args.output_dir,
+        registry_hostname=registry_hostname,
+        host_arch_only=args.host_arch_only or args.builder_mode,
+        omit_latest=args.omit_latest,
+        is_builder=args.builder_mode,
+        date_override=args.date,
+        ninja_workers=args.ninja_workers,
+        has_coverity=args.coverity,
+    )
+
+    logging.info(f"make_dockerfile.py options: {opts}")
+    return opts
+
+
+def running_on_RHEL(options: Options) -> bool:
+    """
+    RHEL containers can only be built on RHEL, so disable them and emit a
+    warning if not on RHEL.
+    """
+    redhat_release_path = "/etc/redhat-release"
+
+    if os.path.exists(redhat_release_path):
+        with open(redhat_release_path) as f:
+            first_line = f.readline()
+
+        on_rhel = "Red Hat Enterprise Linux" in first_line
+        if on_rhel:
+            logging.info("Running on RHEL, allowing RHEL containers")
+            return True
+
+    logging.warning("Not on RHEL, disabling RHEL containers")
+    assert options is not None, "Internal state error, OPTIONS should not be None"
+
+    if options.on_rhel:
+        logging.info("Override enabled, enabling RHEL containers")
+
+    return options.on_rhel
+
+
+def get_path_to_parent_directory() -> str:
+    return os.path.dirname(__file__)
+
+
+def get_raw_inventory() -> Dict[str, Any]:
+    parent_dir = get_path_to_parent_directory()
+
+    schema_path = os.path.join(parent_dir, "inventory_schema.json")
+    inventory_path = os.path.join(parent_dir, "inventory.yaml")
+
+    inventory: Dict[str, Any]
+    with open(inventory_path, "r") as f:
+        inventory = yaml.safe_load(f)
+
+    schema: Dict[str, Any]
+    with open(schema_path, "r") as f:
+        schema = json.load(f)
+
+    jsonschema.validate(instance=inventory, schema=schema)
+    return inventory
+
+
+def apply_group_config_to_target(
+    target: Dict[str, Any],
+    raw_inventory: Dict[str, Any],
+    on_rhel: bool,
+    fail_on_unbuildable: bool,
+) -> Optional[Dict[str, Any]]:
+    groups_for_target: List[Dict[str, Any]] = []
+    groups: Dict[str, Dict[str, Any]] = raw_inventory["dockerfiles"]["groups"]
+    group = groups[target["group"]]
+
+    target_primary_group = target["group"]
+
+    assert isinstance(
+        target_primary_group, str
+    ), "Target group name was not a string"
+
+    requires_rhel = "rhel" in target_primary_group.lower()
+
+    if requires_rhel and not on_rhel:
+        logging.warning(
+            f"Disabling target {target['name']}, because it must be built on RHEL."
+        )
+        if fail_on_unbuildable:
+            raise AssertionError(
+                f"Not on RHEL and target {target['name']} must be built on RHEL"
+            )
+
+        return None
+
+    while group["parent"] != "NONE":
+        groups_for_target.append(group)
+        group = groups[group["parent"]]
+
+    groups_for_target.append(group)  # add the "all" group
+    groups_for_target.reverse()  # reverse it so overrides work
+
+    target_packages: List[str] = target.get("packages") or []
+
+    for group in groups_for_target:
+        target_packages = [*target_packages, *(group.get("packages") or [])]
+        target = dict(target, **group)
+
+    target["packages"] = target_packages
+
+    return target
+
+
+def apply_defaults_to_target(target: Dict[str, Any]) -> Dict[str, Any]:
+    def default_if_unset(
+        target: Dict[str, Any], key: str, value: Any
+    ) -> Dict[str, Any]:
+        if key not in target:
+            target[key] = value
+
+        return target
+
+    target = default_if_unset(target, "requires_coverity", False)
+    target = default_if_unset(target, "force_disable_abi", False)
+    target = default_if_unset(
+        target, "minimum_dpdk_version", dict(major=0, minor=0, revision=0)
+    )
+    target = default_if_unset(target, "extra_information", {})
+
+    return target
+
+
+def get_host_arch() -> str:
+    machine: str = platform.machine()
+    match machine:
+        case "aarch64" | "armv8b" | "armv8l":
+            return "linux/arm64"
+        case "ppc64le":
+            return "linux/ppc64le"
+        case "x86_64" | "x64" | "amd64":
+            return "linux/amd64"
+        case arch:
+            raise ValueError(f"Unknown arch {arch}")
+
+
+def process_target(
+    target: Dict[str, Any],
+    raw_inventory: Dict[str, Any],
+    has_coverity: bool,
+    on_rhel: bool,
+    fail_on_unbuildable: bool,
+    host_arch_only: bool,
+    build_timestamp: str,
+) -> Optional[Dict[str, Any]]:
+    target = apply_defaults_to_target(target)
+
+    # Write the build timestamp.
+    target["extra_information"].update({"build_timestamp": build_timestamp})
+
+    if (not has_coverity) and target["requires_coverity"]:
+        print(
+            f"Disabling {target['name']}. Target requires Coverity, and it is not enabled."
+        )
+        return None
+
+    if host_arch_only:
+        host_arch = get_host_arch()
+        if host_arch in target["platforms"]:
+            target["platforms"] = [host_arch]
+        else:
+            return None
+
+    return apply_group_config_to_target(
+        target, raw_inventory, on_rhel, fail_on_unbuildable
+    )
+
+
+def get_processed_inventory(options: Options, build_timestamp: str) -> Dict[str, Any]:
+    raw_inventory: Dict[str, Any] = get_raw_inventory()
+    on_rhel = running_on_RHEL(options)
+    targets = raw_inventory["dockerfiles"]["targets"]
+    targets = [
+        process_target(
+            target,
+            raw_inventory,
+            options.has_coverity,
+            on_rhel,
+            options.fail_on_unbuildable,
+            options.host_arch_only,
+            build_timestamp,
+        )
+        for target in targets
+    ]
+    # Remove disabled targets.
+    targets = [target for target in targets if target is not None]
+    raw_inventory["dockerfiles"]["targets"] = targets
+
+    return raw_inventory
+
+
+def main():
+    options: Options = parse_args()
+
+    env = Environment(
+        loader=FileSystemLoader("templates"),
+    )
+
+    build_timestamp = datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
+
+    inventory = get_processed_inventory(options, build_timestamp)
+
+    if options.date_override:
+        timestamp = options.date_override
+    else:
+        timestamp = datetime.now().strftime("%Y-%m-%d")
+
+    for target in inventory["dockerfiles"]["targets"]:
+        template = env.get_template(f"containers/{target['group']}.dockerfile.j2")
+        dockerfile_location = os.path.join(
+            options.output_dir, target["name"] + ".dockerfile"
+        )
+
+        tags: List[str] = target.get("extra_tags") or []
+
+        tags.insert(0, "$R/$N:$T")
+        if not options.omit_latest:
+            tags.insert(0, "$R/$N:latest")
+        else:
+            tags = list(filter(lambda x: re.match("^.*:latest$", x) is None, tags))
+
+        target["tags"] = tags
+
+        rendered_dockerfile = template.render(
+            timestamp=timestamp,
+            target=target,
+            build_libabigail=options.build_libabigail,
+            build_abi=options.build_abi,
+            build_timestamp=build_timestamp,
+            registry_hostname=options.registry_hostname,
+            ninja_workers=options.ninja_workers,
+            **inventory,
+        )
+        with open(dockerfile_location, "w") as output_file:
+            output_file.write(rendered_dockerfile)
+
+    makefile_template = env.get_template("containers.makefile.j2")
+    rendered_makefile = makefile_template.render(
+        timestamp=timestamp,
+        build_libabigail=options.build_libabigail,
+        build_abi=options.build_abi,
+        host_arch_only=options.host_arch_only,
+        registry_hostname=options.registry_hostname,
+        is_builder=options.is_builder,
+        **inventory,
+    )
+    makefile_output_path = os.path.join(options.output_dir, "Makefile")
+    with open(makefile_output_path, "w") as f:
+        f.write(rendered_makefile)
+
+
+if __name__ == "__main__":
+    logging.basicConfig()
+    logging.root.setLevel(0)  # log everything
+    main()
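
For reviewers unfamiliar with the inventory layout, the parent-chain merge in
apply_group_config_to_target can be illustrated in isolation. The group names
and package lists below are hypothetical examples, not entries from the real
inventory.yaml:

```python
# Standalone sketch of the group-inheritance merge performed by
# apply_group_config_to_target: walk up the parent chain, then merge
# parent-first so that child groups override their parents.
# Group names and packages here are hypothetical, not from inventory.yaml.
from typing import Any, Dict, List

groups: Dict[str, Dict[str, Any]] = {
    "all": {"parent": "NONE", "packages": ["git"], "cc": "gcc"},
    "debian": {"parent": "all", "packages": ["libnuma-dev"], "cc": "clang"},
}
target: Dict[str, Any] = {"name": "debian-12", "group": "debian", "packages": ["meson"]}

# Collect the chain child-to-parent, then reverse it to parent-to-child.
chain: List[Dict[str, Any]] = []
group = groups[target["group"]]
while group["parent"] != "NONE":
    chain.append(group)
    group = groups[group["parent"]]
chain.append(group)  # the "all" group
chain.reverse()

# Packages accumulate across all groups; other keys are overridden,
# with later (more specific) groups winning.
packages: List[str] = target.get("packages") or []
for group in chain:
    packages = [*packages, *(group.get("packages") or [])]
    target = dict(target, **group)
target["packages"] = packages

print(target["cc"])        # "clang" - the child group's value wins
print(target["packages"])  # ['meson', 'git', 'libnuma-dev']
```

Note that `dict(target, **group)` lets group keys clobber target keys, which
is why the reversal matters: merging the most specific group last is what
makes child-group overrides take effect.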