From patchwork Tue Jul 11 20:21:21 2023
X-Patchwork-Submitter: Adam Hassick
X-Patchwork-Id: 129481
From: Adam Hassick
To: ci@dpdk.org
Cc: alialnu@nvidia.com, aconole@redhat.com, Adam Hassick
Subject: [PATCH v7 3/6] containers/builder: Dockerfile creation script
Date: Tue, 11 Jul 2023 16:21:21 -0400
Message-Id: <20230711202124.1636317-4-ahassick@iol.unh.edu>
In-Reply-To: <20230711202124.1636317-1-ahassick@iol.unh.edu>
References: <20230711202124.1636317-1-ahassick@iol.unh.edu>
List-Id: DPDK CI discussions

From: Owen Hilyard

This script templates out all of the Dockerfiles based on the definitions
provided in the inventory, using the jinja2 templating library.
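As an illustration only (not part of the patch), the rendering pattern the script relies on can be sketched with an in-memory template; the template text and target dict below are hypothetical stand-ins for the real templates/ directory and inventory.yaml:

```python
# Minimal sketch of rendering a Dockerfile from an inventory entry with
# jinja2. DictLoader stands in for the FileSystemLoader("templates") used
# by the real script; the target fields here are invented for the example.
from jinja2 import DictLoader, Environment

templates = {
    "containers/example.dockerfile.j2": (
        "FROM {{ target.base_image }}\n"
        "LABEL build_timestamp={{ build_timestamp }}\n"
    )
}
env = Environment(loader=DictLoader(templates))

target = {"group": "example", "base_image": "ubuntu:22.04"}
# Same lookup scheme as the script: one template per target group.
template = env.get_template(f"containers/{target['group']}.dockerfile.j2")
rendered = template.render(target=target, build_timestamp="2023-07-11-00-00-00")
print(rendered)
```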
Signed-off-by: Owen Hilyard
Signed-off-by: Adam Hassick
---
 containers/template_engine/make_dockerfile.py | 358 ++++++++++++++++++
 1 file changed, 358 insertions(+)
 create mode 100755 containers/template_engine/make_dockerfile.py

diff --git a/containers/template_engine/make_dockerfile.py b/containers/template_engine/make_dockerfile.py
new file mode 100755
index 0000000..60269a0
--- /dev/null
+++ b/containers/template_engine/make_dockerfile.py
@@ -0,0 +1,358 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright (c) 2022 University of New Hampshire
+import argparse
+import json
+import logging
+import os
+import re
+from dataclasses import dataclass
+from datetime import datetime
+import platform
+from typing import Any, Dict, List, Optional
+
+import jsonschema
+import yaml
+from jinja2 import Environment, FileSystemLoader, select_autoescape
+
+
+@dataclass(frozen=True)
+class Options:
+    on_rhel: bool
+    fail_on_unbuildable: bool
+    has_coverity: bool
+    build_libabigail: bool
+    build_abi: bool
+    output_dir: str
+    registry_hostname: str
+    host_arch_only: bool
+    omit_latest: bool
+    is_builder: bool
+    date_override: Optional[str]
+    ninja_workers: Optional[int]
+
+
+def _get_arg_parser() -> argparse.ArgumentParser:
+    parser = argparse.ArgumentParser(description="Makes the dockerfile")
+    parser.add_argument("--output-dir", required=True)
+    parser.add_argument(
+        "--rhel",
+        action="store_true",
+        help="Override the check for running on RHEL",
+        default=False,
+    )
+    parser.add_argument(
+        "--fail-on-unbuildable",
+        action="store_true",
+        help="If any container would not be possible to build, fail and exit with a non-zero exit code.",
+        default=False,
+    )
+    parser.add_argument(
+        "--build-abi",
+        action="store_true",
+        help="Whether to build the ABI references into the image. Disabled by \
+            default due to producing 10+ GB images. \
+            Implies '--build-libabigail'.",
+    )
+    parser.add_argument(
+        "--build-libabigail",
+        action="store_true",
+        help="Whether to build libabigail from source for distros that do not \
+            package it. Implied by '--build-abi'.",
+    )
+    parser.add_argument(
+        "--host-arch-only",
+        action="store_true",
+        help="Only build containers for the architecture of the host system",
+    )
+    parser.add_argument(
+        "--omit-latest",
+        action="store_true",
+        help='Whether to omit the "latest" tag from the generated makefile.',
+    )
+    parser.add_argument(
+        "--builder-mode",
+        action="store_true",
+        help='Specifies that the makefile is being templated for a builder. \
+            This implicitly sets "--host-arch-only" to true and disables making the manifests.',
+        default=False,
+    )
+    parser.add_argument(
+        "--date",
+        type=str,
+        help="Overrides generation of the timestamp and uses the provided string instead.",
+    )
+    parser.add_argument(
+        "--ninja-workers",
+        type=int,
+        help="Limits builds to the given number of ninja workers. Uses the ninja default when not given.",
+    )
+    parser.add_argument(
+        "--coverity",
+        action="store_true",
+        help="Whether the Coverity Scan binaries are available for building the Coverity containers.",
+        default=False,
+    )
+    return parser
+
+
+def parse_args() -> Options:
+    parser = _get_arg_parser()
+    args = parser.parse_args()
+
+    registry_hostname = (
+        os.environ.get("DPDK_CI_CONTAINERS_REGISTRY_HOSTNAME") or "localhost"
+    )
+
+    # In order to build the ABIs, libabigail must be built from source on
+    # some platforms.
+    build_libabigail: bool = args.build_libabigail or args.build_abi
+
+    opts = Options(
+        on_rhel=args.rhel,
+        fail_on_unbuildable=args.fail_on_unbuildable,
+        build_libabigail=build_libabigail,
+        build_abi=args.build_abi,
+        output_dir=args.output_dir,
+        registry_hostname=registry_hostname,
+        host_arch_only=args.host_arch_only or args.builder_mode,
+        omit_latest=args.omit_latest,
+        is_builder=args.builder_mode,
+        date_override=args.date,
+        ninja_workers=args.ninja_workers,
+        has_coverity=args.coverity,
+    )
+
+    logging.info(f"make_dockerfile.py options: {opts}")
+    return opts
+
+
+def running_on_RHEL(options: Options) -> bool:
+    """
+    RHEL containers can only be built on RHEL, so disable them and emit a
+    warning if not on RHEL.
+    """
+    redhat_release_path = "/etc/redhat-release"
+
+    if os.path.exists(redhat_release_path):
+        with open(redhat_release_path) as f:
+            first_line = f.readline()
+            on_rhel = "Red Hat Enterprise Linux" in first_line
+            if on_rhel:
+                logging.info("Running on RHEL, allowing RHEL containers")
+                return True
+
+    logging.warning("Not on RHEL, disabling RHEL containers")
+    assert options is not None, "Internal state error, OPTIONS should not be None"
+
+    if options.on_rhel:
+        logging.info("Override enabled, enabling RHEL containers")
+
+    return options.on_rhel
+
+
+def get_path_to_parent_directory() -> str:
+    return os.path.dirname(__file__)
+
+
+def get_raw_inventory():
+    parent_dir = get_path_to_parent_directory()
+
+    schema_path = os.path.join(parent_dir, "inventory_schema.json")
+    inventory_path = os.path.join(parent_dir, "inventory.yaml")
+
+    inventory: Dict[str, Any]
+    with open(inventory_path, "r") as f:
+        inventory = yaml.safe_load(f)
+
+    schema: Dict[str, Any]
+    with open(schema_path, "r") as f:
+        schema = json.load(f)
+
+    jsonschema.validate(instance=inventory, schema=schema)
+    return inventory
+
+
+def apply_group_config_to_target(
+    target: Dict[str, Any],
+    raw_inventory: Dict[str, Any],
+    on_rhel: bool,
+    fail_on_unbuildable: bool,
+) -> Optional[Dict[str, Any]]:
+    groups_for_target: List[Dict[str, Any]] = []
+    groups: List[Dict[str, Any]] = raw_inventory["dockerfiles"]["groups"]
+    group = groups[target["group"]]
+
+    target_primary_group = target["group"]
+
+    assert isinstance(target_primary_group, str), "Target group name was not a string"
+
+    requires_rhel = "rhel" in target_primary_group.lower()
+
+    if requires_rhel and not on_rhel:
+        logging.warning(
+            f"Disabling target {target['name']}, because it must be built on RHEL."
+        )
+        if fail_on_unbuildable:
+            raise AssertionError(
+                f"Not on RHEL and target {target['name']} must be built on RHEL"
+            )
+
+        return None
+
+    while group["parent"] != "NONE":
+        groups_for_target.append(group)
+        group = groups[group["parent"]]
+
+    groups_for_target.append(group)  # add the "all" group
+    groups_for_target.reverse()  # reverse it so overrides work
+
+    target_packages: List[str] = target.get("packages") or []
+
+    for group in groups_for_target:
+        target_packages = [*target_packages, *(group.get("packages") or [])]
+        target = dict(target, **group)
+
+    target["packages"] = target_packages
+
+    return target
+
+
+def apply_defaults_to_target(target: Dict[str, Any]) -> Dict[str, Any]:
+    def default_if_unset(target: Dict[str, Any], key: str, value: Any) -> Dict[str, Any]:
+        if key not in target:
+            target[key] = value
+
+        return target
+
+    target = default_if_unset(target, "requires_coverity", False)
+    target = default_if_unset(target, "force_disable_abi", False)
+    target = default_if_unset(
+        target, "minimum_dpdk_version", dict(major=0, minor=0, revision=0)
+    )
+    target = default_if_unset(target, "extra_information", {})
+
+    return target
+
+
+def get_host_arch() -> str:
+    machine: str = platform.machine()
+    match machine:
+        case "aarch64" | "armv8b" | "armv8l":
+            return "linux/arm64"
+        case "ppc64le":
+            return "linux/ppc64le"
+        case "x86_64" | "x64" | "amd64":
+            return "linux/amd64"
+        case arch:
+            raise ValueError(f"Unknown arch {arch}")
+
+
+def process_target(
+    target: Dict[str, Any],
+    raw_inventory: Dict[str, Any],
+    has_coverity: bool,
+    on_rhel: bool,
+    fail_on_unbuildable: bool,
+    host_arch_only: bool,
+    build_timestamp: str,
+) -> Optional[Dict[str, Any]]:
+    target = apply_defaults_to_target(target)
+
+    # Write the build timestamp.
+    target["extra_information"].update({
+        "build_timestamp": build_timestamp
+    })
+
+    if (not has_coverity) and target["requires_coverity"]:
+        logging.warning(
+            f"Disabling {target['name']}. Target requires Coverity, and it is not enabled."
+        )
+        return None
+
+    if host_arch_only:
+        host_arch = get_host_arch()
+        if host_arch in target["platforms"]:
+            target["platforms"] = [host_arch]
+        else:
+            return None
+
+    return apply_group_config_to_target(
+        target, raw_inventory, on_rhel, fail_on_unbuildable
+    )
+
+
+def get_processed_inventory(options: Options, build_timestamp: str) -> Dict[str, Any]:
+    raw_inventory: Dict[str, Any] = get_raw_inventory()
+    on_rhel = running_on_RHEL(options)
+    targets = raw_inventory["dockerfiles"]["targets"]
+    targets = [
+        process_target(
+            target,
+            raw_inventory,
+            options.has_coverity,
+            on_rhel,
+            options.fail_on_unbuildable,
+            options.host_arch_only,
+            build_timestamp,
+        )
+        for target in targets
+    ]
+    # Remove disabled targets.
+    targets = [target for target in targets if target is not None]
+    raw_inventory["dockerfiles"]["targets"] = targets
+
+    return raw_inventory
+
+
+def main():
+    options: Options = parse_args()
+
+    env = Environment(
+        loader=FileSystemLoader("templates"),
+    )
+
+    build_timestamp = datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
+
+    inventory = get_processed_inventory(options, build_timestamp)
+
+    if options.date_override:
+        timestamp = options.date_override
+    else:
+        timestamp = datetime.now().strftime("%Y-%m-%d")
+
+    for target in inventory["dockerfiles"]["targets"]:
+        template = env.get_template(f"containers/{target['group']}.dockerfile.j2")
+        dockerfile_location = os.path.join(
+            options.output_dir, target["name"] + ".dockerfile"
+        )
+
+        tags: List[str] = target.get("extra_tags") or []
+
+        tags.insert(0, "$R/$N:$T")
+        if not options.omit_latest:
+            tags.insert(0, "$R/$N:latest")
+        else:
+            tags = list(filter(lambda x: re.match("^.*:latest$", x) is None, tags))
+
+        target["tags"] = tags
+
+        rendered_dockerfile = template.render(
+            timestamp=timestamp,
+            target=target,
+            build_libabigail=options.build_libabigail,
+            build_abi=options.build_abi,
+            build_timestamp=build_timestamp,
+            registry_hostname=options.registry_hostname,
+            ninja_workers=options.ninja_workers,
+            **inventory,
+        )
+        with open(dockerfile_location, "w") as output_file:
+            output_file.write(rendered_dockerfile)
+
+    makefile_template = env.get_template("containers.makefile.j2")
+    rendered_makefile = makefile_template.render(
+        timestamp=timestamp,
+        build_libabigail=options.build_libabigail,
+        build_abi=options.build_abi,
+        host_arch_only=options.host_arch_only,
+        registry_hostname=options.registry_hostname,
+        is_builder=options.is_builder,
+        **inventory,
+    )
+    makefile_output_path = os.path.join(options.output_dir, "Makefile")
+    with open(makefile_output_path, "w") as f:
+        f.write(rendered_makefile)
+
+
+if __name__ == "__main__":
+    logging.basicConfig()
+    logging.root.setLevel(0)  # log everything
+    main()
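
For review purposes, the group-inheritance logic in apply_group_config_to_target can be sketched standalone. The group and target data below are invented for the example (the real data comes from inventory.yaml): groups are walked from the target's group up to the root "all" group, then applied most-generic first so more specific groups override scalar settings, while package lists accumulate across all levels.

```python
# Sketch of the inheritance walk: collect the chain of groups, reverse it,
# then fold each group into the target dict (later dicts win) while
# concatenating the "packages" lists instead of overriding them.
from typing import Any, Dict, List

groups: Dict[str, Dict[str, Any]] = {
    "all": {"parent": "NONE", "packages": ["gcc"], "cc": "gcc"},
    "fedora": {"parent": "all", "packages": ["dnf-utils"], "cc": "clang"},
}
target: Dict[str, Any] = {"group": "fedora", "packages": ["ninja"]}

chain: List[Dict[str, Any]] = []
group = groups[target["group"]]
while group["parent"] != "NONE":
    chain.append(group)
    group = groups[group["parent"]]
chain.append(group)  # add the "all" group
chain.reverse()      # most generic first, so specific groups override

packages: List[str] = target.get("packages") or []
for group in chain:
    packages = [*packages, *(group.get("packages") or [])]
    target = dict(target, **group)
target["packages"] = packages

print(target["cc"], target["packages"])
# "fedora" overrides cc from "all"; packages from every level are kept.
```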