
build-definitions

This repository contains components that are installed and managed by the Managed CI and Build Team.

This includes the default Pipelines and Tasks. To develop pipelines or new tasks, you need to have bootstrapped a working App Studio configuration from https://github.com/redhat-appstudio/infra-deployments.

Pipelines and Tasks are delivered into App Studio via the quay.io organization konflux-ci/tekton-catalog. Pipelines are bundled and pushed into repositories prefixed with pipeline- and tagged with $GIT_SHA (the tag is updated with every change). Tasks are bundled and pushed into repositories prefixed with task- and tagged with $VERSION, where VERSION is the task version (the tag is updated whenever a PR changes the task file).
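For example, a task bundle can be inspected straight from the registry. The repository name task-git-clone and tag 0.1 below are illustrative, and assume skopeo and the tkn CLI are installed:

# List available tags of a task bundle, then print the task it contains.
skopeo list-tags docker://quay.io/konflux-ci/tekton-catalog/task-git-clone
tkn bundle list quay.io/konflux-ci/tekton-catalog/task-git-clone:0.1 task git-clone -o yaml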

Currently, a set of utilities is bundled with App Studio in quay.io/konflux-ci/appstudio-utils:$GIT_SHA as a convenience, but tasks may be run from different per-task containers.

Building

The script hack/build-and-push.sh creates bundles for pipelines and tasks and builds the appstudio-utils image. Images are pushed into your quay.io repository. To use this feature you need to set QUAY_NAMESPACE and be logged into quay.io on your workstation. Once you run hack/build-and-push.sh, all pipelines will come from your bundle instead of the default installed by GitOps into the cluster.
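A minimal invocation looks like the following; the namespace my-quay-user is a placeholder, and podman login can be swapped for docker login:

# Log in to quay.io, then build and push all bundles into your namespace.
podman login quay.io
QUAY_NAMESPACE=my-quay-user hack/build-and-push.sh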

Note

If you're using macOS, you need to install GNU coreutils before running the hack/build-and-push.sh script:

brew install coreutils

There is also an option to push all bundles to a single quay.io repository (this method is used in PR testing) by setting the TEST_REPO_NAME environment variable. Bundle names are then encoded in the container image tag, i.e. quay.io/<quay-user>/$TEST_REPO_NAME:<bundle-name>-<tag>
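For example (the namespace, repository name, and resulting tag are placeholders):

# All bundles land in one repository, distinguished by tag.
QUAY_NAMESPACE=my-quay-user TEST_REPO_NAME=test-bundles hack/build-and-push.sh
# pushes e.g. quay.io/my-quay-user/test-bundles:task-git-clone-0.1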

Pipelines

The pipelines can be found in the pipelines directory.

  • core-services: contains pipelines for the CI of Stonesoup core services, e.g. application-service and build-service.
  • template-build: contains the common template used to generate the docker-build, fbc-builder, java-builder and nodejs-builder pipelines.

Tasks

The tasks can be found in the tasks directories. Tasks are bundled and used by the bundled pipelines; they are not stored in the cluster. For quick, inner-loop-style task development, you may manually install new Tasks in your local namespace, along with your pipelines and the base task image, to test new functionality. Tasks can be installed into the local namespace using oc apply -k tasks/appstudio-utils/util-tasks.
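For example, to iterate on a single task (the git-clone task is used here only as an illustration):

# Install one task definition into the current namespace and run it with tkn.
oc apply -f task/git-clone/0.1/git-clone.yaml
tkn task start git-clone --showlog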

There is a container, quay.io/konflux-ci/appstudio-utils:$GIT_SHA, that supports multiple sets of tasks: it is a single image used by several tasks. Tasks may also live in their own containers; however, many simple tasks are utilities and are packaged for App Studio in this single container. Tasks can rely on other tasks in the system that are co-packaged in the same container, allowing combined tasks (build-only vs. build-deploy) to share the same core implementations.

Shellspec tests can be run by invoking hack/test-shellspec.sh.

StepActions

The StepActions can be found in the stepactions directory. StepActions are not yet bundled.

Release

Release is done by running the following (better to leave it to the push pipeline):

for quay_namespace in redhat-appstudio-tekton-catalog konflux-ci/tekton-catalog; do
  QUAY_NAMESPACE=$quay_namespace BUILD_TAG=$(git rev-parse HEAD) hack/build-and-push.sh
done

Versioning

When a task update changes the interface (e.g. a change of parameter, workspace or result names), a new version of the task should be created. The folder with the new version must contain a MIGRATION.md with instructions on how to update the current pipeline file in the user's .tekton folder.
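A hypothetical layout after bumping the buildah task from version 0.1 to 0.2:

ls task/buildah/0.2/
# MIGRATION.md  buildah.yaml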

Adding a new parameter with a default value does not require a task version increase.

A task version increase must be approved by the Project Manager.

Testing

The script ./hack/test-builds.sh creates pipelines and tasks directly in the current namespace and executes test builds there. By setting the QUAY_NAMESPACE environment variable, the images will be pushed into the user's quay repository; in that case, a secret named redhat-appstudio-staginguser-pull-secret must be created.
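A sketch of creating that pull secret; the username and token are placeholders for your quay.io credentials:

oc create secret docker-registry redhat-appstudio-staginguser-pull-secret \
  --docker-server=quay.io \
  --docker-username=<quay-user> \
  --docker-password=<quay-token>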

The script ./hack/test-build.sh provides a way to test a custom git repository and pipeline. Usage example: ./hack/test-build.sh https://github.com/jduimovich/spring-petclinic java-builder.

Compliance

Task definitions must comply with Enterprise Contract policies. Currently, there are two policy configurations: the all-tasks policy configuration applies to all Task definitions, while the build-tasks policy configuration applies only to build Task definitions. A build Task, i.e. one that produces a container image, must abide by both policy configurations.
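A sketch of validating a task definition locally with the Enterprise Contract CLI; the validate definition subcommand usage and the policy path are assumptions, not taken from this repo:

# Check one task definition against a policy configuration (assumed CLI usage).
ec validate definition --file task/buildah/0.1/buildah.yaml --policy ./policy/all-tasks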

Contributors

amisstea, arewm, brianwcook, brunoapimentel, chmeliik, dirgim, gabemontero, gbenhaim, hongweiliu17, jduimovich, jinqi7, joejstuart, jsztuka, lcarva, martinbasti, michkov, mkosiarc, mmorhun, psturc, ralphbean, renovate[bot], rh-tap-build-team[bot], sbose78, simonbaird, sonam1412, stuartwdouglas, tisutisu, tkdchen, yftacherzog, zregvart


Issues

Build the Pipelines Catalog using tekton on Staging

Acceptance criteria:

Upon merge to https://github.com/redhat-appstudio/build-definitions, everything inside https://github.com/redhat-appstudio/build-definitions/tree/main/pipelines/build-templates-bundle should be built into a new bundle as deemed relevant.

Implementation notes

etc-pki-entitlement secret ignored after #1207

I have a component that has an etc-pki-entitlement secret for accessing RHEL repositories but no activation-key secret.

With the d4262b9 version of the buildah task, I see in my logs:

[build] Adding the entitlement to the build
[build] Adding activation key to the build

With 2099988 (failing PR for my component), I see only:

[build] Adding activation key to the build

I believe the checks in the code:

      if [ -d "$ACTIVATION_KEY_PATH" ]; then
        cp -r --preserve=mode "$ACTIVATION_KEY_PATH" /tmp/activation-key
        mkdir /shared/rhsm-tmp
        VOLUME_MOUNTS="${VOLUME_MOUNTS} --volume /tmp/activation-key:/activation-key -v /shared/rhsm-tmp:/etc/pki/entitlement:Z"
        echo "Adding activation key to the build"
      elif [ -d "$ENTITLEMENT_PATH" ]; then
        cp -r --preserve=mode "$ENTITLEMENT_PATH" /tmp/entitlement
        VOLUME_MOUNTS="${VOLUME_MOUNTS} --volume /tmp/entitlement:/etc/pki/entitlement"
        echo "Adding the entitlement to the build"
      fi

are faulty: if the optional secrets are missing, the mount points are still created, they are just empty directories. The Kubernetes documentation doesn't specify what happens for a missing optional secret, but see this Stack Exchange question.
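A minimal sketch of a fix, assuming an emptiness test on the mount point is acceptable (the ls -A check is the assumption; the volume-mount lines are elided for brevity):

      # Treat an empty mount point the same as a missing optional secret.
      if [ -d "$ACTIVATION_KEY_PATH" ] && [ -n "$(ls -A "$ACTIVATION_KEY_PATH")" ]; then
        echo "Adding activation key to the build"
      elif [ -d "$ENTITLEMENT_PATH" ] && [ -n "$(ls -A "$ENTITLEMENT_PATH")" ]; then
        echo "Adding the entitlement to the build"
      fi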

deprecated-image-check is not multiarch aware

The task starts by downloading an SBOM to figure out the base images, but this doesn't work for multi-arch images unless you specify which SBOM to download.

It should probably download all of them and check each one if it's a multi-arch image.
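A hedged sketch of enumerating the per-arch manifests of an image index so that each architecture's SBOM could be fetched and checked; the image reference is a placeholder:

# Print os/arch and digest for every manifest in an image index.
skopeo inspect --raw docker://quay.io/example/app:latest \
  | jq -r '.manifests[]? | "\(.platform.os)/\(.platform.architecture) \(.digest)"'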

Upload Snyk SAST results to snyk.io

For RHTAS prodsec compliance, we have been asked to provide links to systems or documents showing how SAST findings are analyzed and processed by the team. With the current Konflux configuration, this is not straightforward. Snyk Code results are stored as pipeline results but are not shipped anywhere else during a pipelinerun. It would be ideal for us if the Snyk results could be published to snyk.io using the --report option of snyk code test. This feature appears to be in private beta, but it has been for about a year; perhaps we could use it?
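A sketch of what the check step might run if we adopted this; --report is the private-beta option cited above, and the --project-name flag is an assumption:

# Publish Snyk Code results to snyk.io in addition to the pipeline results.
snyk code test --report --project-name="$COMPONENT_NAME"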

Add `zstd:chunked` compression to built images.

There's a long arc of changes in container tooling to support zstd:chunked compression, which offers better compression and better download capabilities in clients. However, old versions of docker (pre-2023) don't support it, which has held up adoption.

The request here is to add zstd:chunked compression in addition to the normal gzip compression for images when we push an OCI image index - so that new clients can take advantage of it, and old clients don't break. To illustrate, for an ordinary multi-arch image index, you would have 4 entries, one for each arch. The idea here is to have 8 entries, one for each arch with gzip compression, and one for each arch with zstd:chunked compression.

Idea number 1: I looked, and buildah manifest push has an --add-compression zstd:chunked option which (I think) will automatically add the extra image manifests on push.

However, I think we might have issues with image provenance. In a Konflux multi-arch build pipeline, we build each arch separately in a buildah task. Those produce image manifests, and their digests are recorded and provenance information is associated with them. We then produce the image index, record its digest, and record stuff there too.

When we validate an image at release time, we ensure that we can find provenance for both the image index as well as all of the referenced image manifests.

If we "magically" add an extra four manifests when we create the image index, then our policy check will fail unless their digests and pullspecs are somehow also exposed, so that chains can attest to their origin.

Idea number 2: Instead, perhaps our buildah tasks should be modified to first push the image with gzip compression, and then call buildah push again with a --compression-format zstd:chunked option to generate the second image manifest then, earlier in the process. Our buildah task would need to expose both digests... hm.
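A minimal sketch of idea number 2, using the flags cited above (the digest file paths are illustrative):

# Push the gzip-compressed manifest, then push again with zstd:chunked,
# capturing both digests so the task can expose them as results.
buildah push --digestfile /tmp/digest-gzip "$IMAGE" "docker://$IMAGE"
buildah push --compression-format zstd:chunked --digestfile /tmp/digest-zstd "$IMAGE" "docker://$IMAGE"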

Idea number 3: Perhaps we should add a new intermediary task that accepts the digest of the image-manifest producing buildah task as a param, pulls it, and then re-pushes it with zstd:chunked. This new task would expose the pullspec and digest of the zstd:chunked manifest as a result, which will induce chains to record provenance for it. The task that builds the image index would then take 8 inputs instead of 4. All 8 would be explicitly added to the image index.

Task "source-build" fails - ERROR:failed to build source image

Issue

The task "source-build" fails using our PipelineRun - https://github.com/redhat-buildpacks/builder-ubi-base/blob/main/.tekton/pipelinerun-builder-ubi-base.yaml#L438-L457 and reports such an error:

step-get-base-images
BASE_IMAGES param received:
sha256:cf6762b050e84d6c5cf31542f339edf312d9cb10a85bea92c27b1a86bb437d2d

step-build
2024-09-13 10:08:33,969:source-build:DEBUG:workspace directory /var/source-build
2024-09-13 10:08:33,969:source-build:DEBUG:working directory /var/source-build/source-build
2024-09-13 10:08:33,975:build-source.source-archive:DEBUG:Stashing any changes to working repo ['git', 'stash']
No local changes to save
2024-09-13 10:08:34,007:build-source.source-archive:DEBUG:Collecting timestamp of the commit at HEAD ['git', 'show', '-s', '--format=%cI']
2024-09-13 10:08:34,010:build-source.source-archive:DEBUG:Generate source repo file list ['git', 'ls-files', '--recurse-submodules', '-z']
2024-09-13 10:08:34,013:build-source.source-archive:DEBUG:Generate source archive ['tar', 'caf', '/var/source-build/source-build/source_archive/builder-ubi-base-c44a5c1d37588e85733dc49878b5ff9e8a2c1dd5.tar.gz', '--mtime', '2024-09-13T12:03:44+02:00', '--transform', 's,^,builder-ubi-base-c44a5c1d37588e85733dc49878b5ff9e8a2c1dd5/,', '--null', '-T-']
2024-09-13 10:08:34,541:build-source.source-archive:DEBUG:Popping any stashed changes to working repo ['git', 'stash', 'pop']
No stash entries found.
2024-09-13 10:08:34,543:build-source.source-archive:INFO:add source archive directory to sources for bsi: /var/source-build/source-build/source_archive
2024-09-13 10:08:34,543:source-build:INFO:Image sha256:cf6762b050e84d6c5cf31542f339edf312d9cb10a85bea92c27b1a86bb437d2d does not come from supported allowed registry.
2024-09-13 10:08:34,791:source-build:ERROR:failed to build source image
Traceback (most recent call last):
  File "/opt/source_build/source_build.py", line 1070, in main
    build_result = build(build_args)
                   ^^^^^^^^^^^^^^^^^
  File "/opt/source_build/source_build.py", line 1036, in build
    source_image = resolve_source_image(base_image, args.registry_allowlist)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/source_build/source_build.py", line 465, in resolve_source_image
    return resolve_source_image_by_manifest(binary_image)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/source_build/source_build.py", line 444, in resolve_source_image_by_manifest
    source_image = generate_konflux_source_image(image)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/source_build/source_build.py", line 400, in generate_konflux_source_image
    digest = fetch_image_manifest_digest(cleaned_image)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/source_build/source_build.py", line 192, in fetch_image_manifest_digest
    return run(cmd, check=True, text=True, capture_output=True).stdout.strip()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/subprocess.py", line 571, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['skopeo', 'inspect', '--format', '{{.Digest}}', '--no-tags', '--retry-times', '5', 'docker://sha256:cf6762b050e84d6c5cf31542f339edf312d9cb10a85bea92c27b1a86bb437d2d']' returned non-zero exit status 1.
2024-09-13 10:08:34,792:source-build:INFO:build result {"status": "failure", "message": "Command '['skopeo', 'inspect', '--format', '{{.Digest}}', '--no-tags', '--retry-times', '5', 'docker://sha256:cf6762b050e84d6c5cf31542f339edf312d9cb10a85bea92c27b1a86bb437d2d']' returned non-zero exit status 1."}
2024-09-13 10:08:34,792:source-build:INFO:write build result into file /tekton/results/BUILD_RESULT
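The failing call reproduces outside the task, which suggests BASE_IMAGES received a bare digest rather than a full repository@digest pullspec (an inference from the log above, not stated elsewhere in the issue):

# A bare digest is not a valid image reference: skopeo has no registry/repository to resolve.
skopeo inspect --format '{{.Digest}}' --no-tags --retry-times 5 \
  'docker://sha256:cf6762b050e84d6c5cf31542f339edf312d9cb10a85bea92c27b1a86bb437d2d'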

Replace ClusterTask by something else

I am trying to use the build-definitions with https://github.com/openshift-pipelines/pipelines-service.

With some fixes in pipelines-service (https://github.com/openshift-pipelines/pipelines-service/compare/main...guillaumerose:builddef?expand=1), the PR is triggered, but I hit an issue: the ClusterTask kind doesn't exist on KCP, and all the definitions use it. How could we solve that?

Events:
  Type     Reason         Age   From         Message
  ----     ------         ----  ----         -------
  Normal   Started        29s   PipelineRun  
  Warning  Failed         27s   PipelineRun  Pipeline /noop can't be Run; it contains Tasks that don't exist: Couldn't retrieve Task "openshift-client": the server could not find the requested resource (get clustertasks.tekton.dev openshift-client)
  Warning  InternalError  27s   PipelineRun  1 error occurred:
           * Couldn't retrieve Task "openshift-client": the server could not find the requested resource (get clustertasks.tekton.dev openshift-client)

buildah-remote task: concurrency problems with source workspace

With the buildah-remote task, there are problems if several are run at the same time with the same workspace. For example, I saw:

  • arch1 completes, $(workspaces.source.path)/rhtap-final-image is rsynced back from the arch1 remote
  • arch2 starts, rhtap-final-image is rsynced to the arch2 remote
  • The arch2 image is added to rhtap-final-image
  • arch2 completes, rhtap-final-image is rsynced back from the arch2 remote - with both architectures
  • buildah pull oci:rhtap-final-image fails because the directory now has two images and buildah doesn't know which to use

But all sorts of other failures and corruptions can occur. The workaround people have used for this is to use separate workspaces and duplicate the prefetch-dependencies steps. That works, with these downsides:

  • You have to know to use that workaround and why
  • A more complex, harder to maintain pipeline
  • Right now, cachi2 can't prefetch for only one architecture, making this approach much less efficient.

As far as I can tell, this could be fixed fairly simply by making the buildah/buildah-remote tasks align more closely with the buildah-remote-oci tasks - using an emptyDir workdir for temporary files rather than putting them in the workspace.

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

These problems occurred while renovating this repository.

  • WARN: No docker auth found - returning

⚠ Dependency Lookup Warnings ⚠

  • Renovate failed to look up the following dependencies: Failed to look up docker package registry.redhat.io/openshift4/ose-tools-rhel8, Failed to look up docker package registry.redhat.io/openshift4/ose-cli, Could not determine new digest for update (datasource: docker).

Files affected: .tekton/pull-request.yaml, task/init/0.1/init.yaml, task/s2i-java/0.1/s2i-java.yaml, task/s2i-nodejs/0.1/s2i-nodejs.yaml, task/summary/0.1/summary.yaml



Detected dependencies

ansible
.tekton/tasks/yaml-lint.yaml
dockerfile
appstudio-utils/Dockerfile
syft.Dockerfile
  • registry.access.redhat.com/ubi8 8.7-1090
github-actions
.github/workflows/shellspec.yaml
  • actions/checkout v3
  • jerop/tkn v0.1.0
tekton
.tekton/pull-request.yaml
  • registry.redhat.io/openshift4/ose-tools-rhel8 v4.12
  • registry.redhat.io/openshift4/ose-cli v4.12@sha256:9f0cdc00b1b1a3c17411e50653253b9f6bb5329ea4fb82ad983790a6dbf2d9ad
.tekton/push.yaml
.tekton/tasks/buildah.yaml
.tekton/tasks/yaml-lint.yaml
  • docker.io/cytopia/yamllint 1.26@sha256:1bf8270a671a2e5f2fea8ac2e80164d627e0c5fa083759862bbde80628f942b2
task/buildah/0.1/buildah.yaml
  • quay.io/redhat-appstudio/buildah v1.28
  • quay.io/redhat-appstudio/syft v0.47.0
  • registry.access.redhat.com/ubi9/python-39 1-108
  • quay.io/redhat-appstudio/cosign v1.13.1
task/clair-scan/0.1/clair-scan.yaml
  • quay.io/redhat-appstudio/hacbs-test v1.0.11@sha256:acf4e35adfbe16916d400f36b616236d872c2527c7618ffc6758ae930e353668
  • quay.io/redhat-appstudio/hacbs-test v1.0.11@sha256:acf4e35adfbe16916d400f36b616236d872c2527c7618ffc6758ae930e353668
task/clamav-scan/0.1/clamav-scan.yaml
  • quay.io/redhat-appstudio/hacbs-test v1.0.11@sha256:acf4e35adfbe16916d400f36b616236d872c2527c7618ffc6758ae930e353668
  • quay.io/redhat-appstudio/hacbs-test v1.0.11@sha256:acf4e35adfbe16916d400f36b616236d872c2527c7618ffc6758ae930e353668
  • quay.io/redhat-appstudio/hacbs-test v1.0.11@sha256:acf4e35adfbe16916d400f36b616236d872c2527c7618ffc6758ae930e353668
task/deprecated-image-check/0.1/deprecated-image-check.yaml
  • registry.access.redhat.com/ubi8/ubi-minimal 8.7-1085@sha256:dc06ba83c6f47fc0a2bca516a9b99f1cf8ef37331fd460f4ca55579a815ee6cb
  • quay.io/redhat-appstudio/hacbs-test v1.0.11@sha256:acf4e35adfbe16916d400f36b616236d872c2527c7618ffc6758ae930e353668
task/fbc-related-image-check/0.1/fbc-related-image-check.yaml
  • quay.io/redhat-appstudio/hacbs-test v1.0.11@sha256:acf4e35adfbe16916d400f36b616236d872c2527c7618ffc6758ae930e353668
task/fbc-validation/0.1/fbc-validation.yaml
  • quay.io/redhat-appstudio/hacbs-test v1.0.11@sha256:acf4e35adfbe16916d400f36b616236d872c2527c7618ffc6758ae930e353668
task/git-clone/0.1/git-clone.yaml
task/init/0.1/init.yaml
  • registry.redhat.io/openshift4/ose-tools-rhel8 v4.12@sha256:253d042ecfad7b64593112a4aa3f528d39cb5fe738852e44f009db87964cf051
task/prefetch-dependencies/0.1/prefetch-dependencies.yaml
  • quay.io/containerbuildsystem/cachi2 sha256:bd8abcd9782af134d3c0d2f91cd469424ce413195dcfc050a7321ae0b29f5507
task/s2i-java/0.1/s2i-java.yaml
  • registry.redhat.io/ocp-tools-4-tech-preview/source-to-image-rhel8 sha256:637c15600359cb45bc01445b5e811b6240ca239f0ebfe406b50146e34f68f631
  • quay.io/redhat-appstudio/syft v0.47.0
  • registry.access.redhat.com/ubi8/python-39 1-105
  • quay.io/redhat-appstudio/cosign v1.13.1
task/s2i-nodejs/0.1/s2i-nodejs.yaml
  • registry.redhat.io/ocp-tools-4-tech-preview/source-to-image-rhel8 sha256:e518e05a730ae066e371a4bd36a5af9cedc8686fd04bd59648d20ea0a486d7e5
  • quay.io/redhat-appstudio/syft v0.47.0
  • registry.access.redhat.com/ubi9/python-39 1-108
  • quay.io/redhat-appstudio/cosign v1.13.1
task/sanity-inspect-image/0.1/sanity-inspect-image.yaml
  • quay.io/redhat-appstudio/hacbs-test v1.0.11@sha256:acf4e35adfbe16916d400f36b616236d872c2527c7618ffc6758ae930e353668
task/sanity-label-check/0.1/sanity-label-check.yaml
  • quay.io/redhat-appstudio/hacbs-test v1.0.11@sha256:acf4e35adfbe16916d400f36b616236d872c2527c7618ffc6758ae930e353668
task/sast-go/0.1/sast-go.yaml
task/sast-java-sec-check/0.1/sast-java-sec-check.yaml
task/sast-snyk-check/0.1/sast-snyk-check.yaml
  • quay.io/redhat-appstudio/hacbs-test v1.0.11@sha256:acf4e35adfbe16916d400f36b616236d872c2527c7618ffc6758ae930e353668
task/sbom-json-check/0.1/sbom-json-check.yaml
  • quay.io/redhat-appstudio/hacbs-test v1.0.11@sha256:acf4e35adfbe16916d400f36b616236d872c2527c7618ffc6758ae930e353668
task/summary/0.1/summary.yaml
  • registry.redhat.io/openshift4/ose-cli v4.12@sha256:9f0cdc00b1b1a3c17411e50653253b9f6bb5329ea4fb82ad983790a6dbf2d9ad
task/tkn-bundle/0.1/tkn-bundle.yaml
  • quay.io/openshift-pipeline/openshift-pipelines-cli-tkn 1.10
task/update-infra-deployments/0.1/update-infra-deployments.yaml
  • quay.io/chmouel/github-app-token sha256:b4f2af12e9beea68055995ccdbdb86cfe1be97688c618117e5da2243dc1da18e


Simplify how images are built in this repo

This repo creates multiple images for delivery into the App Studio platform. Default pipelines are delivered into quay.io/redhat-appstudio/build-templates-bundle. The base image for the ClusterTasks is quay.io/redhat-appstudio/appstudio-utils.

The versions for these images are maintained separately in each directory, which leads to confusion.

Options

  1. We change the release build to build both images at the same version. This requires the cluster tasks, which are defined in the App Studio environment, to get a version bump every time any new release occurs.
  2. We write some update scripts which update the infra-deployments repository contents with the correct versions of the two (or more) images, to prevent wrong image updates.

I think we should do both, and embed the automation in the .ci script for building and then installing new pipelines into App Studio.

RFE: way to set up an "install root" within a buildah task

In various contexts, we want to install packages into a subdirectory of the root directory, and then use that subdirectory as the root directory of a subsequent stage. Examples include:

In all of these cases, we really don't want to install into a bare directory - this will result in weirdness like redirections to /dev/null going into an actual file. The directory at least needs a skeleton /dev populated, and would ideally have /proc and /sys as well.

There are two basic ways that this could be enabled:

  • Allow running buildah build with --cap-add=all --security-opt=label=disable; this is sufficient to allow the Containerfile to create its own mount namespace and populate the root directory itself.

  • Instead, specify the install directory as a task parameter, and have the buildah task add something like:

           --device=/dev/null:"$INSTALL_ROOT"/dev/null
           --device=/dev/random:"$INSTALL_ROOT"/dev/random
           --device=/dev/urandom:"$INSTALL_ROOT"/dev/urandom
           --device=/dev/zero:"$INSTALL_ROOT"/dev/zero
    

The advantage of the first approach is that it is more flexible and satisfies some other needs for nested sandboxing [1]. It may be the only way that actually works for the bootc case [@cgwalters - is that right?]. On the other hand, the second approach is more obviously secure. [2]

Footnotes

  1. To delve into this a bit: if we were using buildah with user namespaces, then --cap-add=all would not be necessary, since the Containerfile could create a nested user namespace in which it had the SYS_ADMIN capability, and then create a mount namespace and set things up there. But with --isolation=chroot the RUN commands execute in an active chroot, and the Linux kernel disables user namespace creation when a chroot is active, so we have to actually give the SYS_ADMIN capability to the subprocess so it can create a mount namespace.

  2. --cap-add=all will not compromise overall system security, since we're counting on the pod running the task for that, and buildah --isolation=chroot is already not that strong. And of course, an actual malicious component could already specify its own pipeline with its own build steps. The main weakness it introduces is that it could make it easier for malicious content pulled into the Containerfile during the build to mess with the build artifacts and build metadata.

Use commit-timestamp in tasks in pursuit of reproducible builds

There are lots of challenges to producing bit-wise reproducible builds. One of them is to get all tools all the way through the build chain to use the same time. See https://reproducible-builds.org/docs/ and in particular https://reproducible-builds.org/docs/source-date-epoch/

  • We should expose the commit timestamp as SOURCE_DATE_EPOCH in all of our tasks. Not all tools respect it, but it is the standard environment variable that they should respect.
  • Notably, buildah doesn't respect it, but it does accept a --timestamp argument that will help produce the same bits, assuming that nothing inside the Containerfile being built is sensitive to the time.
  • We should also provide --build-arg SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH to each buildah build, so that if a Containerfile accepts it as an ARG, it can take advantage of it.

Doing these things won't give us 100% bit-wise reproducible builds, but it will close some gaps in that direction.
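A hedged sketch of wiring this into a build step; the buildah flags are as described above, everything else is illustrative:

# Derive the epoch from the commit being built and pass it to buildah.
SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
export SOURCE_DATE_EPOCH
buildah build \
  --timestamp "$SOURCE_DATE_EPOCH" \
  --build-arg SOURCE_DATE_EPOCH="$SOURCE_DATE_EPOCH" \
  -t example/image:latest .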

RUN <<EOF without an interpreter and cachi2

RUN <<EOF
mkdir -p /etc/foo
cp -aR /src/foo /etc/foo
EOF

gets turned into:

RUN . /cachi2/cachi2.env && <<EOF
mkdir -p /etc/foo
cp -aR /src/foo /etc/foo
EOF

which is weirdly not a syntax error, but a no-op. The right transformation would be something like:

RUN . /cachi2/cachi2.env && /bin/sh <<EOF
mkdir -p /etc/foo
cp -aR /src/foo /etc/foo
EOF

Once you consider RUN --mount=type=bind,src=/a,dest=/b <<EOF, wedging this into the current code gets a little tricky.

buildah: Use either openat2(RESOLVE_BENEATH) or don't follow links

I came across this bit of code that runs after the just-built image is mounted (to be passed to scanners):

https://github.com/konflux-ci/build-definitions/blame/38c6cd3f4733ed1ee638ce43bacd1096e3e5076d/task/buildah-remote/0.2/buildah-remote.yaml#L487

What would be a lot less ugly than just blowing away all symbolic links is to use Linux's openat2 system call with RESOLVE_IN_ROOT, which allows a process to safely inspect a distinct root and resolve any symlinks as if they were in that root.

Or, perhaps simpler: just don't follow symlinks in whatever is doing this scanning. (Why would it traverse symlinks?)

Consume the Pipeline definitions for App Studio Components
