
oci-build-task's Introduction

oci-build task

A Concourse task for building OCI images. Currently uses buildkit for building.

A stretch goal of this is to support running without privileged: true, though it currently still requires it.

usage

The task implementation is available as an image on Docker Hub at concourse/oci-build-task. (This image is built from Dockerfile using the oci-build task itself.)

This task implementation started as a spike to explore patterns around reusable tasks to hopefully lead to a proper RFC. Until that RFC is written and implemented, configuration is still done by way of providing your own task config as follows:

image_resource

First, your task needs to point to the oci-build-task image:

image_resource:
  type: registry-image
  source:
    repository: concourse/oci-build-task

params

Next, any of the following optional parameters may be specified:

(As a convention in the list below, all task parameters are written with a leading $, as a reminder of their environment-variable nature, just like the $VAR syntax one would use in a shell. When specifying them in the params: YAML dictionary of a task definition, though, the leading $ is omitted, as readers will notice in the examples below.)

  • $CONTEXT (default .): the path to the directory to provide as the context for the build.

  • $DOCKERFILE (default $CONTEXT/Dockerfile): the path to the Dockerfile to build.

  • $BUILDKIT_SSH: the location of an ssh key to be mounted in your Dockerfile. This is generally used for pulling dependencies from private repositories.

    For example, in your Dockerfile you can mount a key as:

    RUN --mount=type=ssh,id=github_ssh_key pip install -U -r ./hats/requirements-test.txt
    

    Then in your Concourse YAML configuration:

    params:
      BUILDKIT_SSH: github_ssh_key=<PATH-TO-YOUR-KEY>
    

    Read more about ssh mounts in the Docker build documentation.

  • $BUILD_ARG_*: params prefixed with BUILD_ARG_ will be provided as build args. For example, BUILD_ARG_foo=bar will set the foo build arg to bar.

  • $BUILD_ARGS_FILE (default empty): path to a file containing build args in the form foo=bar, one per line. Empty lines are skipped.

    Example file contents:

    [email protected]
    HOW_MANY_THINGS=1
    DO_THING=false
    
  • $BUILDKIT_SECRET_*: files with extra secrets which are made available via --mount=type=secret,id=.... See New Docker Build secret information for more information on build secrets.

    For example, running with BUILDKIT_SECRET_config=my-repo/config will allow you to do the following...

    RUN --mount=type=secret,id=config cat /run/secrets/config
    
  • $BUILDKIT_SECRETTEXT_*: literal text of extra secrets to be made available via the same mechanism described for $BUILDKIT_SECRET_* above. The difference is that this is easier to use with credential managers:

    BUILDKIT_SECRETTEXT_mysecret=(( mysecret )) puts the content that (( mysecret )) expands to in /run/secrets/mysecret.

  • $IMAGE_ARG_*: params prefixed with IMAGE_ARG_ point to image tarballs (i.e. docker save format) to preload so that they do not have to be fetched during the build. An image reference will be provided as the given build arg name. For example, IMAGE_ARG_base_image=ubuntu/image.tar will set the base_image build arg to a local image reference for ubuntu/image.tar.

    The build arg must be declared in the Dockerfile in order to be used; for example:

    ARG base_image
    FROM ${base_image}
    
  • $IMAGE_PLATFORM: the target platform to build the image for. For example, IMAGE_PLATFORM=linux/arm64 will build the image for the Linux OS and arm64 architecture. By default, images are built for the platform of the worker the task is running on.

  • $LABEL_*: params prefixed with LABEL_ will be set as image labels. For example, LABEL_foo=bar will set the foo label to bar.

  • $LABELS_FILE (default empty): path to a file containing labels in the form foo=bar, one per line. Empty lines are skipped.

  • $TARGET (default empty): a target build stage to build, as named with the FROM … AS <NAME> syntax in your Dockerfile.

  • $TARGET_FILE (default empty): path to a file containing the name of the target build stage to build.

  • $ADDITIONAL_TARGETS (default empty): a comma-separated (,) list of additional target build stages to build.

  • $REGISTRY_MIRRORS (default empty): registry mirrors to use for docker.io.

  • $UNPACK_ROOTFS (default false): unpack the image as Concourse's image format (rootfs/, metadata.json) for use with the image task step option.

  • $OUTPUT_OCI (default false): outputs an OCI compliant image, allowing for multi-arch image builds when setting IMAGE_PLATFORM to multiple platforms. The image output format will be a directory when this flag is set to true.

  • $BUILDKIT_ADD_HOSTS (default empty): extra host definitions for buildkit to properly resolve custom hostnames. The value is a comma-separated (,) list of key-value pairs (using the syntax hostname=ip-address), each defining an IP address for resolving a custom hostname.
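To illustrate, several of the optional params above can be combined in a single task definition. All values below are placeholders:

```yaml
params:
  CONTEXT: my-repo
  TARGET: release
  BUILD_ARG_commit: abc123
  LABEL_maintainer: me@example.com
  IMAGE_PLATFORM: linux/amd64
  BUILDKIT_ADD_HOSTS: registry.local=10.0.0.5,mirror.local=10.0.0.6
```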

Note: this is the main pain point with reusable tasks - env vars are kind of an awkward way to configure a task. Once the RFC lands these will turn into a JSON structure similar to configuring params on a resource, and task params will become env instead.
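To make the env-var convention concrete, here is a rough sketch of how BUILD_ARG_-style params and a BUILD_ARGS_FILE might be translated into buildkit frontend options. This is illustrative only, not this task's actual implementation; the --opt build-arg: flag shape is assumed from buildctl's dockerfile frontend, and all names and values are placeholders.

```shell
# Hedged sketch: not the task's real code, just the translation idea.
set -eu

# A placeholder build-args file, one foo=bar pair per line.
printf 'FOO=bar\n\nBAZ=qux\n' > build-args.env

opts=""
while IFS= read -r line; do
  [ -z "$line" ] && continue               # empty lines are skipped
  opts="$opts --opt build-arg:$line"
done < build-args.env

# An env-prefixed param would be handled the same way:
BUILD_ARG_commit=abc123
opts="$opts --opt build-arg:commit=$BUILD_ARG_commit"

echo "$opts"
```

In the real task the resulting options would be passed along to buildkit; here they are simply echoed.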

inputs

There are no required inputs - your task should just list each artifact it needs as an input. Typically this is in close correlation with $CONTEXT:

params:
  CONTEXT: my-image

inputs:
- name: my-image

Should your build be dependent on multiple inputs, you may want to leave $CONTEXT as its default (.) and set an explicit path to the $DOCKERFILE:

params:
  DOCKERFILE: my-repo/Dockerfile

inputs:
- name: my-repo
- name: some-dependency

It might also make sense to place one input under another, like so:

params:
  CONTEXT: my-repo

inputs:
- name: my-repo
- name: some-dependency
  path: my-repo/some-dependency

Or, to fully rely on the default behavior and use path to wire up the context accordingly, you could set your primary context as path: . and set up any additional inputs underneath:

inputs:
- name: my-repo
  path: .
- name: some-dependency

outputs

A single output named image may be configured:

outputs:
- name: image

Use output_mapping to map this output to a different name in your build plan. This approach should be used if you're building multiple images in parallel so that they can have distinct names.

The output will contain the following files:

  • image.tar: the OCI image tarball. This tarball can be uploaded to a registry using the Registry Image resource.

  • digest: the digest of the OCI config. This file can be used to tag the image after it has been loaded with docker load, like so:

    docker load -i image/image.tar
    docker tag $(cat image/digest) my-name

If $UNPACK_ROOTFS is configured, the following additional entries will be created:

  • rootfs/*: the unpacked contents of the image's filesystem.

  • metadata.json: a JSON file containing the image's env and user configuration.

This is a Concourse-specific format to support using the newly built image for a subsequent task by pointing the task step's image option to the output, like so:

plan:
- task: build-image
  params:
    UNPACK_ROOTFS: true
  output_mapping: {image: my-built-image}
- task: use-image
  image: my-built-image

(The output_mapping here is just for clarity; alternatively you could just set image: image.)

Note: at some point Concourse will likely standardize on OCI instead.

caches

Caching can be enabled by caching the cache path on the task:

caches:
- path: cache

NOTE: the contents of --mount=type=cache directories are not cached, see #87

run

Your task should run the build executable:

run:
  path: build
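Putting the pieces above together, a minimal complete task config (assuming a single input named my-repo used as the build context) could look like this:

```yaml
platform: linux

image_resource:
  type: registry-image
  source:
    repository: concourse/oci-build-task

params:
  CONTEXT: my-repo

inputs:
- name: my-repo

outputs:
- name: image

caches:
- path: cache

run:
  path: build
```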

migrating from the docker-image resource

The docker-image resource was previously used for building and pushing a Docker image to a registry in one fell swoop.

The oci-build task, in contrast, only supports building images - it does not support pushing or even tagging the image. It can be used to build an image and use it for a subsequent task image without pushing it to a registry, by configuring $UNPACK_ROOTFS.

In order to push the newly built image, you can use the registry-image resource, like so:

resources:
- name: my-image-src
  type: git
  source:
    uri: https://github.com/...

- name: my-image
  type: registry-image
  source:
    repository: my-user/my-repo

jobs:
- name: build-and-push
  plan:
  # fetch repository source (containing Dockerfile)
  - get: my-image-src

  # build using `oci-build` task
  #
  # note: this task config could be pushed into `my-image-src` and loaded using
  # `file:` instead
  - task: build
    privileged: true
    config:
      platform: linux

      image_resource:
        type: registry-image
        source:
          repository: concourse/oci-build-task

      inputs:
      - name: my-image-src
        path: .

      outputs:
      - name: image

      run:
        path: build

  # push using `registry-image` resource
  - put: my-image
    params: {image: image/image.tar}

differences from builder task

The builder task was a stepping stone that led to the oci-build task. It is now deprecated. The transition should be relatively smooth, with the following differences:

  • The oci-build task does not support configuring $REPOSITORY or $TAG.
    • for running the image with docker, a digest file is provided which can be tagged with docker tag
    • for pushing the image, the repository and tag are configured in the registry-image resource
  • The oci-build task has a more efficient caching implementation. By using buildkit directly we can make use of its local cache exporter/importer, which doesn't require a separate translation step for saving into the task cache.
  • This task is written in Go instead of Bash, and has tests!

example

This repo contains an example.yml, which builds the image for the task itself:

fly -t dev execute -c example.yml -o image=. -p
docker load -i image.tar

That -p at the end is not a typo; it runs the task with elevated privileges.

oci-build-task's People

Contributors

bgandon, bradfordboyle, charles-dyfis-net, chenbh, clarafu, dackroyd, databus23, dependabot[bot], ei-grad, evanchaoli, favna, jmccann, mdhender, multimac, pn-santos, rafw87, rdrgmnzs, rliebz, simonxming, snarlysodboxer, spgreenberg, spire-allyjweir, taylorsilva, thetechnick, toc-me[bot], tomjw64, vito, xtremerui, yasn77, zozozoeli


oci-build-task's Issues

Is http proxy usage during build supported?

I wanted to switch from the docker-image resource to oci-build-task for image creation. It seems oci-build-task does not support use of an HTTP proxy during the build.

This is some example log output from a build of an Ubuntu-based image. The HTTP proxy is configured in Concourse, and usually all tasks have the http_proxy variables set.

#5 [2/10] RUN apt-get update --fix-missing && apt-get -y upgrade
#5 0.169 Err:1 http://archive.ubuntu.com/ubuntu bionic InRelease
#5 0.169   Could not resolve 'archive.ubuntu.com'
#5 0.169 Err:2 http://security.ubuntu.com/ubuntu bionic-security InRelease
#5 0.169   Could not resolve 'security.ubuntu.com'
#5 0.169 Err:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
#5 0.169   Could not resolve 'archive.ubuntu.com'
#5 0.170 Err:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
#5 0.170   Could not resolve 'archive.ubuntu.com'
#5 0.173 Reading package lists...
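One possible workaround, sketched here without having been verified against this task: since Dockerfile builds honor the predefined proxy build args, the proxy settings could be forwarded explicitly as build args (the proxy URL below is a placeholder):

```yaml
params:
  CONTEXT: my-image-src
  BUILD_ARG_http_proxy: http://proxy.internal:3128
  BUILD_ARG_https_proxy: http://proxy.internal:3128
  BUILD_ARG_no_proxy: localhost,127.0.0.1
```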

buildkit secrets not working? (help)

I am using the following task.yml

platform: linux
image_resource:
  type: registry-image
  source:
    repository: vito/oci-build-task
params:
  CONTEXT: workdir
  BUILDKIT_SECRET_mysecret: workdir/mysecret.txt
outputs:
  - name: image
  - name: workdir
run:
  path: sh
  args:
    - -euxo
    - pipefail
    - -c
    - |
      cat >workdir/Dockerfile <<EOF
      # syntax=docker/dockerfile:1.2

      FROM alpine

      # shows secret from default secret location:
      RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret

      # shows secret from custom secret location:
      RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar
      EOF

      echo -n "test value" >workdir/mysecret.txt

      build

When running with fly execute -c task.yml --privileged, I get the following output:

(click to expand)
initializing
selected worker: i-007d43aa9569b607e
INFO: found resource cache from local cache

selected worker: i-065a8ff9e1dc71e02
running sh -euxo pipefail -c cat >workdir/Dockerfile <<EOF
# syntax=docker/dockerfile:1.2

FROM alpine

# shows secret from default secret location:
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret

# shows secret from custom secret location:
RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar
EOF

echo -n "test value" >workdir/mysecret.txt

build

+ cat
+ echo -n 'test value'
+ build
INFO[0000] building image
#1 [internal] load build definition from Dockerfile
#1 sha256:480d890de9b69b5da8631b479d94af2b21331184cfdf12df2727992110a38743
#1 transferring dockerfile: 296B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 sha256:07db6c4d1fd4febdc3b0ff744acb853ba5289dd81706899585ee6ed1c4c4ecce
#2 transferring context: 2B done
#2 DONE 0.0s

#3 resolve image config for docker.io/docker/dockerfile:1.2
#3 sha256:b239a20f31d7f1e5744984df3d652780f1a82c37554dd73e1ad47c8eb05b0d69
#3 DONE 2.7s

#4 docker-image://docker.io/docker/dockerfile:1.2@sha256:e2a8561e419ab1ba6b2fe6cbdf49fd92b95912df1cf7d313c3e2230a333fdbcc
#4 sha256:37e0c519b0431ef5446f4dd0a4588ba695f961e9b0e800cd8c7f5ba6165af727
#4 resolve docker.io/docker/dockerfile:1.2@sha256:e2a8561e419ab1ba6b2fe6cbdf49fd92b95912df1cf7d313c3e2230a333fdbcc done
#4 sha256:3cc8e449ce9f6e0752ede8f50a7334bf0c7b2d24d76da2ffae7aa6a729dd1da4 0B / 9.64MB 0.2s
#4 sha256:3cc8e449ce9f6e0752ede8f50a7334bf0c7b2d24d76da2ffae7aa6a729dd1da4 9.64MB / 9.64MB 0.3s done
#4 extracting sha256:3cc8e449ce9f6e0752ede8f50a7334bf0c7b2d24d76da2ffae7aa6a729dd1da4
#4 extracting sha256:3cc8e449ce9f6e0752ede8f50a7334bf0c7b2d24d76da2ffae7aa6a729dd1da4 0.1s done
#4 DONE 0.4s

#5 [internal] load metadata for docker.io/library/alpine:latest
#5 sha256:d4fb25f5b5c00defc20ce26f2efc4e288de8834ed5aa59dff877b495ba88fda6
#5 DONE 1.3s

#6 [1/3] FROM docker.io/library/alpine@sha256:69e70a79f2d41ab5d637de98c1e0b055206ba40a8145e7bddb55ccc04e13cf8f
#6 sha256:c46b598ce45cd8cd40f4705fe76968d7d0b56fd99853cf12e4fad77079938f1d
#6 resolve docker.io/library/alpine@sha256:69e70a79f2d41ab5d637de98c1e0b055206ba40a8145e7bddb55ccc04e13cf8f done
#6 sha256:540db60ca9383eac9e418f78490994d0af424aab7bf6d0e47ac8ed4e2e9bcbba 2.81MB / 2.81MB 0.3s done
#6 extracting sha256:540db60ca9383eac9e418f78490994d0af424aab7bf6d0e47ac8ed4e2e9bcbba 0.1s done
#6 DONE 0.3s

#7 [2/3] RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
#7 sha256:a132a773ff5e6ff464dda89ce2cbabaf875360f594853fedc96bbd2e9e88edf0
#7 0.133 cat: can't open '/run/secrets/mysecret': No such file or directory
#7 ERROR: executor failed running [/bin/sh -c cat /run/secrets/mysecret]: exit code: 1
------
 > [2/3] RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret:
------
Dockerfile:6
--------------------
   4 |
   5 |     # shows secret from default secret location:
   6 | >>> RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
   7 |
   8 |     # shows secret from custom secret location:
--------------------
error: failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c cat /run/secrets/mysecret]: exit code: 1
FATA[0007] failed to build: build: exit status 1
FATA[0007] failed to run task: exit status 1
failed

As you can see, the secrets apparently are not being mounted inside:

#7 [2/3] RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
#7 sha256:a132a773ff5e6ff464dda89ce2cbabaf875360f594853fedc96bbd2e9e88edf0
#7 0.133 cat: can't open '/run/secrets/mysecret': No such file or directory

@multimac is there any extra configuration, perhaps inside of the Concourse deployment itself, that is required for this feature to work? Also, could you post the task.yml you used to test in #36?

unpack_rootfs param not causing the image to unpack

After the last step in my Dockerfile the task ends without "exporting to oci image format".

  - name: get_repo_and_build_image
    plan:
    - get: git_repo
      trigger: true
    - task: build
      privileged: true
      config:
        platform: linux

        image_resource:
          type: registry-image
          source:
            repository: vito/oci-build-task

        params:
          DOCKERFILE: git_repo/compose/Dockerfile
          CONTEXT: git_repo
          unpack_rootfs: true

        inputs:
        - name: git_repo

        outputs:
        - name: my_image

        run:
          path: build 
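For reference, the config above differs from the usage described earlier in two ways: the param is spelled UNPACK_ROOTFS (env vars are case-sensitive, so lowercase unpack_rootfs is ignored), and the output must be named image (or be mapped via output_mapping). A corrected sketch of the relevant portion:

```yaml
params:
  DOCKERFILE: git_repo/compose/Dockerfile
  CONTEXT: git_repo
  UNPACK_ROOTFS: true

outputs:
- name: image
```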

Feature request: Image Labels

When migrating our pipelines to use registry-image + oci-build-task, we could no longer add labels to our images at the time they are built.

It would be nice to support a feature similar to the docker-image-resource's labels or labels_file out params.

It seems like buildkit supports labeling through --opt, so there seems to be an easy avenue for making this happen.

Failed to build image in Concourse due to readonly fs

Hi,
Updated Concourse to version 6.7.5 and enabled containerd. Just after that, I was no longer able to build an image with oci-build-task.
Here is the code:

jobs:
- name: build-docs
  plan:
  - get: docs-git
    trigger: true
  - task: build-docs-container
    privileged: true
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: vito/oci-build-task
      caches:
      - path: cache
      inputs:
      - name: docs-git
      outputs:
      - name: image
      params:
        CONTEXT: docs-git
      run:
        path: build

This code was OK on Concourse 6.7.2; after upgrading to 6.7.5 and enabling containerd, I got this error:

#10 0.077 container_linux.go:367: starting container process caused: process_linux.go:340: applying cgroup configuration for process caused: mkdir /sys/fs/cgroup/cpuset/buildkit: read-only file system
#10 ERROR: executor failed running [/bin/sh -c apk -q add gcc musl-dev]: exit code: 1

It just tries to install some packages in an alpine image.
I found that this topic was discussed in a Concourse GitHub issue and it looked like a /dev/shm size problem. I checked our Concourse worker setup and there are already 64MB assigned to shared memory.

Could you look at this issue?

What I found:
concourse/concourse#6348
#38

$TARGET

i have a Dockerfile like:

FROM python AS resource
...

FROM resource AS unit_test
...

FROM resource
...
CMD ["python", " run sth ..."]

When I don't set TARGET in params, it skips building the unit_test stage.
When I set TARGET: unit_test in params, will it build to the end?
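For reference, selecting the stage follows the $TARGET param described above. Under standard multi-stage build semantics (assumed here), only the named stage and the stages it depends on are built, so building through to the final stage would mean leaving TARGET unset:

```yaml
params:
  TARGET: unit_test
```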

Image layer cache is not applied correctly

My Concourse pipeline:

- name: build-image
  plan:
  - get: code-on-develop
    trigger: true
  - config:
      caches:
      - path: cache
      container_limits: {}
      image_resource:
        source:
          repository: vito/oci-build-task
          tag: master
        type: registry-image
      inputs:
      - name: code-on-develop
      outputs:
      - name: image
      params:
        CONTEXT: code-on-develop
        TAG: latest
      platform: linux
      run:
        path: build
    privileged: true
    task: build-image-task
resource_types:
- name: git-multibranch
  source:
    repository: cfcommunity/git-multibranch-resource
  type: docker-image
resources:
- name: code-on-develop
  source:
    branch: develop-cache-debug
    uri: some url
  type: git-multibranch

My docker file:

FROM node:12.13.1-alpine as builder
WORKDIR /root/my-app
COPY package.json yarn.lock ./
RUN yarn install --no-optional --pure-lockfile
COPY . /root/my-app
RUN npm run build

FROM nginx:1.17.4-alpine

COPY --from=builder /root/my-app /usr/share/nginx/html
COPY entrypoint.sh /usr/local/bin
RUN apk -U --no-cache add gettext jq
ENTRYPOINT ["entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]

Current situation:

The Concourse job build-image will take 5 minutes to build without any caches. This is an expected behaviour when it builds for the very first time. From my understanding, buildkit will generate a cache directory for example at /tmp/build/d93f6c07/cache inside the container. Then since we have set up - config: caches: - path: cache, it will be saved to Concourse worker.

If the job is manually triggered again with the same git commit hash (means the source code has absolutely no changes), it only takes around 30 seconds to complete, as all image layers are well cached. And I believe the cache is loaded from Concourse worker to the container's directory such as /tmp/build/d93f6c07/cache.

To maximise the image layer cache we can utilise, we separate our yarn build before copying the entire source code, like COPY package.json yarn.lock ./ RUN yarn install --no-optional --pure-lockfile COPY . /root/my-app. Therefore as long as package.json and yarn.lock don't change, but the rest of the source code does, at least the image layers up to RUN yarn install --no-optional --pure-lockfile should be cached. Unfortunately sometimes it doesn't work: once the source code is touched, it breaks the image layer cache for yarn install, even though that layer comes earlier. However, the image layers from the runtime stage still work, just not from the builder stage.

To debug this issue, we hijacked into the running job container, tested by running buildctl locally, and it worked all right.

Do you have any ideas how else we can debug?

Using ADD with XZ compressed tarball fails

Apologies for a potentially slim bug report, but it would appear that oci-build-task only supports uncompressed tarballs.

This has caused a few days of banging my head against the wall trying to figure out why it fails.

I was building an image from scratch:

FROM scratch as build

ADD "./built_roots/base/rootfs.tar.xz" /

RUN apt-get update && \
    apt install -y ca-certificates dumb-init && \
    apt-get clean autoclean && \
    apt-get autoremove --yes && \
    rm -rf /var/lib/apt/lists/*

FROM scratch as main
COPY --from=build / /

The error output from running a task configured as such:

      - task:       build-base
        privileged: true
        config:
          platform: linux
          image_resource:
            type: registry-image
            source:
              repository: vito/oci-build-task

          params:
            TARGET:                  main
            CONTEXT:                 git-resource/base_images/debian-base

          inputs:
            - name: git-resource
            - name: built_roots
              path: git-resource/base_images/debian-base/built_roots

          outputs:
            - name: image-base

          run:
            path: build

was a variant of this:

selected worker: work-03
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 402B done
#2 DONE 4.4s

#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 5.0s

#3 [internal] load build context
#3 transferring context: 30.13MB 0.2s done
#3 DONE 1.4s

#4 [build 1/2] ADD ./built_roots/base/rootfs.tar.xz /
#4 DONE 1.2s

#5 [build 2/2] RUN apt-get update &&     apt install -y ca-certificates dum...
#5 0.900 container_linux.go:345: starting container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file or directory"
#5 ERROR: executor failed running [/bin/sh -c apt-get update &&     apt install -y ca-certificates dumb-init &&     apt-get clean autoclean &&     apt-get autoremove --yes &&     rm -rf /var/lib/apt/lists/* /etc/apt/apt.conf.d/01proxy]: buildkit-runc did not terminate sucessfully

------
 > [build 2/2] RUN apt-get update &&     apt install -y ca-certificates dumb-init &&     apt-get clean autoclean &&     apt-get autoremove --yes &&     rm -rf /var/lib/apt/lists/*:
------

error: failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to build LLB: executor failed running [/bin/sh -c apt-get update &&     apt install -y ca-certificates dumb-init &&     apt-get clean autoclean &&     apt-get autoremove --yes &&     rm -rf /var/lib/apt/lists/*]: buildkit-runc did not terminate sucessfully

FATA[0011] failed to build: build: exit status 1        
FATA[0011] failed to run task: exit status 1            

Unpacking the xz-compressed rootfs was enough to get this to build correctly.
This dockerfile worked just fine with normal docker build and buildctl build.

Can't upgrade/uninstall pip packages in Dockerfile

Hello,

Our team has recently been migrating to Concourse. I'm using oci-build-task to build my images and noticed that some of the builds fail when trying to upgrade some of the pip packages in two different layers of a Dockerfile. Fortunately, it's easy to reproduce with a Dockerfile like this one:


FROM ubuntu:bionic

RUN apt-get update && apt-get install -y python3-pip python-tox

RUN pip3 install --upgrade pip

# RUN pip3 install --upgrade six
RUN pip3 uninstall -y six

WORKDIR /opt/testdir

CMD ["sh", "-exc", "echo works"]

Here the "six" package is first installed by python-tox, and then I need to upgrade it in a subsequent step. However, I've intentionally commented out the upgrade step and replaced it with an uninstall -- this way we can clearly see where the problem is. With this setup the error message is:

#6 [3/5] RUN pip3 install --upgrade pip
#6 2.463 Collecting pip
#6 2.578   Downloading https://files.pythonhosted.org/packages/43/84/23ed6a1796480a6f1a2d38f2802901d078266bda38388954d01d3f2e821d/pip-20.1.1-py2.py3-none-any.whl (1.5MB)
#6 2.899 Installing collected packages: pip
#6 2.900   Found existing installation: pip 9.0.1
#6 2.903     Not uninstalling pip at /usr/lib/python3/dist-packages, outside environment /usr
#6 4.102 Successfully installed pip-20.1.1
#6 DONE 4.4s

#7 [4/5] RUN pip3 uninstall -y six
#7 1.026 Found existing installation: six 1.11.0
#7 1.028 Uninstalling six-1.11.0:
#7 1.423 ERROR: Exception:
#7 1.423 Traceback (most recent call last):
#7 1.423   File "/usr/lib/python3.6/shutil.py", line 550, in move
#7 1.423     os.rename(src, real_dst)
#7 1.423 OSError: [Errno 18] Invalid cross-device link: '/usr/lib/python3/dist-packages/six-1.11.0.egg-info' -> '/usr/lib/python3/dist-packages/~ix-1.11.0.egg-info'
#7 1.423
#7 1.423 During handling of the above exception, another exception occurred:
#7 1.423
#7 1.423 Traceback (most recent call last):
#7 1.423   File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 188, in _main
#7 1.423     status = self.run(options, args)
#7 1.423   File "/usr/local/lib/python3.6/dist-packages/pip/_internal/commands/uninstall.py", line 86, in run
#7 1.423     auto_confirm=options.yes, verbose=self.verbosity > 0,
#7 1.423   File "/usr/local/lib/python3.6/dist-packages/pip/_internal/req/req_install.py", line 676, in uninstall
#7 1.423     uninstalled_pathset.remove(auto_confirm, verbose)
#7 1.423   File "/usr/local/lib/python3.6/dist-packages/pip/_internal/req/req_uninstall.py", line 394, in remove
#7 1.423     moved.stash(path)
#7 1.423   File "/usr/local/lib/python3.6/dist-packages/pip/_internal/req/req_uninstall.py", line 283, in stash
#7 1.423     renames(path, new_path)
#7 1.423   File "/usr/local/lib/python3.6/dist-packages/pip/_internal/utils/misc.py", line 349, in renames
#7 1.423     shutil.move(old, new)
#7 1.423   File "/usr/lib/python3.6/shutil.py", line 562, in move
#7 1.423     rmtree(src)
#7 1.423   File "/usr/lib/python3.6/shutil.py", line 490, in rmtree
#7 1.423     onerror(os.rmdir, path, sys.exc_info())
#7 1.423   File "/usr/lib/python3.6/shutil.py", line 488, in rmtree
#7 1.423     os.rmdir(path)
#7 1.423 OSError: [Errno 22] Invalid argument: '/usr/lib/python3/dist-packages/six-1.11.0.egg-info'
#7 ERROR: executor failed running [/bin/sh -c pip3 uninstall -y six]: buildkit-runc did not terminate successfully
------
 > [4/5] RUN pip3 uninstall -y six:
------
error: failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c pip3 uninstall -y six]: buildkit-runc did not terminate successfully
FATA[0134] failed to build: build: exit status 1
FATA[0134] failed to run task: exit status 1
failed

I start the project on Concourse 5.8 with the following command:
fly -t rw execute -c test.yaml -i code="." --privileged

With the following test.yaml in the same dir as the Dockerfile:

---
platform: linux

image_resource:
  type: registry-image
  source:
    repository: vito/oci-build-task

inputs:
  - name: code

outputs:
  - name: image

caches:
  - path: cache

params:
  CONTEXT: "./code"

run:
  path: build

The version installed from apt-get is pip 9.x, which works fine; however, it's now deprecated. Our internal Artifactory server insists on a newer version of pip. However, with every version newer than 9.x the previous error is displayed.

The interesting thing is that I tried to build this image on my Mac:
docker build -t test .
And with Buildkit as well:
DOCKER_BUILDKIT=1 docker build -t test .

Both of these commands succeed on the Mac.

Moreover, this Dockerfile builds fine on Concourse with DCinD (this task: https://github.com/meAmidos/dcind).

That being said, I suspect that the problem is in pip! There is even a bug report opened on GitHub for this from last year:
pypa/pip#6943

However, the reason I'm writing the bug report here is that I don't really know what is going on under the hood, either in oci-build-task or in Concourse. Or maybe buildkit is confusing something? Or maybe buildkit in a Docker container is complicating stuff? The thing is that these Docker image builds fail only in this configuration: Concourse + oci-build-task. Apparently, python/pip sees the specific file as located on a different FS.

Regards,
Kristiyan

Tag for recent 0.7.0 bump?

Hi, it seems a tag wasn't created after the buildkit version bump to 0.7.0.

I ran into an issue when using 0.2 that didn't happen before; tried the master tag and it worked fine 👍 (I assume master has buildkit 0.7.0?)

@vito, would it be possible to tag and push a version of the resource with buildkit 0.7.0, or is there some known issue preventing this? (I can depend on the master tag for now, but it would be better to depend on an immutable tag if possible.)

In case anyone can decipher this the error is:

#4 ERROR: failed to extract layer sha256:8648bd3934e1b3cf6a6cc225776bfdf7381d1bff8c1ce9d20e9fe65b300d14d5: mount callback failed on /tmp/containerd-mount818919733: mkdir /tmp/containerd-mount818919733/var/run: invalid argument

The exact same build works OK with the 0.1 and master tags of vito/oci-build-task, but throws that error with 0.2.

Building a task with an output named anything but "image" fails at exporting the built tarball.

This works:

      - task:       build-base
        privileged: true
        config:
          platform: linux
          image_resource:
            type: registry-image
            source:
              repository: vito/oci-build-task

          params:
            TARGET:                  main
            CONTEXT: .
            DOCKERFILE:              git-resource/base_images/debian-base/Dockerfile
            BUILD_ARG_APT_CACHE_URL: http://10.10.1.1:3142
            BUILD_ARG_ROOTFS_FILE:   ./built_roots/base/rootfs.tar

          inputs:
            - name: git-resource
            - name: built_roots

          outputs:
            - name: image

          run:
            path: build

Output:

#2 [internal] load .dockerignore
#2 DONE 0.0s

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 489B done
#1 DONE 1.6s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 2.6s

#3 [internal] load build context
#3 transferring context: 119.25MB 0.8s done
#3 DONE 1.8s

#4 [build 1/2] ADD ./built_roots/base/rootfs.tar /
#4 DONE 2.9s

#5 [build 2/2] RUN printf "Acquire::http { Proxy "%s"; }; " "http://10.10.1...
#5 1.125 Get:1 http://deb.debian.org/debian buster InRelease [121 kB]
#5 1.126 Get:2 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
#5 1.134 Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
#5 1.221 Get:4 http://security.debian.org/debian-security buster/updates/main amd64 Packages [266 kB]
#5 1.307 Get:5 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
#5 1.402 Get:6 http://deb.debian.org/debian buster-updates/main amd64 Packages [7848 B]
#5 2.184 Fetched 8420 kB in 1s (7886 kB/s)
#5 2.184 Reading package lists...
#5 2.612 
#5 2.612 WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
#5 2.612 
#5 2.620 Reading package lists...
#5 3.040 Building dependency tree...
#5 3.130 Reading state information...
#5 3.208 The following additional packages will be installed:
#5 3.208   libssl1.1 openssl
#5 3.222 The following NEW packages will be installed:
#5 3.222   ca-certificates dumb-init libssl1.1 openssl
#5 3.238 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
#5 3.238 Need to get 2560 kB of archives.
#5 3.238 After this operation, 6115 kB of additional disk space will be used.
#5 3.238 Get:1 http://security.debian.org/debian-security buster/updates/main amd64 libssl1.1 amd64 1.1.1d-0+deb10u4 [1538 kB]
#5 3.238 Get:2 http://deb.debian.org/debian buster-updates/main amd64 ca-certificates all 20200601~deb10u2 [166 kB]
#5 3.240 Get:3 http://deb.debian.org/debian buster/main amd64 dumb-init amd64 1.2.2-1.1 [13.6 kB]
#5 3.250 Get:4 http://security.debian.org/debian-security buster/updates/main amd64 openssl amd64 1.1.1d-0+deb10u4 [843 kB]
#5 4.093 debconf: delaying package configuration, since apt-utils is not installed
#5 4.111 Fetched 2560 kB in 0s (120 MB/s)
#5 4.759 Selecting previously unselected package libssl1.1:amd64.
(Reading database ... 6677 files and directories currently installed.)
#5 4.769 Preparing to unpack .../libssl1.1_1.1.1d-0+deb10u4_amd64.deb ...
#5 5.042 Unpacking libssl1.1:amd64 (1.1.1d-0+deb10u4) ...
#5 5.842 Selecting previously unselected package openssl.
#5 5.844 Preparing to unpack .../openssl_1.1.1d-0+deb10u4_amd64.deb ...
#5 5.933 Unpacking openssl (1.1.1d-0+deb10u4) ...
#5 6.975 Selecting previously unselected package ca-certificates.
#5 6.977 Preparing to unpack .../ca-certificates_20200601~deb10u2_all.deb ...
#5 7.058 Unpacking ca-certificates (20200601~deb10u2) ...
#5 8.117 Selecting previously unselected package dumb-init.
#5 8.119 Preparing to unpack .../dumb-init_1.2.2-1.1_amd64.deb ...
#5 8.183 Unpacking dumb-init (1.2.2-1.1) ...
#5 9.015 Setting up dumb-init (1.2.2-1.1) ...
#5 9.508 Setting up libssl1.1:amd64 (1.1.1d-0+deb10u4) ...
#5 10.26 debconf: unable to initialize frontend: Dialog
#5 10.26 debconf: (TERM is not set, so the dialog frontend is not usable.)
#5 10.26 debconf: falling back to frontend: Readline
#5 10.26 debconf: unable to initialize frontend: Readline
#5 10.26 debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.28.1 /usr/local/share/perl/5.28.1 /usr/lib/x86_64-linux-gnu/perl5/5.28 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.28 /usr/share/perl/5.28 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
#5 10.26 debconf: falling back to frontend: Teletype
#5 11.55 Setting up openssl (1.1.1d-0+deb10u4) ...
#5 12.47 Setting up ca-certificates (20200601~deb10u2) ...
#5 12.85 debconf: unable to initialize frontend: Dialog
#5 12.85 debconf: (TERM is not set, so the dialog frontend is not usable.)
#5 12.85 debconf: falling back to frontend: Readline
#5 12.85 debconf: unable to initialize frontend: Readline
#5 12.85 debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.28.1 /usr/local/share/perl/5.28.1 /usr/lib/x86_64-linux-gnu/perl5/5.28 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.28 /usr/share/perl/5.28 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
#5 12.85 debconf: falling back to frontend: Teletype
#5 13.14 Updating certificates in /etc/ssl/certs...
#5 13.59 137 added, 0 removed; done.
#5 14.53 Processing triggers for libc-bin (2.28-10) ...
#5 14.91 Processing triggers for ca-certificates (20200601~deb10u2) ...
#5 15.14 Updating certificates in /etc/ssl/certs...
#5 15.51 0 added, 0 removed; done.
#5 15.51 Running hooks in /etc/ca-certificates/update.d...
#5 15.51 done.
#5 15.86 Reading package lists...
#5 16.29 Building dependency tree...
#5 16.40 Reading state information...
#5 16.50 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
#5 DONE 17.4s

#6 [main 1/1] COPY --from=build / /
#6 DONE 3.2s

#7 exporting to oci image format
#7 exporting layers
#7 exporting layers 8.4s done
#7 exporting manifest sha256:79c0e05d1dc08376d0272c8a6e6370b6535644b2a1e5795c74769ce4a20c4d70
#7 exporting manifest sha256:79c0e05d1dc08376d0272c8a6e6370b6535644b2a1e5795c74769ce4a20c4d70 0.7s done
#7 exporting config sha256:87c8335f99eab45b7282223a29e2c9b2966510aad198916c782033842bbc33b4
#7 exporting config sha256:87c8335f99eab45b7282223a29e2c9b2966510aad198916c782033842bbc33b4 1.1s done
#7 sending tarball
#7 sending tarball 0.3s done
#7 DONE 11.8s

This does not:

      - task:       build-base
        privileged: true
        config:
          platform: linux
          image_resource:
            type: registry-image
            source:
              repository: vito/oci-build-task

          params:
            TARGET:                  main
            CONTEXT: .
            DOCKERFILE:              git-resource/base_images/debian-base/Dockerfile
            BUILD_ARG_APT_CACHE_URL: http://10.10.1.1:3142
            BUILD_ARG_ROOTFS_FILE:   ./built_roots/base/rootfs.tar

          inputs:
            - name: git-resource
            - name: built_roots

          outputs:
            - name: image_base

          run:
            path: build

Output:

selected worker: work-01
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 ...

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 489B done
#1 DONE 1.7s

#2 [internal] load .dockerignore
#2 DONE 2.3s

#3 [internal] load build context
#3 transferring context: 119.25MB 0.7s done
#3 DONE 3.7s

#4 [build 1/2] ADD ./built_roots/base/rootfs.tar /
#4 DONE 2.7s

#5 [build 2/2] RUN printf "Acquire::http { Proxy "%s"; }; " "http://10.10.1...
#5 1.128 Get:1 http://deb.debian.org/debian buster InRelease [121 kB]
#5 1.130 Get:2 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
#5 1.139 Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
#5 1.236 Get:4 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
#5 1.333 Get:5 http://deb.debian.org/debian buster-updates/main amd64 Packages [7848 B]
#5 1.401 Get:6 http://security.debian.org/debian-security buster/updates/main amd64 Packages [266 kB]
#5 2.159 Fetched 8420 kB in 1s (8093 kB/s)
#5 2.159 Reading package lists...
#5 2.580 
#5 2.580 WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
#5 2.580 
#5 2.587 Reading package lists...
#5 3.013 Building dependency tree...
#5 3.104 Reading state information...
#5 3.178 The following additional packages will be installed:
#5 3.178   libssl1.1 openssl
#5 3.195 The following NEW packages will be installed:
#5 3.196   ca-certificates dumb-init libssl1.1 openssl
#5 3.215 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
#5 3.215 Need to get 2560 kB of archives.
#5 3.215 After this operation, 6115 kB of additional disk space will be used.
#5 3.215 Get:1 http://security.debian.org/debian-security buster/updates/main amd64 libssl1.1 amd64 1.1.1d-0+deb10u4 [1538 kB]
#5 3.215 Get:2 http://deb.debian.org/debian buster-updates/main amd64 ca-certificates all 20200601~deb10u2 [166 kB]
#5 3.216 Get:3 http://deb.debian.org/debian buster/main amd64 dumb-init amd64 1.2.2-1.1 [13.6 kB]
#5 3.230 Get:4 http://security.debian.org/debian-security buster/updates/main amd64 openssl amd64 1.1.1d-0+deb10u4 [843 kB]
#5 3.762 debconf: delaying package configuration, since apt-utils is not installed
#5 3.779 Fetched 2560 kB in 0s (116 MB/s)
#5 4.173 Selecting previously unselected package libssl1.1:amd64.
(Reading database ... 6677 files and directories currently installed.)
#5 4.183 Preparing to unpack .../libssl1.1_1.1.1d-0+deb10u4_amd64.deb ...
#5 4.557 Unpacking libssl1.1:amd64 (1.1.1d-0+deb10u4) ...
#5 5.724 Selecting previously unselected package openssl.
#5 5.726 Preparing to unpack .../openssl_1.1.1d-0+deb10u4_amd64.deb ...
#5 5.898 Unpacking openssl (1.1.1d-0+deb10u4) ...
#5 7.835 Selecting previously unselected package ca-certificates.
#5 7.835 Preparing to unpack .../ca-certificates_20200601~deb10u2_all.deb ...
#5 8.057 Unpacking ca-certificates (20200601~deb10u2) ...
#5 9.442 Selecting previously unselected package dumb-init.
#5 9.446 Preparing to unpack .../dumb-init_1.2.2-1.1_amd64.deb ...
#5 9.542 Unpacking dumb-init (1.2.2-1.1) ...
#5 10.45 Setting up dumb-init (1.2.2-1.1) ...
#5 11.02 Setting up libssl1.1:amd64 (1.1.1d-0+deb10u4) ...
#5 11.51 debconf: unable to initialize frontend: Dialog
#5 11.51 debconf: (TERM is not set, so the dialog frontend is not usable.)
#5 11.51 debconf: falling back to frontend: Readline
#5 11.51 debconf: unable to initialize frontend: Readline
#5 11.51 debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.28.1 /usr/local/share/perl/5.28.1 /usr/lib/x86_64-linux-gnu/perl5/5.28 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.28 /usr/share/perl/5.28 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
#5 11.51 debconf: falling back to frontend: Teletype
#5 12.41 Setting up openssl (1.1.1d-0+deb10u4) ...
#5 13.31 Setting up ca-certificates (20200601~deb10u2) ...
#5 13.53 debconf: unable to initialize frontend: Dialog
#5 13.53 debconf: (TERM is not set, so the dialog frontend is not usable.)
#5 13.53 debconf: falling back to frontend: Readline
#5 13.53 debconf: unable to initialize frontend: Readline
#5 13.53 debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.28.1 /usr/local/share/perl/5.28.1 /usr/lib/x86_64-linux-gnu/perl5/5.28 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.28 /usr/share/perl/5.28 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
#5 13.53 debconf: falling back to frontend: Teletype
#5 13.82 Updating certificates in /etc/ssl/certs...
#5 14.28 137 added, 0 removed; done.
#5 15.08 Processing triggers for libc-bin (2.28-10) ...
#5 15.54 Processing triggers for ca-certificates (20200601~deb10u2) ...
#5 15.76 Updating certificates in /etc/ssl/certs...
#5 16.15 0 added, 0 removed; done.
#5 16.15 Running hooks in /etc/ca-certificates/update.d...
#5 16.15 done.
#5 16.60 Reading package lists...
#5 17.03 Building dependency tree...
#5 17.12 Reading state information...
#5 17.21 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
#5 DONE 18.5s

#6 [main 1/1] COPY --from=build / /
#6 DONE 3.0s

TL;DR:
Changing outputs from:

          outputs:
            - name: image

to

          outputs:
            - name: image_base

results in no execution of the "exporting to oci image format" step.

Separate output for rootfs?

When I added the $UNPACK_ROOTFS option to the builder-task, after a short period of use I created this PR (vmware-archive/builder-task#26) to move the rootfs to a separate output. The reason was better performance when passing outputs, as subsequent tasks most likely never need both the OCI tarball and the rootfs.
Was this dropped deliberately or just accidentally? Is it worth filing a PR for that?

Cannot hijack task container that is based on oci-build-task

The problem is as described in the title.

Task file:

---
platform: linux

inputs:
  - name: pull-request
    path: .

image_resource:
  type: registry-image
  source:
    repository: vito/oci-build-task
    tag: 0.4.0

caches:
  - path: cache

run:
  path: build

When the task failed I tried to hijack the container and got this:

[Screenshot 2020-12-10 at 13:30:02]

Looks like bash is missing from $PATH.

README.md docker load instructions wrong?

See concourse/registry-image-resource#72 for a description of the issue.

In the README.md this is mentioned:

  • digest: the digest of the OCI config. This file can be used to tag the image after it has been loaded with docker load, like so:
    docker load -i image/image.tar
    docker tag $(cat image/digest) my-name

I'm wondering if this is actually correct. Shouldn't the format be

docker tag $image_repo@$(cat image/digest) my-name:my-tag

?

Also, even if one uses that format, docker doesn't seem to save/load the image digest, which means this won't work at all (see the linked issue for a workaround).
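For what it's worth, the workaround amounts to tagging by the image ID that `docker load` prints, since the OCI config digest doesn't survive the round trip. A rough sketch: the sample line below is a stand-in for the output of `docker load -i image/image.tar`, and the digest in it is purely illustrative.

```shell
# Parse the image ID out of docker load's "Loaded image ID: sha256:..." line.
# sample_output stands in for `docker load -i image/image.tar` here.
sample_output="Loaded image ID: sha256:87c8335f99eab45b7282223a29e2c9b2966510aad198916c782033842bbc33b4"
image_id="$(printf '%s\n' "$sample_output" | awk '/Loaded image/ { print $NF }')"
echo "$image_id"
# then: docker tag "$image_id" my-name:my-tag
```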

I won't mind submitting a PR to try and fix the docs, just wanted to check I'm not missing anything first.

Building more than one image

Hi,

My repository contains 4 projects that generate 4 jars. The Dockerfile is the same in all 4 projects.

Could I build more than one image in the same task?
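Not an official answer, but since the task produces one image per run, one pattern is to invoke it once per project, varying CONTEXT and using output_mapping to keep the outputs apart. A sketch, where the task config file and project names are placeholders:

```yaml
# Hypothetical: reuse one task config, vary CONTEXT per project,
# and rename the "image" output so the tarballs don't collide.
- task: build-project-a
  privileged: true
  file: repo/ci/oci-build.yml
  params: {CONTEXT: repo/project-a}
  output_mapping: {image: image-a}

- task: build-project-b
  privileged: true
  file: repo/ci/oci-build.yml
  params: {CONTEXT: repo/project-b}
  output_mapping: {image: image-b}
```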

Configurable CA certs for fetching `FROM` images

Dependent on moby/moby#38303

Devil's advocate for not implementing this: there is overlap with the registry-image resource once it supports TLS configuration and once #1 lands. Should we encourage explicitly tracking FROM images as resources instead of supporting the same overlapping feature set? Tasks shouldn't be fetching things willy-nilly anyway. Tracking it as a resource would provide caching and explicit version management.

min cache setting results in poor caching

Hi,

I'm wondering if the cache export with mode=min was set deliberately here:
https://github.com/vito/oci-build-task/blob/master/task.go#L60

In my observation this leads to almost no cache effect in my multi-stage builds of Go binaries, as only the layers of the final image are actually cached. In general, in a multi-stage build the final image does not consist of many layers (e.g. in the case of Go: mostly copying the built binary into a very small base image).
Changing the setting to max and rebuilding the image leads to much better cache results (e.g. the expensive steps in the intermediate builder image being cached).

Can we change the setting to max or should we maybe make it configurable?

Is there support for a privately hosted registry?

Hello,

I did not know where else to ask; hopefully this is okay here.

I would love to know if there is support for pushing to a "private" registry. From my understanding the docker-image resource relies on a repository, but all the examples use Docker Hub.

Would it be the registry-image resource that I would be pushing with? If so, I can ask this over there.

Thanks!!

multi stage docker file ignores some stages

Not sure if this is a buildkit issue or an issue with this task.

Here is a (stripped down pseudo) Dockerfile that, when built using this task, ends up only building the "build stage" and the "run stage" and ignoring the "test stage" entirely.

When building with docker, it correctly builds all stages.

I suspect either this task or buildkit is not correctly handling the FROM <earlier stage> as foo step

# build stage
# ==========
FROM python:3.8.0 as build
ENV DOCKER_STAGE=build PYTHONUNBUFFERED=1

WORKDIR /build

RUN do-build

# run stage
# ===========
FROM python:3.8.0-slim as run
ENV DOCKER_STAGE=run

CMD ["foo"]

# test stage
# =========
FROM run as test

WORKDIR /workspace

COPY tests/ tests/

RUN python -m unittest discover

# output stage
# ===========
FROM run
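A possible explanation, for what it's worth: BuildKit builds only the stages that the target stage transitively depends on, whereas the classic docker builder ran every stage in order. Since no later stage references test, BuildKit can legitimately skip it. One hedged workaround is to add an artificial dependency from the output stage; the copied path here is arbitrary:

```dockerfile
# output stage
# ===========
FROM run

# Hypothetical workaround: reference a file from the test stage so that
# BuildKit's dependency graph includes it (and thus runs its RUN steps).
COPY --from=test /workspace/tests/ /tmp/tests/
```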

build issue "operation not permitted"

#2 [internal] load .dockerignore
#2 ...

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 59B done
#1 DONE 0.1s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.2s
error: failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: failed to mount /tmp/buildkit-mount291075877: [{Type:bind Source:/tmp/buildkitd/runc-native/snapshots/snapshots/1 Options:[rbind ro]}]: operation not permitted

Feature request: support the ability to configure a registry mirror

Thanks for your work on oci-build-task! Is it worth exposing to users the ability to configure a registry mirror with a pull-through cache?

Why would this be helpful?

Docker Hub will begin enforcing strict rate limits on November 1. Concourse issue 6073 seeks to develop a plan for dealing with the rate limits; the discussion around it mentions perhaps providing Concourse users the ability to configure a global registry mirror with a pull-through cache.

However, I don't believe the Concourse issue 6073 discussion currently accounts for the rate limiting that will be imposed on users of the oci-build-task whose Dockerfiles leverage FROM directives pulling images from hub.docker.com. If I understand correctly, oci-build-task users would also benefit from the ability to configure the task to use a registry mirror with a pull-through cache, such that directives like FROM foo/bar would pull foo/bar from the configured mirror rather than the rate-limited hub.docker.com.

Context

My team currently operates a rather large Concourse instance serving hundreds of teams and thousands of developers, many of whom use the oci-build-task in concert with the registry-image-resource to build and publish container images. I believe these users would greatly benefit from the ability to configure a registry mirror in their pipelines' use of oci-build-task.
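For reference, BuildKit itself can already be pointed at a mirror through its buildkitd.toml configuration, so the request essentially amounts to exposing something like the following (a sketch; mirror.example.com is a placeholder host):

```toml
# buildkitd.toml fragment: route docker.io pulls through a pull-through cache
[registry."docker.io"]
  mirrors = ["mirror.example.com"]
```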

buildkit-runc did not terminate successfully

With a generic Angular application, given this task definition:

platform: linux

image_resource:
  type: registry-image
  source:
    repository: vito/oci-build-task

inputs:
  - name: repo

outputs:
  - name: image

params:
  DOCKERFILE: repo/src/subproject/deployments/docker/Dockerfile
  CONTEXT: repo/src/subproject

run:
  path: build

With this Dockerfile:

FROM node:12 as builder

WORKDIR /home/node/app
COPY . .

ARG VAR_URL=${VAR_URL}
ENV API_URL=${VAR_URL}

RUN npm install --loglevel=warn
RUN npm install -g --save-dev @types/node
RUN npm install -g @angular/cli
RUN $(npm bin)/ng build -

# Stage 2
FROM nginx:1.17.1-alpine

COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=builder /home/node/app/dist/subproject/ /usr/share/nginx/html
EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]

And this is a generic pipeline definition:

jobs:
  - name: prepare-container-image
    plan:
      - get: repo
        trigger: true
      - task: build-image
        privileged: true
        file: repo/src/subproject/ci/tasks/build-repo-image.yml

This is the error I receive:

#10 [builder 3/7] COPY . .
#10 DONE 0.3s

#11 [builder 4/7] RUN npm install --loglevel=warn
#11 0.247 container_linux.go:367: starting container process caused: process_linux.go:326: applying cgroup configuration for process caused: mkdir /sys/fs/cgroup/cpuset/buildkit: read-only file system
#11 ERROR: executor failed running [/bin/sh -c npm install --loglevel=warn]: buildkit-runc did not terminate successfully
------
> [builder 4/7] RUN npm install --loglevel=warn:
------
error: failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c npm install --loglevel=warn]: buildkit-runc did not terminate successfully
FATA[0027] failed to build: build: exit status 1
FATA[0027] failed to run task: exit status 1

Additional Info

  • Concourse v6.6.0
  • Manual worker is using runc, Helm-deployed worker is using the default.
  • Builds a different multi-stage image without issues.
  • The workers are not out of disk space.
  • This error happens in Helm-deployed Concourse workers and manually deployed Concourse workers.
  • No other steps fail.
  • The Docker image builds locally.

Failing to load IMAGE_ARG_*

When running the following task:

      - get: content-source
      - get: node-image
      - get: hugo-builder
        params:
          format: oci
      - get: nginx
        params:
          format: oci
      - get: ecr-registry
        params:
          format: oci
      - task: build
        privileged: true
        config:
          platform: linux
          image_resource:
            type: registry-image
            source:
              repository: vito/oci-build-task
              tag: master
          inputs:
            - name: hugo-builder
              path: modified-sources/hugo-builder
            - name: nginx
              path: modified-sources/nginx
            - name: ecr-registry
              path: modified-sources/ecr-registry
            - name: modified-sources
          outputs:
            - name: image
          params:
            CONTEXT: ecr-registry
            IMAGE_ARG_hugo_image: "hugo-builder/image.tar"
            IMAGE_ARG_nginx_image: "nginx/image.tar"
            DOCKERFILE: Dockerfile.staging
          caches:
            - path: cache
          run:
            dir: modified-sources
            path: build

I get the following failure:

Dockerfile.staging:1
--------------------
   1 | >>> FROM ${hugo_image}
   2 |     COPY ./package.json ./package.json
   3 |     COPY ./package-lock.json ./package-lock.json
--------------------
error: failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to create LLB definition: base name (${hugo_image}) should not be blank
FATA[0000] failed to build: build: exit status 1        
FATA[0000] failed to run task: exit status 1     

Wrong diff ID calculated error when building container image

I am getting the following error when trying to build an image:

error: failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0:
 failed to build LLB: wrong diff id calculated on extraction "sha256:d5d063acba121b99927d52088ad7dfd9ee982880626fb10e1aef5f3ead99892d"

This is the pipeline configuration:

resources:
  - name: code-repo
    type: git
    source:
      branch: master
      uri: https://github.com/alexbrand/spring-boot-rest-example.git

  - name: container-image
    type: registry-image
    source:
      repository: ((docker-hub-account))/spring-boot-rest-example
      username: ((docker-hub-user))
      password: ((docker-hub-pass))

jobs:
  - name: build-push-image
    plan:
    - get: code-repo
      trigger: true

    - task: build
      privileged: true
      config:
        platform: linux
        image_resource:
          type: registry-image
          source:
            repository: vito/oci-build-task
        inputs:
        - name: code-repo
          path: .
        outputs:
        - name: image
        run:
          path: build

    - put: container-image
      params:
        additional_tags: code-repo/.git/short_ref
        image: image/image.tar

Full build logs:

fetching vito/oci-build-task@sha256:cfb2983956145f54a4996c2aff5fc598856c8722922a6e73f9ebfa3d9b3f9813
4fe2ade4980c [==========================================] 2.1MiB/2.1MiB
23e16c159a3e [==========================================] 7.5MiB/7.5MiB
c455d41ca53a [==============================================] 899b/899b
65f69dce8edb [========================================] 22.9MiB/22.9MiB
3df4123dfd16 [==========================================] 2.1MiB/2.1MiB
c266d1ba4b08 [==========================================] 1.7MiB/1.7MiB
6a3b4b7d898b [==============================================] 744b/744b
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 859B done
#2 DONE 0.0s

#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

#4 [internal] load metadata for docker.io/library/maven:3.6.3-jdk-11-slim
#4 ...

#3 [internal] load metadata for gcr.io/distroless/java:11
#3 DONE 0.6s

#4 [internal] load metadata for docker.io/library/maven:3.6.3-jdk-11-slim
#4 DONE 0.7s

#8 [internal] load build context
#8 transferring context: 39.90kB done
#8 DONE 0.0s

#6 [maven_tool_chain_cache 1/8] FROM docker.io/library/maven:3.6.3-jdk-11-s...
#6 resolve docker.io/library/maven:3.6.3-jdk-11-slim@sha256:f53c375e8391df1b005aaa5d59145d457e8cf3fd2ba4b562baeef44a471323fa done
#6 sha256:bc51dd8edc1b1132bb97827ed6bd16aac332b03fb03d4c02d2088067a5fbb499 56.44kB / 27.09MB 0.1s
#6 sha256:186d54f60a04eacc9acb9f6fdede650475e3c231ec231635607b7df11ec34063 0B / 9.58MB 0.1s
#6 sha256:27378801d6060c82aa6e8c5bc80672354f80a9a3a50371fa790876d5258adb1f 360B / 360B 0.1s done
#6 sha256:adf4d10a782b71ea3c226ce20c1acdfb0aeefb137fb5c5fa3100aaa275ec71cd 210B / 210B 0.1s done
#6 sha256:2a41cd9e737b0a2b6625263325f462a8991a0e66eaa35dc73ef9c26f58dd6608 9.49kB / 9.49kB done
#6 sha256:d1d06863bb82bdc3f4cd17d49c0dee8720f91c4d9e40cce1b4a691f9459266be 0B / 3.25MB 0.1s
#6 sha256:7c35c75d6bdb8ca6f0ab362d68d1960320b4d16c43386b76993970907198bfde 0B / 2.61MB 0.1s
#6 sha256:5c2041c657279f379fd8b1ad2e608910966ae0c123d3eeeb1ffd8d0429624fba 0B / 196.34MB 0.1s
#6 sha256:f53c375e8391df1b005aaa5d59145d457e8cf3fd2ba4b562baeef44a471323fa 549B / 549B done
#6 sha256:82e384dd71920dfb1b098549272bf3a348bd242d1afdd0baff1507ced69c77a2 2.00kB / 2.00kB done
#6 sha256:50115f90b29477cffbad6f7c4e384bc901d9a9b24d511f8b60ed8de4b9ecb6c2 0B / 856B 0.1s
#6 sha256:bc51dd8edc1b1132bb97827ed6bd16aac332b03fb03d4c02d2088067a5fbb499 3.24MB / 27.09MB 0.3s
#6 sha256:186d54f60a04eacc9acb9f6fdede650475e3c231ec231635607b7df11ec34063 4.03MB / 9.58MB 0.3s
#6 sha256:d1d06863bb82bdc3f4cd17d49c0dee8720f91c4d9e40cce1b4a691f9459266be 2.78MB / 3.25MB 0.3s
#6 sha256:7c35c75d6bdb8ca6f0ab362d68d1960320b4d16c43386b76993970907198bfde 2.61MB / 2.61MB 0.3s done
#6 sha256:50115f90b29477cffbad6f7c4e384bc901d9a9b24d511f8b60ed8de4b9ecb6c2 856B / 856B 0.1s done
#6 sha256:bc51dd8edc1b1132bb97827ed6bd16aac332b03fb03d4c02d2088067a5fbb499 9.49MB / 27.09MB 0.6s
#6 sha256:186d54f60a04eacc9acb9f6fdede650475e3c231ec231635607b7df11ec34063 9.58MB / 9.58MB 0.5s done
#6 sha256:d1d06863bb82bdc3f4cd17d49c0dee8720f91c4d9e40cce1b4a691f9459266be 3.25MB / 3.25MB 0.3s done
#6 sha256:5c2041c657279f379fd8b1ad2e608910966ae0c123d3eeeb1ffd8d0429624fba 17.33MB / 196.34MB 0.6s
#6 sha256:bc51dd8edc1b1132bb97827ed6bd16aac332b03fb03d4c02d2088067a5fbb499 22.08MB / 27.09MB 0.9s
#6 sha256:5c2041c657279f379fd8b1ad2e608910966ae0c123d3eeeb1ffd8d0429624fba 31.66MB / 196.34MB 0.9s
#6 sha256:bc51dd8edc1b1132bb97827ed6bd16aac332b03fb03d4c02d2088067a5fbb499 27.09MB / 27.09MB 1.0s
#6 sha256:bc51dd8edc1b1132bb97827ed6bd16aac332b03fb03d4c02d2088067a5fbb499 27.09MB / 27.09MB 1.1s done
#6 sha256:5c2041c657279f379fd8b1ad2e608910966ae0c123d3eeeb1ffd8d0429624fba 65.92MB / 196.34MB 1.3s
#6 sha256:5c2041c657279f379fd8b1ad2e608910966ae0c123d3eeeb1ffd8d0429624fba 83.71MB / 196.34MB 1.5s
#6 ...

#5 [stage-1 1/2] FROM gcr.io/distroless/java:11@sha256:b715126ebd36e5d5c2fd...
#5 resolve gcr.io/distroless/java:11@sha256:b715126ebd36e5d5c2fd730f46a5b3c3b760e82dc18dffff7f5498d0151137c9 done
#5 sha256:b715126ebd36e5d5c2fd730f46a5b3c3b760e82dc18dffff7f5498d0151137c9 1.16kB / 1.16kB done
#5 sha256:c5806e11a467bea9d9206123cee9e4c1c9fae3ceed1629abfe080166de318f38 63.52MB / 63.52MB 1.4s done
#5 sha256:24f0c933cbef83faee52f82c7f889c727b1ece5123b92d036c52fa865480f037 657.29kB / 657.29kB 0.2s done
#5 sha256:37c25e886b24e7c9999e53f310aa87066a2cc8fabba0e866e4769854f0f2803a 965B / 965B done
#5 sha256:69e2f037cdb30c8d329b17dad42cd9d92a45d93c17e6699650b23c55eceacb5f 7.33MB / 7.33MB 0.4s done
#5 sha256:3e010093287c245d72a774033b4cddd6451a820bfbb1948c97798e1838858dd2 643.66kB / 643.66kB 0.2s done
#5 unpacking gcr.io/distroless/java:11@sha256:b715126ebd36e5d5c2fd730f46a5b3c3b760e82dc18dffff7f5498d0151137c9 0.2s done
#5 ERROR: wrong diff id calculated on extraction "sha256:d5d063acba121b99927d52088ad7dfd9ee982880626fb10e1aef5f3ead99892d"

#6 [maven_tool_chain_cache 1/8] FROM docker.io/library/maven:3.6.3-jdk-11-s...
#6 CANCELED
------
 > [stage-1 1/2] FROM gcr.io/distroless/java:11@sha256:b715126ebd36e5d5c2fd730f46a5b3c3b760e82dc18dffff7f5498d0151137c9:
------
error: failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to build LLB: wrong diff id calculated on extraction "sha256:d5d063acba121b99927d52088ad7dfd9ee982880626fb10e1aef5f3ead99892d"
FATA[0006] failed to build: build: exit status 1        
FATA[0006] failed to run task: exit status 1  

Builds fail with "preparing rootfs caused: invalid argument"

Hey folks!

I'm trying to use oci-build-task on a remote worker running Ubuntu 20.04, and the build process is failing at:

#8 [build 4/8] RUN pip install -r requirements.txt
10:39:06
#8 0.317 container_linux.go:367: starting container process caused: process_linux.go:459: container init caused: rootfs_linux.go:46: preparing rootfs caused: invalid argument

At the same time, dmesg on the host logs:

new mount options do not match the existing superblock, will be ignored

My worker is using a tmpfs for the workdir, but I've also tried using the default /tmp in my Ubuntu install, with the same results.

I see some similar issues have been raised, and Google tells me that this has something to do with cgroups, but I'm not sure where to go from here.

Any ideas? :)

Thanks!
D

How to use the --secret flag from the build-image job

Hello,

I am wondering how to use the --secret flag for docker build in a Concourse build-image job.

For example, we have to copy a file from an AWS S3 bucket into the Docker image. For that, we pass the credentials to the docker build phase using the --secret flag, as described at this link:

RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...
$ docker build --secret id=aws,src=$HOME/.aws/credentials .

=> How can I integrate that into the build-image job?

jobs:
  - name: build-image
    serial: true
    plan:
      - get: repo
        trigger: true
      - task: build
        privileged: true
        config:
          platform: linux
          image_resource:
            type: registry-image
            source:
              repository: vito/oci-build-task
          params:
            DOCKERFILE: images/Dockerfile
            CONTEXT: images
          inputs:
            - name: repo
              path: .
          outputs:
            - name: image

FYI, this PR (#34) is implementing a similar feature, but for the --ssh flag.
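If a BUILDKIT_SECRET_*-style param existed, by analogy with BUILD_ARG_* and BUILDKIT_SSH (this is an assumption, not a documented feature; check the README for your version), the task params might look like this, where the aws-creds input and its credentials file are hypothetical names:

```yaml
params:
  DOCKERFILE: images/Dockerfile
  CONTEXT: images
  # hypothetical: mount the file at aws-creds/credentials as secret id "aws"
  BUILDKIT_SECRET_aws: aws-creds/credentials
inputs:
  - name: repo
    path: .
  - name: aws-creds  # hypothetical input carrying the credentials file
```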

Best,

Upstream moby/buildkit dependency

This is more of a query than a request or bug report (I know @vito is busy with v6.0 - leaving this here for consideration).

This resource image seems to currently depend on moby/buildkit:master; however, the Docker Hub image vito/oci-build-task hasn't been updated in 2 months.

So my question is twofold:

  • If this is supposed to track moby/buildkit:master should it not auto-trigger (or trigger on a regular basis) on new versions of moby/buildkit:master?
  • Would it be preferable to track a particular tag of moby/buildkit (as in moby/buildkit:v0.6.3, for example) and bump that explicitly as needed?

PS: Not sure if the pipeline building this resource image has a particular pinned SHA of moby/buildkit:master

Support http basic auth credentials for FROM repo.

I'd like to switch over to this new task (from the docker-image-resource). However, the FROM image in my Dockerfile requires a username/password to pull it. I don't see any way to do this with oci-build-task currently. This seems like pretty important functionality, especially if the community is trying to push people in the oci-build-task direction.

From Alex:
"It might be easy enough to add support for it; I think buildkit will look under $DOCKER_CONFIG/config.json for credentials, the task just doesn't support creating/configuring it yet."

Does anyone else know of a workaround?
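Building on Alex's hint, one possible workaround is to generate the config.json yourself in a step before the build runs. A sketch with a hypothetical registry and placeholder credentials (untested against a real registry):

```shell
# Hypothetical workaround: write a docker config.json where buildkit looks
# for credentials before running the build.
# registry.example.com, USERNAME and PASSWORD are placeholders.
USERNAME=user
PASSWORD=pass
export DOCKER_CONFIG="$PWD/.docker"
mkdir -p "$DOCKER_CONFIG"
# docker-style auth entry: base64 of "user:password"
auth="$(printf '%s:%s' "$USERNAME" "$PASSWORD" | base64 | tr -d '\n')"
cat > "$DOCKER_CONFIG/config.json" <<EOF
{"auths": {"registry.example.com": {"auth": "$auth"}}}
EOF
```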

overlayfs snapshot driver

I'm not sure about others, but for me, when using this image to build other images, it uses the naive snapshot driver rather than overlayfs. This means that when unpacking an image, each unpacked layer is a copy of the previous layer plus the new content, rather than an overlay of just the difference. This is time-consuming due to the disk IO involved and can use a lot more disk space when building images.

When I intercept a step using this image I see

ps -ef | grep buildkitd
130 root      2:21 buildkitd --root /tmp/buildkitd --addr unix:///tmp/buildkitd/buildkitd.sock

If I wait for the build to finish and start buildkitd myself with --debug added, I see

~ # buildkitd --root /tmp/buildkitd --addr unix:///tmp/buildkitd/buildkitd.sock --debug
DEBU[0000] auto snapshotter: overlayfs is not available for /tmp/buildkitd, trying fuse-overlayfs: failed to mount overlay: invalid argument 

If I change --root to a filesystem that isn't on overlay, it is then able to use the overlayfs driver:

~ # df -h /tmp
Filesystem                Size      Used Available Use% Mounted on
overlay                  49.0G     12.2G     34.2G  26% /
~ # df -h /scratch
Filesystem                Size      Used Available Use% Mounted on
/dev/vdb                 49.0G     12.2G     34.2G  26% /scratch
~ # buildkitd --root /scratch/tmp/buildkitd --addr unix:///tmp/buildkitd/buildkitd.sock --debug
INFO[0000] auto snapshotter: using overlayfs            
...

I'm not sure we can just trust /scratch to exist, but I think we should figure out a --root that can take advantage of the overlayfs snapshot driver.
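The kind of logic being suggested could be sketched like this (choose_root and the /scratch fallback are hypothetical, not existing task code):

```shell
# Hypothetical helper: pick a buildkitd --root that is not itself on an
# overlay filesystem, so the overlayfs snapshotter can be used.
choose_root() {
  fstype="$1"  # filesystem type of the default root's mount, e.g. from: stat -f -c %T /tmp
  if [ "$fstype" = "overlay" ] || [ "$fstype" = "overlayfs" ]; then
    echo /scratch/tmp/buildkitd  # assumes a non-overlay /scratch exists
  else
    echo /tmp/buildkitd
  fi
}

# e.g.: buildkitd --root "$(choose_root "$(stat -f -c %T /tmp)")" \
#         --addr unix:///tmp/buildkitd/buildkitd.sock
```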

context=cwd and caching enabled may cause ever increasing image size

Pipeline to demonstrate

jobs:
- name: Build Growing Image
  plan:
  # This would normally be a git resource, but done to make this self contained
  - task: write dockerfile  
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: alpine}
      outputs: [name: docker]
      run:
        path: sh
        args: [-ec, 'printf "FROM alpine\nCOPY . /data/\n" >docker/Dockerfile']
  - task: build
    privileged: true
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: vito/oci-build-task}
      inputs: [{name: docker, path: .}]
      outputs: [{name: image}]
      caches: [{path: cache}]
      run: {path: build}

After repeatedly running the job on a single worker (to make sure I was getting the cache every time), I saw context sizes of 181B, 2.79MB, 5.58MB, 11.15MB, 22.30MB, ... As you can see, the context (and the resulting image) doubles in size on every build. This is also starting from the tiny alpine image; depending on your base image, you can easily get to multiple GB in a few builds.

The image size depends on the Dockerfile and .dockerignore files, but COPY . is not that uncommon in a Dockerfile. Even with a Dockerfile that copies only specific items, having the context include the cache is not great, as it means sending the entire contents of the cache with every build; for larger images the cache can become very large.

Since nothing is actually wrong, I would recommend either removing the section talking about path: . or noting this behavior when CONTEXT=. and caching is enabled.
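One possible mitigation (my own suggestion, not an official fix): keep the cache and output directories out of the build context with a .dockerignore next to the Dockerfile, so COPY . never picks them up:

```
cache
image
```

This stops each build's cache (and the previous image output) from being copied into the new image and then re-cached, which is what makes the context double every run.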

cache saves are suddenly failing

#24 writing layer sha256:df25d6379e2e96da40fd486dd27cada82f981297576999c97e5b49eecab92e76
#24 writing layer sha256:df25d6379e2e96da40fd486dd27cada82f981297576999c97e5b49eecab92e76 0.4s done
#24 writing config sha256:0d0f44b862a9954cdaee2e3260da9469a23407182218a44db36f853c1c1b7109
ERRO[0090] (*service).Write failed                       error="rpc error: code = Canceled desc = context canceled" expected="sha256:781c81f1712469207aa8d56967b55cfd67986c9aadf5d80e8d4d7ae4f933b1c0" ref="sha256:781c81f1712469207aa8d56967b55cfd67986c9aadf5d80e8d4d7ae4f933b1c0" total=3290
#24 writing config sha256:0d0f44b862a9954cdaee2e3260da9469a23407182218a44db36f853c1c1b7109 0.0s done
#24 writing manifest sha256:781c81f1712469207aa8d56967b55cfd67986c9aadf5d80e8d4d7ae4f933b1c0 0.0s done
#24 DONE 1.7s

The cache is no longer being used on follow-up builds, and the image is rebuilt every time, so I suspect the above error may be related.

Starting with an issue here, in case it's common among concourse users, or an issue specific to concourse.

How to migrate load_base from docker-image

Hi,

How do I migrate the following load_base usage?

resources:
- name: openjdk-11
  type: docker-image
  source:
    repository: ((our.repo))/adoptopenjdk/openjdk11
    tag: alpine-jre
    <<: *repo-conf

jobs:
- name: job01
  plan:
  [...]
  - get: openjdk-11
    params: { save: true }
  - put: docker-app
    params:
      load_base: openjdk-11
      build: output-build
      tag_file: version-app/version

I suppose something like this:

jobs:
- name: job01
  plan:
  [...]
  - get: openjdk-11
    params: { format: oci }
  - task: build
      privileged: true
      config:
        platform: linux
        image_resource:
          type: registry-image
          source:
            repository: concourse/oci-build-task
        caches:
          - path: cache ????
        params:
          IMAGE_ARG_????: ????/image.tar
        inputs:
        - name: output-build
          path: .
        outputs:
        - name: image
        run:
          path: build
  - put: docker-app
    params: 
      image: image/image.tar
      version: version-app/version

But I don't know how to use cache properly
or how to set IMAGE_ARG_***

In the doc:
For example, IMAGE_ARG_base_image=ubuntu/image.tar will set base_image to a local image reference for using ubuntu/image.tar
I suppose I have to replace ubuntu with openjdk-11 in my example, but I'm not sure.
And I don't know what to do with base_image.
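If I read the doc right, IMAGE_ARG_base_image just needs a matching ARG in the Dockerfile. Applied to this example it might look like the following (a sketch, assuming the get step with format: oci places the tarball at openjdk-11/image.tar):

```Dockerfile
# The task turns IMAGE_ARG_base_image=openjdk-11/image.tar into a local
# image reference and passes it as the "base_image" build arg.
ARG base_image
FROM ${base_image}
```

with `IMAGE_ARG_base_image: openjdk-11/image.tar` in the task's params:, in place of the IMAGE_ARG_????: entry above.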

Is it possible to have documentation for this use case, please?

Thx

There's no way to pass --ssh flag when using K8s credentials manager

Hi Vito,

Here's a frustrating issue. In my Dockerfile, some pip dependencies come from private repositories. For a manual build, I pass the key as the following:

  1. Inside Dockerfile
    RUN --mount=type=ssh,id=github_ssh_key $VIRTUAL_ENV/bin/pip3 install -U -r ./hats/requirements-test.txt
  2. When executing the docker command
    DOCKER_BUILDKIT=1 docker build --progress=plain -t image:tag -f Dockerfile --ssh github_ssh_key=<PATH-TO-KEY> .

Now, my Concourse uses K8s credentials manager as it runs as Helm Chart. My job looks like the following:

jobs:

- name: build-image
  plan:
  - get: repo
  - task: build-image-task
    privileged: true
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: vito/oci-build-task
      inputs:
      - name: repo
      outputs:
      - name: image
      run:
        path: build
      caches:
      - path: cache
      params:
        CONTEXT: repo
        BUILD_ARG_ssh: github_ssh_key=((github-key))

((github-key)) is a K8s secret resource.

I was hoping the line BUILD_ARG_ssh: github_ssh_key=((github-key)) would pass the ssh flag as --ssh github_ssh_key=<MY-KEY>, but then I realized that the --ssh flag accepts only the path to the key, not the key value. Since I'm using the K8s credentials manager from the Helm chart, I have no idea how to find a path to the key.
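One possible workaround (a sketch; the step name, output name, and file path are mine): write the credential-manager value to a file in a preceding task, then point BUILDKIT_SSH at that path instead of using BUILD_ARG_ssh:

```yaml
- task: write-key
  config:
    platform: linux
    image_resource:
      type: registry-image
      source: {repository: alpine}
    outputs: [{name: keys}]
    params: {GITHUB_KEY: ((github-key))}
    run:
      path: sh
      args: [-ec, 'printf "%s" "$GITHUB_KEY" > keys/github_ssh_key']
# then in build-image-task, add "keys" to inputs and set:
#   params:
#     BUILDKIT_SSH: github_ssh_key=keys/github_ssh_key
```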

Add support for `--add-host` when building an image

I am looking at using this task to replace some tasks that I've written that use docker/dind for building Docker images. For some of our images we need to use the --add-host option of docker build.

Skimming the code for buildkit, I think this could be done with --opt add-hosts=<host:ip>. This is my first time looking at buildkit, so I may be entirely mistaken.

I suppose the simplest way to add this functionality is to check for a non-empty task param called ADD_HOSTS.
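For anyone experimenting outside the task, the buildctl-level equivalent would presumably look like this (illustrative only; I haven't verified the option name beyond the code skim above, and the host/IP are placeholders):

```
buildctl build --frontend dockerfile.v0 \
  --local context=. --local dockerfile=. \
  --opt add-hosts=internal.example.com:10.0.0.1 \
  --output type=docker,dest=image.tar
```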

build exit status 2: rbind ro: operation not permitted

I'm trying to get this image to build per the example, but I cannot seem to get it to work...
(I have tried this using the "master", "latest", and "0.1.0" tags.)

jobs:
- name: Build PHP-FPM (Stretch)
  plan:
  - in_parallel:
      steps:
      - get: repo
        resource: AEGIS Repository
  - task: build
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: vito/oci-build-task
          tag: master
      params:
        CONTEXT: .
        DOCKERFILE: ./install/php/Dockerfile.debian
      run:
        path: build
      inputs:
      - name: repo
        path: .

Every time I run the build, I get log output like this:

#2 [internal] load .dockerignore
#2 ...

#1 [internal] load build definition from Dockerfile.debian
#1 transferring dockerfile: 3.04kB 0.0s done
#1 DONE 1.6s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 2.4s
error: failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: failed to mount /tmp/buildkit-mount734887458: [{Type:bind Source:/tmp/buildkitd/runc-native/snapshots/snapshots/1 Options:[rbind ro]}]: operation not permitted
FATA[0003] failed to build: build: exit status 1        
FATA[0003] failed to run task: exit status 1        

Have I missed something?

Multi-host build cache

I've turned on the caching for my builds using:

                  caches:
                  - path: cache

This appears to cache the build only on the worker it executes on. We use a pool of workers which is constantly scaling up and down, so the workers are not long-lived. This severely limits the usefulness of worker-based caching.

The above probably makes sense, really. oci-build-task writes the docker image to a file and a subsequent step pushes it to the registry, so it makes sense that I might have to set up something similar for the cache as well. However, I don't see any resources associated with pushing buildkit cache to my repo. I tried syncing the cache directory to S3 and then loading that as an input, but that didn't seem to work either.

Is this the intended behavior? Is there a way to cache builds across workers that I'm not seeing?

Failed to build arm image

I'm using oci-build-task to create images, but not only for linux/amd64. When I try to use any of these:

  • IMAGE_PLATFORM: linux/arm/v6
  • IMAGE_PLATFORM: linux/arm/v7
  • IMAGE_PLATFORM: linux/arm64

I get the following error

#8 18.15 Traceback (most recent call last):
#8 18.15   File "/usr/local/bin/pip", line 8, in <module>
#8 18.16     sys.exit(main())
#8 18.16   File "/usr/local/lib/python3.9/site-packages/pip/_internal/cli/main.py", line 70, in main
#8 18.16     return command.main(cmd_args)
#8 18.16   File "/usr/local/lib/python3.9/site-packages/pip/_internal/cli/base_command.py", line 98, in main
#8 18.16     return self._main(args)
#8 18.16   File "/usr/local/lib/python3.9/site-packages/pip/_internal/cli/base_command.py", line 214, in _main
#8 18.16     self.handle_pip_version_check(options)
#8 18.16   File "/usr/local/lib/python3.9/site-packages/pip/_internal/cli/req_command.py", line 143, in handle_pip_version_check
#8 18.16     session = self._build_session(
#8 18.16   File "/usr/local/lib/python3.9/site-packages/pip/_internal/cli/req_command.py", line 88, in _build_session
#8 18.16     session = PipSession(
#8 18.16   File "/usr/local/lib/python3.9/site-packages/pip/_internal/network/session.py", line 289, in __init__
#8 18.17     self.headers["User-Agent"] = user_agent()
#8 18.17   File "/usr/local/lib/python3.9/site-packages/pip/_internal/network/session.py", line 132, in user_agent
#8 18.17     linux_distribution = distro.linux_distribution()  # type: ignore
#8 18.17   File "/usr/local/lib/python3.9/site-packages/pip/_vendor/distro.py", line 125, in linux_distribution
#8 18.17     return _distro.linux_distribution(full_distribution_name)
#8 18.17   File "/usr/local/lib/python3.9/site-packages/pip/_vendor/distro.py", line 681, in linux_distribution
#8 18.17     self.version(),
#8 18.17   File "/usr/local/lib/python3.9/site-packages/pip/_vendor/distro.py", line 741, in version
#8 18.17     self.lsb_release_attr('release'),
#8 18.17   File "/usr/local/lib/python3.9/site-packages/pip/_vendor/distro.py", line 903, in lsb_release_attr
#8 18.17     return self._lsb_release_info.get(attribute, '')
#8 18.17   File "/usr/local/lib/python3.9/site-packages/pip/_vendor/distro.py", line 556, in __get__
#8 18.18     ret = obj.__dict__[self._fname] = self._f(obj)
#8 18.18   File "/usr/local/lib/python3.9/site-packages/pip/_vendor/distro.py", line 1014, in _lsb_release_info
#8 18.18     stdout = subprocess.check_output(cmd, stderr=devnull)
#8 18.18   File "/usr/local/lib/python3.9/subprocess.py", line 424, in check_output
#8 18.18     return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
#8 18.18   File "/usr/local/lib/python3.9/subprocess.py", line 528, in run
#8 18.18     raise CalledProcessError(retcode, process.args,
#8 18.18 subprocess.CalledProcessError: Command '('lsb_release', '-a')' returned non-zero exit status 1.
#8 ERROR: executor failed running [/dev/.buildkit_qemu_emulator /bin/sh -c pip install --no-cache-dir -r requirements.txt]: exit code: 1
------
 > [4/4] RUN pip install --no-cache-dir -r requirements.txt:
------
Dockerfile:7
--------------------
   5 |     COPY . .
   6 |     
   7 | >>> RUN pip install --no-cache-dir -r requirements.txt
   8 |     
   9 |     ENTRYPOINT python ./helloWorld.py
--------------------
error: failed to solve: rpc error: code = Unknown desc = executor failed running [/dev/.buildkit_qemu_emulator /bin/sh -c pip install --no-cache-dir -r requirements.txt]: exit code: 1
FATA[0055] failed to build: build: exit status 1        
FATA[0055] failed to run task: exit status 1            

In the repository below you will find the example (Dockerfile, helloWorld.py and requirements.txt) that I'm using, along with the pipeline that creates the image:

https://github.com/woolter/test-oci-build-task

The code related to the push is commented out, since at the moment I'm not able to build the image.

Trailing newlines end up in buildctl command's build args options

👋 👋
There is a missing chomp/trim somewhere in here, and I am not sure where exactly the right place would be to add it. But here's an example where it's an issue:

use-case for context:

  • I have a job that builds an image and pushes it with a tag value obtained from the git resource, after a load_var.
  • in the next step, a second image is built that uses that tag in its FROM, referencing the base image it just built.
    • the tag is set via a BUILD_ARG_ param in this task.
    • the build arg defaults to latest but can be overridden when building to specify the base image's tag.

the Dockerfile of the second image starts something like this:

ARG BASE_IMG_TAG=latest
FROM the-base-image:${BASE_IMG_TAG}

# etc

and a (pseudoconfig) pipeline like

resources:
- name: src
  type: git
  icon: git
  source:
    uri: thing.git

- name: foo
  type: registry-image
  icon: docker
  source:
    repository: image-foo

jobs:
- name: build
  plan:
  - get: src
  - load_var: base-img-tag
    file: src/.git/describe_ref # <-------------- !! this guy has a trailing \n !

  # tasks to build and push a base image [...]

  # task to build the next image using the base image:
  - task: build-foo
    privileged: true
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: vito/oci-build-task}
      inputs:
      - name: src
      outputs:
      - name: image
      caches:
      - path: cache
      run:
        path: build
    params:
      CONTEXT: src
      DOCKERFILE: src/Dockerfile
      BUILD_ARG_BAZ: ((normal-var-works-fine))
      BUILD_ARG_BASE_IMG_TAG: ((.:base-img-tag)) # <-------------- !! this guy!

With debug param set true, you get a log line like

DEBU[0004] building   buildctl-args="[build --progress plain --frontend dockerfile.v0 --local context=src --local dockerfile=src --opt filename=Dockerfile --output type=docker,dest=/tmp/build/88f95496/image/image.tar --export-cache type=local,mode=min,dest=/tmp/build/88f95496/cache --opt build-arg:BASE_IMG_TAG=4.42.1-86-gfc5e687f\n --opt build-arg:BAZ=no-problem-yo  

โŒ --opt build-arg:BASE_IMG_TAG=4.42.1-86-gfc5e687f\n

which predictably causes

failed to create LLB definition: failed to parse stage name "the-base-image:4.42.1-86-gfc5e687f\n": invalid reference format

This works fine when using .git/short_ref in the load_var step as the tag value instead. I guess something about how those files are populated makes the difference.
(debug log snippet:

[...] --opt build-arg:BASE_IMG_TAG=fc5e687 [...]

no \n in sight)

So, I'm not sure if this should be fixed

  • in the git resource - make the describe_ref file be the same as short_ref with respect to line endings
  • in the load_vars step implementation, and (optionally?) trim line endings when loading from raw files
  • in this task's code, to always trim \n from build arg values
  • all of the above?

guidance appreciated 🙏
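As a stopgap until this is fixed in one of those places, the value can be trimmed before load_var reads it. A sketch (the tag value is just the one from the log above, used as an example):

```shell
# Simulate the git resource's describe_ref file, which ends in a newline.
printf '4.42.1-86-gfc5e687f\n' > describe_ref

# Strip trailing newlines into a file that load_var can read safely,
# e.g. in a small task step before the load_var.
tr -d '\n' < describe_ref > base-img-tag
```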

Filesystem "proc" must be mounted on ordinary directory

Concourse 6.40

I'm using the oci-build-task to attempt to build an image; however, while executing a RUN command in my Dockerfile it gets this error:

container_linux.go:367: starting container process caused: process_linux.go:459: container init caused: rootfs_linux.go:58: mounting "proc" to rootfs at "/proc" caused: filesystem "proc" must be mounted on ordinary directory

The error seems to be coming from this line of code where it checks for symlinks.

If I hijack the task and run the following commands:

/tmp/buildkitd/runc-overlayfs/snapshots/snapshots/42/fs # rm -rf proc
/tmp/buildkitd/runc-overlayfs/snapshots/snapshots/42/fs # mkdir -p proc
/tmp/buildkitd/runc-overlayfs/snapshots/snapshots/42/fs # rm -rf sys
/tmp/buildkitd/runc-overlayfs/snapshots/snapshots/42/fs # mkdir -p sys

I can then manually execute the build command which then succeeds.

Support insecure registries in FROM

Looks like this is possible by configuring buildkitd like so:

[registry."some-registry.example.com"]
http = true

However, I have the same concerns as with #2: should we encourage folks to use the registry-image resource for fetching these instead, once #1 is done?

Is systemd namespace necessary in bin/setup-cgroups?

Snippet from bin/setup-cgroups :

if ! test -e /sys/fs/cgroup/systemd ; then
  mkdir /sys/fs/cgroup/systemd
  mount -t cgroup -o none,name=systemd none /sys/fs/cgroup/systemd
fi

It kind of assumes that the host system runs systemd, which in my case is not true.
So upon the first task run everything works, but then a new cgroup is created, and any attempt to start containers on the host fails with docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown., as there is now a new cgroup namespace and Docker wants it all mounted.

I'm not sure if this namespace is required by something, but if it isn't, then removing the snippet, or perhaps an env flag to skip it, would be a good solution.

docs for contributing

It'd be nice to have documentation around how to contribute to this project.

I'm trying to work on #37 and had a hard time figuring out how to run tests.

This is what I ended up having to do ... not sure if there's a better or more correct way.

docker run --privileged -it -v $(pwd):/code ubuntu /bin/sh
apt update
apt install curl golang
cd /code
scripts/test # repeat this as needed

I also had to spend a little time figuring out how to build with the experimental features, using DOCKER_BUILDKIT=1:

DOCKER_BUILDKIT=1 docker build -t vito/oci-build-task:rc .
