
buildx's Introduction

The Moby Project

Moby Project logo

Moby is an open-source project created by Docker to enable and accelerate software containerization.

It provides a "Lego set" of toolkit components, the framework for assembling them into custom container-based systems, and a place for all container enthusiasts and professionals to experiment and exchange ideas. Components include container build tools, a container registry, orchestration tools, a runtime and more, and these can be used as building blocks in conjunction with other tools and projects.

Principles

Moby is an open project guided by strong principles, aiming to be modular, flexible and without too strong an opinion on user experience. It is open to the community to help set its direction.

  • Modular: the project includes many components with well-defined functions and APIs that work together.
  • Batteries included but swappable: Moby includes enough components to build fully featured container systems, but its modular architecture ensures that most of the components can be swapped out for different implementations.
  • Usable security: Moby provides secure defaults without compromising usability.
  • Developer focused: the APIs are intended to be functional and useful for building powerful tools. They are not necessarily intended as end-user tools but as components aimed at developers. Documentation and UX are aimed at developers, not end users.

Audience

The Moby Project is intended for engineers, integrators and enthusiasts looking to modify, hack, fix, experiment, invent and build systems based on containers. It is not for people looking for a commercially supported system, but for people who want to work and learn with open source code.

Relationship with Docker

The components and tools in the Moby Project are initially the open source components that Docker and the community have built for the Docker Project. New projects can be added if they fit with the community goals. Docker is committed to using Moby as the upstream for the Docker Product. However, other projects are also encouraged to use Moby as an upstream, and to reuse the components in diverse ways, and all these uses will be treated in the same way. External maintainers and contributors are welcomed.

The Moby project is not intended as a location for support or feature requests for Docker products, but as a place for contributors to work on open source code, fix bugs, and make the code more useful. The releases are supported by the maintainers, community and users, on a best-effort basis only, and are not intended for customers who want enterprise or commercial support; Docker EE is the appropriate product for these use cases.


Legal

Brought to you courtesy of our legal counsel. For more context, please see the NOTICE document in this repo.

Use and transfer of Moby may be subject to certain restrictions by the United States and other governments.

It is your responsibility to ensure that your use and/or transfer does not violate applicable laws.

For more information, please see https://www.bis.doc.gov

Licensing

Moby is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.

buildx's People

Contributors

akihirosuda, bossmc, cpuguy83, crazy-max, dependabot[bot], developer-guy, dgageot, dvdksn, errordeveloper, felixdesouza, glours, iankingori, jedevc, jsternberg, kenyon, ktock, laurazard, laurentgoderre, morlay, mqasimsarfraz, ndeloof, nicks, nicksieger, saulshanabrook, silvin-lubecki, thajeztah, tonistiigi, ulyssessouza, vanstee, vvoland


buildx's Issues

add kubepod driver

Similar to the docker-container driver, there could be support for a kubepod driver that bootstraps itself inside a Kubernetes cluster. Kube endpoints in the docker context can be used to provide the initial configuration. Switching between drivers was implemented in #20. It would be nice if all supported platforms of the k8s workers could also be detected automatically.

@AkihiroSuda
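The driver abstraction described above makes room for this. A hypothetical invocation is sketched below; the `kubernetes` driver name and exact flags are assumptions for illustration, not a committed interface:

```shell
# Create a builder backed by a BuildKit pod in the current kube context
# (driver name "kubernetes" is an assumption for this sketch).
docker buildx create --driver kubernetes --name kube-builder
docker buildx use kube-builder
# Bootstrapping would start the pod and report its detected platforms.
docker buildx inspect --bootstrap
```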

"sending tarball" takes a long time even when the image already exists

When I build an image which already exists (because of a previous build on the same engine with 100% cache hit), the builder still spends a lot of time in "sending tarball". This causes a noticeable delay in the build. Perhaps this delay could be optimized away in the case of 100% cache hit?

For example, when building a 1.84GB image with 51 layers, the entire build is 9s, of which 8s is in "sending tarball" (see output below).

It would be awesome if fully cached builds returned at near-interactive speed!

 => [internal] load build definition from Dockerfile          0.0s
 => => transferring dockerfile: 2.53kB                        0.0s
 => [internal] load .dockerignore                             0.0s
 => => transferring context: 2B                               0.0s
 => [internal] load metadata for docker.io/library/alpine:la  1.0s
 => [1/51] FROM docker.io/library/alpine@sha256:6a92cd1fcdc8  0.0s
 => => resolve docker.io/library/alpine@sha256:6a92cd1fcdc8d  0.0s
 => CACHED [2/51] RUN apk update                              0.0s
 => CACHED [3/51] RUN apk add openssh                         0.0s
 => CACHED [4/51] RUN apk add bash                            0.0s
 => CACHED [5/51] RUN apk add bind-tools                      0.0s
 => CACHED [6/51] RUN apk add curl                            0.0s
 => CACHED [7/51] RUN apk add docker                          0.0s
 => CACHED [8/51] RUN apk add g++                             0.0s
 => CACHED [9/51] RUN apk add gcc                             0.0s
 => CACHED [10/51] RUN apk add git                            0.0s
 => CACHED [11/51] RUN apk add git-perl                       0.0s
 => CACHED [12/51] RUN apk add make                           0.0s
 => CACHED [13/51] RUN apk add python                         0.0s
 => CACHED [14/51] RUN apk add openssl-dev                    0.0s
 => CACHED [15/51] RUN apk add vim                            0.0s
 => CACHED [16/51] RUN apk add py-pip                         0.0s
 => CACHED [17/51] RUN apk add file                           0.0s
 => CACHED [18/51] RUN apk add groff                          0.0s
 => CACHED [19/51] RUN apk add jq                             0.0s
 => CACHED [20/51] RUN apk add man                            0.0s
 => CACHED [21/51] RUN cd /tmp && git clone https://github.c  0.0s
 => CACHED [22/51] RUN apk add go                             0.0s
 => CACHED [23/51] RUN apk add coreutils                      0.0s
 => CACHED [24/51] RUN apk add python2-dev                    0.0s
 => CACHED [25/51] RUN apk add python3-dev                    0.0s
 => CACHED [26/51] RUN apk add tar                            0.0s
 => CACHED [27/51] RUN apk add vim                            0.0s
 => CACHED [28/51] RUN apk add rsync                          0.0s
 => CACHED [29/51] RUN apk add less                           0.0s
 => CACHED [30/51] RUN pip install awscli                     0.0s
 => CACHED [31/51] RUN curl --silent --location "https://git  0.0s
 => CACHED [32/51] RUN curl https://dl.google.com/dl/cloudsd  0.0s
 => CACHED [33/51] RUN curl -L -o /usr/local/bin/kubectl htt  0.0s
 => CACHED [34/51] RUN curl -L -o /usr/local/bin/kustomize    0.0s
 => CACHED [35/51] RUN apk add ruby                           0.0s
 => CACHED [36/51] RUN apk add ruby-dev                       0.0s
 => CACHED [37/51] RUN gem install bigdecimal --no-ri --no-r  0.0s
 => CACHED [38/51] RUN gem install kubernetes-deploy --no-ri  0.0s
 => CACHED [39/51] RUN apk add npm                            0.0s
 => CACHED [40/51] RUN npm config set unsafe-perm true        0.0s
 => CACHED [41/51] RUN npm install -g yarn                    0.0s
 => CACHED [42/51] RUN npm install -g netlify-cli             0.0s
 => CACHED [43/51] RUN apk add libffi-dev                     0.0s
 => CACHED [44/51] RUN pip install docker-compose             0.0s
 => CACHED [45/51] RUN apk add mysql-client                   0.0s
 => CACHED [46/51] RUN (cd /tmp && curl -L -O https://releas  0.0s
 => CACHED [47/51] RUN apk add shadow sudo                    0.0s
 => CACHED [48/51] RUN echo '%wheel ALL=(ALL) NOPASSWD: ALL'  0.0s
 => CACHED [49/51] RUN useradd -G docker,wheel -m -s /bin/ba  0.0s
 => CACHED [50/51] RUN groupmod -o -g 999 docker              0.0s
 => CACHED [51/51] WORKDIR /home/sh                           0.0s
 => exporting to oci image format                             8.8s
 => => exporting layers                                       0.3s
 => => exporting manifest sha256:69088589c4e63094e51ae0e34e6  0.0s
 => => exporting config sha256:65db1e1d42a26452307b43bc5c683  0.0s
 => => sending tarball                                        8.3s
 => importing to docker                                       0.1s
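Context for the slow step: with the docker-container driver, the finished image is exported as an OCI tarball and streamed back into the Docker engine (the "importing to docker" step above), so even a fully cached build pays the export cost. Until that can be skipped on a 100% cache hit, one workaround sketch, assuming a registry is an acceptable sink for the image, is to push directly instead of loading into the local engine:

```shell
# Pushing straight to a registry avoids the tarball round-trip into the
# local engine (the registry name below is a placeholder).
docker buildx build --push -t registry.example.com/app:dev .
```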

apk add not working when building buildx

Cloning git

# git clone git://github.com/docker/buildx && cd buildx
Cloning into 'buildx'...
remote: Enumerating objects: 15, done.
remote: Counting objects: 100% (15/15), done.
remote: Compressing objects: 100% (15/15), done.
remote: Total 5065 (delta 0), reused 4 (delta 0), pack-reused 5050
Receiving objects: 100% (5065/5065), 5.70 MiB | 9.00 MiB/s, done.
Resolving deltas: 100% (1618/1618), done.

Installing

# make install
./hack/binaries
+ progressFlag=
+ '[' '' == true ']'
+ case $buildmode in
+ binariesDocker
+ mkdir -p bin/tmp
+ export DOCKER_BUILDKIT=1
+ DOCKER_BUILDKIT=1
++ mktemp -t docker-iidfile.XXXXXXXXXX
+ iidfile=/tmp/docker-iidfile.DBjShhbJPS
+ platformFlag=
+ '[' -n '' ']'
+ docker build --target=binaries --iidfile /tmp/docker-iidfile.DBjShhbJPS --force-rm .
[+] Building 12.9s (14/17)
 => [internal] load build definition from Dockerfile                                                                         0.0s
 => => transferring dockerfile: 3.01kB                                                                                       0.0s
 => [internal] load .dockerignore                                                                                            0.0s
 => => transferring context: 56B                                                                                             0.0s
 => resolve image config for docker.io/docker/dockerfile:1.1-experimental                                                    1.2s
 => CACHED docker-image://docker.io/docker/dockerfile:1.1-experimental@sha256:9022e911101f01b2854c7a4b2c77f524b998891941da5  0.0s
 => [internal] load build definition from Dockerfile                                                                         0.0s
 => => transferring dockerfile: 3.01kB                                                                                       0.0s
 => [internal] load .dockerignore                                                                                            0.0s
 => [internal] load metadata for docker.io/tonistiigi/xx:golang@sha256:6f7d999551dd471b58f70716754290495690efa8421e0a1fcf18  0.0s
 => [internal] load metadata for docker.io/library/golang:1.12-alpine                                                        0.4s
 => CACHED [xgo 1/1] FROM docker.io/tonistiigi/xx:golang@sha256:6f7d999551dd471b58f70716754290495690efa8421e0a1fcf18eb11d0c  0.0s
 => CACHED [internal] helper image for file operations                                                                       0.0s
 => [gobase 1/3] FROM docker.io/library/golang:1.12-alpine@sha256:87e527712342efdb8ec5ddf2d57e87de7bd4d2fedf9f6f3547ee5768b  0.0s
 => [internal] load build context                                                                                            0.7s
 => => transferring context: 29.64MB                                                                                         0.7s
 => CACHED [gobase 2/3] COPY --from=xgo / /                                                                                  0.0s
 => ERROR [gobase 3/3] RUN apk add --no-cache file git                                                                      10.7s
------
 > [gobase 3/3] RUN apk add --no-cache file git:
#13 0.526 fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
#13 5.534 fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
#13 5.534 WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz: temporary error (try again later)
#13 10.54 WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz: temporary error (try again later)
#13 10.54 ERROR: unsatisfiable constraints:
#13 10.54   file (missing):
#13 10.54     required by: world[file]
#13 10.54   git (missing):
#13 10.54     required by: world[git]
------
rpc error: code = Unknown desc = executor failed running [/bin/sh -c apk add --no-cache file git]: exit code: 2
Makefile:5: recipe for target 'binaries' failed
make: *** [binaries] Error 1

Docker version

# docker version
Client:
 Version:           18.09.7
 API version:       1.39
 Go version:        go1.10.1
 Git commit:        2d0083d
 Built:             Wed Jul  3 12:13:59 2019
 OS/Arch:           linux/amd64
 Experimental:      true

Server:
 Engine:
  Version:          18.09.7
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.1
  Git commit:       2d0083d
  Built:            Mon Jul  1 19:31:12 2019
  OS/Arch:          linux/amd64
  Experimental:     true

The apk command also fails when DOCKER_CLI_EXPERIMENTAL and DOCKER_BUILDKIT are enabled in the environment and I try to build images.

Building without experimental mode works fine.
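The "temporary error" fetching APKINDEX usually means the build container cannot reach the Alpine mirrors, most often a DNS failure rather than a buildx bug. One common workaround, assuming the host daemon's DNS setup is the culprit, is to configure explicit resolvers in /etc/docker/daemon.json and restart the daemon:

```json
{
    "dns": ["8.8.8.8", "1.1.1.1"]
}
```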

Wrong Accept header set when doing a multi-arch build

Steps to Reproduce

  1. Clone the Docker doodle and cd into a doodle dir (e.g. birthday2019)

  2. Do a multi-arch build and push to a registry

docker buildx build -f Dockerfile.cross --platform linux/amd64,linux/arm64,linux/arm/v8,linux/s390x,linux/ppc64le,windows/amd64 -t rabrams-dev.dtr.caas.docker.io/admin/alpine:latest --push .
  3. Observe that buildx will try to push up individual image manifests before trying to unify them into a manifest list. You will get a PUT request to /v2/admin/alpine/manifests/latest with the payload
{
   "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
   "schemaVersion": 2,
   "config": {
      "mediaType": "application/vnd.docker.container.image.v1+json",
      "digest": "sha256:517227285745b426e5eb4d9d63dedc1b6759c6ac640fd432c0c5f011b709aa74",
      "size": 801
   },
   "layers": [
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "digest": "sha256:58fd16eaae0bf5c343b8f43832206c1e7f3ff712bee5257a90cf2b51703b58e9",
         "size": 100457713
      },
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "digest": "sha256:3ac6604e4ee640afc57931337dc8691e323d4f52280b2986b4e54b83b179e932",
         "size": 1403916
      }
   ]
}
  4. Buildx will then do a HEAD request for the manifest it just created, but sets a manifest-list Accept header
Accept:         application/vnd.docker.distribution.manifest.list.v2+json, *

As a consequence, the manifest is considered invalid and the client gets a 400.
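When a registry client requests a plain image manifest, its Accept header should list every media type it can handle; a manifest-list type alone does not match the pushed image manifest. A hedged sketch of probing the just-pushed manifest with both media types (the registry and repo are the ones from this report; authentication is omitted):

```shell
# HEAD the manifest with both media types listed in Accept.
curl -I \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  -H "Accept: application/vnd.docker.distribution.manifest.list.v2+json" \
  https://rabrams-dev.dtr.caas.docker.io/v2/admin/alpine/manifests/latest
```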

stuck at pushing image to docker hub

I created a builder that uses my local amd64 machine and a remote Raspberry Pi (arm/v7):

docker buildx ls
NAME/NODE DRIVER/ENDPOINT             STATUS  PLATFORMS
multi *   docker-container                    
  multi0  unix:///var/run/docker.sock running linux/amd64
  multi1  ssh://[email protected] running linux/arm/v7, linux/arm/v6
default   docker                              
  default default                     running linux/amd64

The build process seems to get stuck at pushing the image.

docker buildx build --platform linux/amd64,linux/arm/v7 -t arribada/gateway --push --progress=plain  .

It is a simple Go app:

FROM --platform=$BUILDPLATFORM golang:alpine AS build
RUN apk add --no-cache git
COPY ./ /tmp/builder/
WORKDIR /tmp/builder/
RUN CGO_ENABLED=0 go build  -o main .
FROM alpine
COPY --from=build main /usr/local/bin/
CMD /usr/local/bin/main

docker buildx build --platform linux/amd64,linux/arm/v7 -t arribada/gateway --push --progress=plain --no-cache .
#1 [internal] booting buildkit
#1 starting container buildx_buildkit_multi1
#1 starting container buildx_buildkit_multi1 8.2s done
#1 DONE 8.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 32B done
#2 DONE 0.1s

#5 [linux/amd64 internal] load metadata for docker.io/library/golang:alpine
#5 DONE 1.9s

#4 [linux/amd64 internal] load metadata for docker.io/library/alpine:latest
#4 DONE 2.0s

#7 [linux/amd64 build 1/5] FROM docker.io/library/golang:alpine@sha256:87e5...
#7 resolve docker.io/library/golang:alpine@sha256:87e527712342efdb8ec5ddf2d57e87de7bd4d2fedf9f6f3547ee5768bb3c43ff done
#7 CACHED

#6 [linux/amd64 stage-1 1/2] FROM docker.io/library/alpine@sha256:6a92cd1fc...
#6 resolve docker.io/library/alpine@sha256:6a92cd1fcdc8d8cdec60f33dda4db2cb1fcdcacf3410a8e05b3741f44a9b5998 done
#6 CACHED

#9 [internal] load build context
#9 transferring context: 119B done
#9 DONE 0.1s

#8 [linux/amd64 build 2/5] RUN apk add --no-cache git
#8 0.272 fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
#8 0.662 fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
#8 0.890 (1/5) Installing nghttp2-libs (1.38.0-r0)
#8 0.939 (2/5) Installing libcurl (7.65.1-r0)
#8 0.998 (3/5) Installing expat (2.2.7-r0)
#8 1.044 (4/5) Installing pcre2 (10.33-r0)
#8 1.107 (5/5) Installing git (2.22.0-r0)
#8 1.753 Executing busybox-1.30.1-r2.trigger
#8 1.758 OK: 21 MiB in 20 packages
#8 DONE 1.9s

#10 [linux/amd64 build 3/5] COPY ./ /tmp/builder/
#10 DONE 0.1s

#11 [linux/amd64 build 4/5] WORKDIR /tmp/builder/
#11 DONE 0.1s

#12 [linux/amd64 build 5/5] RUN CGO_ENABLED=0 go build  -o main .
#12 0.323 go: finding github.com/pkg/errors v0.8.1
#12 0.324 go: finding github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751
#12 0.324 go: finding github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4
#12 0.324 go: finding github.com/brocaar/lorawan v0.0.0-20190725071148-7d77cf375455
#12 0.325 go: finding github.com/twpayne/go-geom v1.0.6-0.20190712172859-6e5079ee5888
#12 1.187 go: finding gopkg.in/alecthomas/kingpin.v2 v2.2.6
#12 2.522 go: finding github.com/sirupsen/logrus v1.3.0
#12 2.524 go: finding github.com/stretchr/testify v1.3.0
#12 2.525 go: finding github.com/smartystreets/assertions v0.0.0-20190401211740-f487f9de1cd3
#12 2.526 go: finding github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a
#12 2.527 go: finding github.com/jacobsa/oglemock v0.0.0-20150831005832-e94d794d06ff
#12 2.528 go: finding github.com/jacobsa/crypto v0.0.0-20180924003735-d95898ceee07
#12 2.529 go: finding github.com/NickBall/go-aes-key-wrap v0.0.0-20170929221519-1c3aa3e4dfc5
#12 2.620 go: finding github.com/opencontainers/runc v0.1.1
#12 2.789 go: finding golang.org/x/net v0.0.0-20190328230028-74de082e2cca
#12 3.499 go: finding github.com/ory/dockertest v3.3.4+incompatible
#12 4.391 go: finding github.com/DATA-DOG/go-sqlmock v1.3.2
#12 4.636 go: finding github.com/opencontainers/go-digest v1.0.0-rc1
#12 4.738 go: finding github.com/d4l3k/messagediff v1.2.1
#12 4.919 go: finding github.com/opencontainers/image-spec v1.0.1
#12 5.108 go: finding golang.org/x/crypto v0.0.0-20180904163835-0709b304e793
#12 5.381 go: finding github.com/davecgh/go-spew v1.1.0
#12 6.047 go: finding github.com/lib/pq v1.0.0
#12 6.544 go: finding github.com/twpayne/go-kml v1.0.0
#12 6.596 go: finding github.com/jacobsa/ogletest v0.0.0-20170503003838-80d50a735a11
#12 6.774 go: finding github.com/docker/go-connections v0.4.0
#12 6.778 go: finding github.com/jacobsa/oglematchers v0.0.0-20150720000706-141901ea67cd
#12 7.481 go: finding github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5
#12 7.650 go: finding github.com/pmezard/go-difflib v1.0.0
#12 8.153 go: finding github.com/stretchr/objx v0.1.1
#12 8.449 go: finding golang.org/x/tools v0.0.0-20190328211700-ab21143f2384
#12 8.470 go: finding github.com/jacobsa/reqtrace v0.0.0-20150505043853-245c9e0234cb
#12 8.485 go: finding github.com/stretchr/testify v1.2.2
#12 8.614 go: finding github.com/jtolds/gls v4.20.0+incompatible
#12 8.906 go: finding github.com/twpayne/go-polyline v1.0.0
#12 8.909 go: finding github.com/gopherjs/gopherjs v0.0.0-20190328170749-bb2674552d8f
#12 8.988 go: finding golang.org/x/crypto v0.0.0-20190325154230-a5d413f7728c
#12 9.007 go: finding golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3
#12 9.034 go: finding github.com/davecgh/go-spew v1.1.1
#12 9.305 go: finding golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2
#12 9.523 go: finding golang.org/x/sys v0.0.0-20190402054613-e4093980e83e
#12 9.534 go: finding golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
#12 10.02 go: finding github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1
#12 10.37 go: finding github.com/containerd/continuity v0.0.0-20181203112020-004b46473808
#12 10.38 go: finding github.com/sirupsen/logrus v1.4.1
#12 10.60 go: finding github.com/stretchr/objx v0.1.0
#12 10.60 go: finding github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d
#12 10.96 go: finding github.com/cenkalti/backoff v2.1.1+incompatible
#12 12.38 go: finding github.com/docker/go-units v0.3.3
#12 12.55 go: finding golang.org/x/text v0.3.0
#12 12.83 go: finding golang.org/x/sys v0.0.0-20190405154228-4b34438f7a67
#12 13.51 go: finding golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33
#12 13.52 go: finding golang.org/x/net v0.0.0-20190311183353-d8887717615a
#12 13.52 go: finding github.com/konsorten/go-windows-terminal-sequences v1.0.1
#12 18.22 go: downloading github.com/twpayne/go-geom v1.0.6-0.20190712172859-6e5079ee5888
#12 18.22 go: downloading github.com/brocaar/lorawan v0.0.0-20190725071148-7d77cf375455
#12 18.22 go: downloading github.com/pkg/errors v0.8.1
#12 18.25 go: downloading gopkg.in/alecthomas/kingpin.v2 v2.2.6
#12 18.30 go: extracting github.com/pkg/errors v0.8.1
#12 18.36 go: extracting gopkg.in/alecthomas/kingpin.v2 v2.2.6
#12 18.37 go: downloading github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4
#12 18.37 go: downloading github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751
#12 18.39 go: extracting github.com/brocaar/lorawan v0.0.0-20190725071148-7d77cf375455
#12 18.39 go: extracting github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4
#12 18.40 go: downloading github.com/jacobsa/crypto v0.0.0-20180924003735-d95898ceee07
#12 18.42 go: extracting github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751
#12 18.44 go: extracting github.com/twpayne/go-geom v1.0.6-0.20190712172859-6e5079ee5888
#12 18.89 go: extracting github.com/jacobsa/crypto v0.0.0-20180924003735-d95898ceee07
#12 DONE 22.0s

#13 [linux/amd64 stage-1 2/2] COPY --from=build main /usr/local/bin/
#13 DONE 0.1s

#14 exporting to image
#14 exporting layers
#14 exporting layers 0.9s done
#14 exporting manifest sha256:b1df6565ea0301148310cbcba589664316e6467da060d44ebc83035f23ee976c 0.0s done
#14 exporting config sha256:edf051c77aa488f30f30a7891394af6d3a3a0459f87562f397b10c54b8ad5b96 0.0s done
#14 exporting manifest list sha256:5e95279884f4819a5bea408ef9b577b056c7d4aba397024052a664c2ffa1ca03 0.0s done
#14 pushing layers
#14 pushing layers 8.0s done
#14 pushing manifest for docker.io/arribada/gateway
#14 pushing manifest for docker.io/arribada/gateway 1.0s done
#14 DONE 10.0s

Reuse Go build cache to speed up incremental builds of Go applications

Does docker-buildx have a way to reuse intermediary outputs such as Go build cache, to speed up incremental builds? I believe that the regular docker build does not offer a way to do this, so I'm trying my luck here on the bleeding edge :)

The goal would be to make a docker-buildx build of my Go app just as fast as a direct go build. At the moment that's not the case, because all my dependencies need to be rebuilt every time. I imagine this would be useful in many cases, not just for Go applications.
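For reference, BuildKit's experimental Dockerfile frontend already supports cache mounts, which keep directories such as the Go module and build caches alive between builds. A sketch, assuming the 1.1-experimental syntax and the default Go cache locations:

```dockerfile
# syntax = docker/dockerfile:1.1-experimental
FROM golang:1.12-alpine AS build
WORKDIR /src
COPY . .
# Cache mounts persist across builds, so dependencies are not recompiled
# every time (target paths are the Go module and build cache defaults).
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    CGO_ENABLED=0 go build -o /out/app .
```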

Trying to use buildx with .NET Core not working; .NET Core issue?

Is this a .NET issue or a buildx issue?

Building for amd64 works fine:

d:\Dev\dockertesting>docker buildx build --platform linux/amd64 -t ramblinggeekuk/dockertesting --push .
[+] Building 211.7s (15/15) FINISHED
=> [internal] load build definition from Dockerfile 0.2s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.3s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for mcr.microsoft.com/dotnet/core/aspnet:2.2 1.1s
=> [internal] load metadata for mcr.microsoft.com/dotnet/core/sdk:2.2 1.2s
=> [build-env 1/6] FROM mcr.microsoft.com/dotnet/core/sdk:2.2@sha256:b4c25c26dc73f498073fcdb4aefe1677 110.2s
=> => resolve mcr.microsoft.com/dotnet/core/sdk:2.2@sha256:b4c25c26dc73f498073fcdb4aefe167793eb3a8c79ef 0.0s
=> => sha256:b4c25c26dc73f498073fcdb4aefe167793eb3a8c79effa76df768006b5c345b8 2.19kB / 2.19kB 0.0s
=> => sha256:36add2b62b779b538e3062839e1978d78e12e26ec2214940e9043924a29890c0 1.80kB / 1.80kB 0.0s
=> => sha256:ade3808af4d74b6181b7412b73314b5806fa39d140def18c6ee1cdbcb3ed41b1 300.69MB / 300.69MB 40.3s
=> => sha256:52c7fe5918815504427b3168845267e876464f8b010ccc09d0f61eb67dd6a17e 4.41kB / 4.41kB 0.0s
=> => sha256:dbdc36973392a980d56b8fab63383ae44582f6502001d8bbdd543aa3bf1d746e 10.79MB / 10.79MB 9.3s
=> => sha256:aaef3e0262580b9032fc6741fb099c7313834c7cf332500901e87ceeb38ac153 50.07MB / 50.07MB 58.7s
=> => sha256:a4d8138d0f6b5a441aaa533faf5fe0c3996a6ca42643c46f4402c7e8bda53742 45.34MB / 45.34MB 53.0s
=> => sha256:f59d6d019dd5b8398eb8d794e3fafe31f9411cc99a71dabfa587bf732b4a7385 4.34MB / 4.34MB 62.4s
=> => sha256:f62345fbba0dbbb77ba8aca5b81a4f0d8ec16c5d540def66c0b8e8d6492fa444 13.25MB / 13.25MB 59.3s
=> => sha256:373065ab5fafec0e8bcfd74485dcd728f40b80800867c553e80c7cd92cd5d504 173.83MB / 173.83MB 79.8s
=> => unpacking mcr.microsoft.com/dotnet/core/sdk:2.2@sha256:b4c25c26dc73f498073fcdb4aefe167793eb3a8c7 29.0s
=> [stage-1 1/3] FROM mcr.microsoft.com/dotnet/core/aspnet:2.2@sha256:b18d512d00aff0937699014a9ba44234 54.5s
=> => resolve mcr.microsoft.com/dotnet/core/aspnet:2.2@sha256:b18d512d00aff0937699014a9ba44234692ce424c 0.0s
=> => sha256:b18d512d00aff0937699014a9ba44234692ce424c70248bedaa5a60972d77327 2.19kB / 2.19kB 0.0s
=> => sha256:8bd16a07ec8b72f4131a1747ff479048db65b48f54ae2ced1ffb1b42798c952e 1.16kB / 1.16kB 0.0s
=> => sha256:318149b63beb70e442e84e530f4472f9354e3906874c35be2ba5045b5f7a8c7a 4.06kB / 4.06kB 0.0s
=> => sha256:fc7181108d403205fda45b28dbddfa1cf07e772fa41244e44f53a341b8b1893d 22.49MB / 22.49MB 27.7s
=> => sha256:2c86df27317feb8a2806928aa12f27e6c580894e0cb844cb25aaed1420964e3d 17.69MB / 17.69MB 40.7s
=> => sha256:66dd687a6ad17486c0e3bc4e3c3690cefb7de9ad55f654e65cf657016ed4194c 2.98MB / 2.98MB 41.5s
=> => sha256:a7638d93f1fe40e3393bfb685305ce5022179c288f5b2a717978ccae465b4d7a 62.13MB / 62.13MB 48.1s
=> => unpacking mcr.microsoft.com/dotnet/core/aspnet:2.2@sha256:b18d512d00aff0937699014a9ba44234692ce42 5.2s
=> [internal] load build context 0.5s
=> => transferring context: 6.63kB 0.0s
=> [stage-1 2/3] WORKDIR /app 0.4s
=> [build-env 2/6] WORKDIR /app 0.2s
=> [build-env 3/6] COPY *.csproj ./ 0.7s
=> [build-env 4/6] RUN dotnet restore 10.7s
=> [build-env 5/6] COPY . ./ 0.4s
=> [build-env 6/6] RUN dotnet publish -c Release -o out 3.6s
=> [stage-1 3/3] COPY --from=build-env /app/out . 0.3s
=> exporting to image 83.5s
=> => exporting layers 1.0s
=> => exporting manifest sha256:6ac874b02ae2ce6c86c5d79290a04694778b2f86ff787285650c11dce4b2a37e 0.2s
=> => exporting config sha256:e9625bb3b3e783bcb6f6b7dd8b3ad4a1f090a1156be3bf237d5d4b7c8f97ebcc 0.2s
=> => pushing layers 81.3s
=> => pushing manifest for docker.io/ramblinggeekuk/dockertesting:latest 0.6s

Building for linux/arm/v7 fails:

d:\Dev\dockertesting>docker buildx build --platform linux/arm/v7 -t ramblinggeekuk/dockertesting --push .
[+] Building 112.6s (11/14)
=> [internal] load .dockerignore 0.2s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile 0.3s
=> => transferring dockerfile: 443B 0.0s
=> [internal] load metadata for mcr.microsoft.com/dotnet/core/aspnet:2.2 1.5s
=> [internal] load metadata for mcr.microsoft.com/dotnet/core/sdk:2.2 1.6s
=> [build-env 1/6] FROM mcr.microsoft.com/dotnet/core/sdk:2.2@sha256:b4c25c26dc73f498073fcdb4aefe1677 102.6s
=> => resolve mcr.microsoft.com/dotnet/core/sdk:2.2@sha256:b4c25c26dc73f498073fcdb4aefe167793eb3a8c79ef 0.0s
=> => sha256:b4c25c26dc73f498073fcdb4aefe167793eb3a8c79effa76df768006b5c345b8 2.19kB / 2.19kB 0.0s
=> => sha256:27c2d2f8b92b964c1e3f4de6c8025b1f0362a1f3436118d77b3dbfa921cfd9c9 1.80kB / 1.80kB 0.0s
=> => sha256:493f51ba80c0d5fd46ea25516eb221089190b416d5a8cc2c898517dea68519a4 4.91kB / 4.91kB 0.0s
=> => sha256:e06d849c15a63e2cf30d5c5af0d9aa87b2f7c6cbfe0e8c3e351fa4c5d4666d11 300.71MB / 300.71MB 44.8s
=> => sha256:41835060b113803e2ca628a32805c2e1178fe441b81d3e77427749fec4de06e9 9.49MB / 9.49MB 45.9s
=> => sha256:da770cd5eae6caeefe9468e318964be31036c06e729c2d983756906ede859b17 46.39MB / 46.39MB 51.2s
=> => sha256:582caf5d2e7bf5e75a96afc2254a97f6e86ad72c8815429ada61280467cc6d6f 3.92MB / 3.92MB 45.0s
=> => sha256:dd04b2ffc5474ba8df46350a273baaf841243fda01cfe05d3e5429e4ecc9bb19 144.38MB / 144.38MB 73.9s
=> => sha256:fa48f739865746afb4020d2d370105be51d23dd6ad6faa8663e1365b607d46c2 13.04MB / 13.04MB 52.3s
=> => sha256:dcb61f1d45657be196f648f75a07805b856fb8f4aebb61138c03c12e2919ee9e 42.08MB / 42.08MB 57.5s
=> => unpacking mcr.microsoft.com/dotnet/core/sdk:2.2@sha256:b4c25c26dc73f498073fcdb4aefe167793eb3a8c7 27.0s
=> [stage-1 1/3] FROM mcr.microsoft.com/dotnet/core/aspnet:2.2@sha256:b18d512d00aff0937699014a9ba44234 18.5s
=> => resolve mcr.microsoft.com/dotnet/core/aspnet:2.2@sha256:b18d512d00aff0937699014a9ba44234692ce424c 0.0s
=> => sha256:b18d512d00aff0937699014a9ba44234692ce424c70248bedaa5a60972d77327 2.19kB / 2.19kB 0.0s
=> => sha256:9ad51bcfeeb6e58218f23fb1f4c5229b39008cc245c9df1fcf8c9330c18a2acb 1.16kB / 1.16kB 0.0s
=> => sha256:8b7eead4e00d6228dbbf945848d78b43580687575eb8cba1d7a2b11129186f77 4.07kB / 4.07kB 0.0s
=> => sha256:a51e654c7ec5bf1fd3f38645d4bc8aa40f86ca7803d70031a9828ae65e3b67ae 63.47MB / 63.47MB 8.9s
=> => sha256:2eead4197fac409644fd8aaf115559d6383b0d56f1ad04d7116aaabbcbea8bed 19.28MB / 19.28MB 10.3s
=> => sha256:9358a462710e1891aec7076e8674e6f522f08a9f3624dc1f55554c2fc7cb99ea 16.30MB / 16.30MB 12.0s
=> => sha256:14144450932b5358107e71ebcd25ec878cb799ccc75ec39386e374d0dad903b3 2.88MB / 2.88MB 12.2s
=> => unpacking mcr.microsoft.com/dotnet/core/aspnet:2.2@sha256:b18d512d00aff0937699014a9ba44234692ce42 4.5s
=> [internal] load build context 0.2s
=> => transferring context: 7.04kB 0.0s
=> [stage-1 2/3] WORKDIR /app 0.2s
=> [build-env 2/6] WORKDIR /app 0.3s
=> [build-env 3/6] COPY *.csproj ./ 0.4s
=> ERROR [build-env 4/6] RUN dotnet restore 7.0s

bake doesn't pass env vars to replace values in docker-compose.yaml

An environment variable exported in the shell doesn't seem to be passed to buildx for substitution, as it was with docker-compose.

Besides #16, I don't see an option to pass args.

export FOO=bar
/usr/libexec/docker/cli-plugins/docker-buildx bake --progress plain -f docker-compose.yml

docker-compose.yml:

  image:
    image: image/test:${FOO}
    build:
      context: .
      dockerfile: buildx
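For context, the interpolation docker-compose performs on `${FOO}` is roughly the following. This is a simplified sketch (real compose also supports `${VAR:-default}` defaults and `$$` escaping), and `substitute_env` is a hypothetical helper name, not part of any library:

```python
import os
import re

def substitute_env(text: str) -> str:
    """Replace ${VAR} with the value from the environment (empty string if unset)."""
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), text)

os.environ["FOO"] = "bar"
print(substitute_env("image/test:${FOO}"))  # -> image/test:bar
```

The bug report above is that buildx's bake command performs no such pass over the compose file, so `${FOO}` reaches the builder unexpanded.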

github.com/docker/buildx v0.2.2-6-g2b03339-tp-docker 2b03339

Client: Docker Engine - Community
Version: 19.03.0-rc3
API version: 1.40
Go version: go1.12.5
Git commit: 27fcb77
Built: Thu Jun 20 02:02:44 2019
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.0-rc3
API version: 1.40 (minimum version 1.12)
Go version: go1.12.5
Git commit: 27fcb77
Built: Thu Jun 20 02:01:20 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.6
GitCommit: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc:
Version: 1.0.0-rc8
GitCommit: 425e105d5a03fabd737a126ad93d62a9eeede87f
docker-init:
Version: 0.18.0
GitCommit: fec3683

[Bug] `buildx` cannot build what `docker build` can build

Taking the example of https://github.com/jupyter/docker-stacks/tree/master/all-spark-notebook.

all-spark-notebook can be built with a normal docker build:

git clone https://github.com/jupyter/docker-stacks.git
cd docker-stacks/all-spark-notebook
docker build -t all .

However, it fails when using buildx:

git clone https://github.com/jupyter/docker-stacks.git
cd docker-stacks/all-spark-notebook
docker buildx build -t allx .

Here is the error message:

$ docker buildx build -t allx .                                                    
[+] Building 276.2s (8/9)                                                                                                             
 => [internal] load build definition from Dockerfile                                                                              0.2s
 => => transferring dockerfile: 1.42kB                                                                                            0.0s
 => [internal] load .dockerignore                                                                                                 0.3s
 => => transferring context: 66B                                                                                                  0.0s
 => [internal] load metadata for docker.io/jupyter/pyspark-notebook:latest                                                        0.0s
 => [1/6] FROM docker.io/jupyter/pyspark-notebook                                                                                 1.5s
 => => resolve docker.io/jupyter/pyspark-notebook:latest                                                                          0.0s
 => [2/6] RUN fix-permissions /usr/local/spark/R/lib                                                                              1.8s
 => [3/6] RUN apt-get update &&     apt-get install -y --no-install-recommends     fonts-dejavu     gfortran     gcc &&     rm   22.9s
 => [4/6] RUN conda install --quiet --yes     'r-base=3.5.1'     'r-irkernel=0.8*'     'r-ggplot2=3.1*'     'r-sparklyr=0.9*'   177.2s
 => ERROR [5/6] RUN pip install --no-cache-dir     https://dist.apache.org/repos/dist/release/incubator/toree/0.3.0-incubating/  72.1s
------
 > [5/6] RUN pip install --no-cache-dir     https://dist.apache.org/repos/dist/release/incubator/toree/0.3.0-incubating/toree-pip/toree
-0.3.0.tar.gz     &&     jupyter toree install --sys-prefix &&     rm -rf /home/jovyan/.local &&     fix-permissions /opt/conda &&    
fix-permissions /home/jovyan:
#8 1.456 Collecting https://dist.apache.org/repos/dist/release/incubator/toree/0.3.0-incubating/toree-pip/toree-0.3.0.tar.gz   
#8 2.195   Downloading https://dist.apache.org/repos/dist/release/incubator/toree/0.3.0-incubating/toree-pip/toree-0.3.0.tar.gz (25.9MB
)
#8 66.81 Requirement already satisfied: jupyter_core>=4.0 in /opt/conda/lib/python3.7/site-packages (from toree==0.3.0) (4.4.0)
#8 66.82 Requirement already satisfied: jupyter_client>=4.0 in /opt/conda/lib/python3.7/site-packages (from toree==0.3.0) (5.2.4)
#8 66.83 Requirement already satisfied: traitlets<5.0,>=4.0 in /opt/conda/lib/python3.7/site-packages (from toree==0.3.0) (4.3.2)
#8 66.84 Requirement already satisfied: tornado>=4.1 in /opt/conda/lib/python3.7/site-packages (from jupyter_client>=4.0->toree==0.3.0)
 (6.0.2)
#8 66.84 Requirement already satisfied: python-dateutil>=2.1 in /opt/conda/lib/python3.7/site-packages (from jupyter_client>=4.0->toree
==0.3.0) (2.8.0) 
#8 66.85 Requirement already satisfied: pyzmq>=13 in /opt/conda/lib/python3.7/site-packages (from jupyter_client>=4.0->toree==0.3.0) (1
8.0.1)  
#8 66.85 Requirement already satisfied: ipython_genutils in /opt/conda/lib/python3.7/site-packages (from traitlets<5.0,>=4.0->toree==0.
3.0) (0.2.0)   
#8 66.85 Requirement already satisfied: six in /opt/conda/lib/python3.7/site-packages (from traitlets<5.0,>=4.0->toree==0.3.0) (1.12.0)
#8 66.85 Requirement already satisfied: decorator in /opt/conda/lib/python3.7/site-packages (from traitlets<5.0,>=4.0->toree==0.3.0) (4
.4.0)   
#8 66.85 Building wheels for collected packages: toree   
#8 66.86   Building wheel for toree (setup.py): started  
#8 68.55   Building wheel for toree (setup.py): finished with status 'done'   
#8 68.55   Stored in directory: /tmp/pip-ephem-wheel-cache-b297gkpf/wheels/1c/fe/6a/1a8d5d7d0274ccd5c160f3e2ef9477f3b071b7f9bb0ce6c96a
#8 68.82 Successfully built toree   
#8 69.19 Installing collected packages: toree
#8 69.29 Successfully installed toree-0.3.0
#8 69.69 [ToreeInstall] Installing Apache Toree version 0.3.0   
#8 69.69 [ToreeInstall] 
#8 69.69 Apache Toree is an effort undergoing incubation at the Apache Software 
#8 69.69 Foundation (ASF), sponsored by the Apache Incubator PMC. 
#8 69.69
#8 69.69 Incubation is required of all newly accepted projects until a further review
#8 69.69 indicates that the infrastructure, communications, and decision making process
#8 69.69 have stabilized in a manner consistent with other successful ASF projects.  
#8 69.69
#8 69.69 While incubation status is not necessarily a reflection of the completeness 
#8 69.69 or stability of the code, it does indicate that the project has yet to be   
#8 69.69 fully endorsed by the ASF. 
#8 69.69 [ToreeInstall] Creating kernel Scala
#8 69.69 Traceback (most recent call last):
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/traitlets/traitlets.py", line 528, in get  
#8 69.69     value = obj._trait_values[self.name] 
#8 69.69 KeyError: 'kernel_spec_manager'   
#8 69.69
#8 69.69 During handling of the above exception, another exception occurred:  
#8 69.69
#8 69.69 Traceback (most recent call last):
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/traitlets/traitlets.py", line 528, in get  
#8 69.69     value = obj._trait_values[self.name] 
#8 69.69 KeyError: 'data_dir'
#8 69.69
#8 69.69 During handling of the above exception, another exception occurred:  
#8 69.69
#8 69.69 Traceback (most recent call last):
#8 69.69   File "/opt/conda/bin/jupyter-toree", line 10, in <module>   
#8 69.69     sys.exit(main())
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/toree/toreeapp.py", line 165, in main      
#8 69.69     ToreeApp.launch_instance()    
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/traitlets/config/application.py", line 658, in launch_instance  
#8 69.69     app.start()     
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/toree/toreeapp.py", line 162, in start     
#8 69.69     return self.subapp.start()    
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/toree/toreeapp.py", line 125, in start     
#8 69.69     install_dir = self.kernel_spec_manager.install_kernel_spec(self.sourcedir,     
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/traitlets/traitlets.py", line 556, in __get__     
#8 69.69     return self.get(obj, cls)     
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/traitlets/traitlets.py", line 535, in get  
#8 69.69     value = self._validate(obj, dynamic_default())     
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/jupyter_client/kernelspecapp.py", line 87, in _kernel_spec_manager_default    
#8 69.69     return KernelSpecManager(data_dir=self.data_dir)   
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/traitlets/traitlets.py", line 556, in __get__     
#8 69.69     return self.get(obj, cls)     
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/traitlets/traitlets.py", line 535, in get  
#8 69.69     value = self._validate(obj, dynamic_default())     
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/jupyter_core/application.py", line 93, in _data_dir_default     
#8 69.69     ensure_dir_exists(d, mode=0o700)     
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/jupyter_core/utils/__init__.py", line 13, in ensure_dir_exists  
#8 69.69     os.makedirs(path, mode=mode)  
#8 69.69   File "/opt/conda/lib/python3.7/os.py", line 211, in makedirs
#8 69.69     makedirs(head, exist_ok=exist_ok)    
#8 69.69   File "/opt/conda/lib/python3.7/os.py", line 211, in makedirs
#8 69.69     makedirs(head, exist_ok=exist_ok)    
#8 69.69   File "/opt/conda/lib/python3.7/os.py", line 221, in makedirs
#8 69.69     mkdir(name, mode)      
#8 69.69 PermissionError: [Errno 13] Permission denied: '/home/jovyan/.local' 
------  
failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c pip install --no-cache-dir     https://dist.apach
e.org/repos/dist/release/incubator/toree/0.3.0-incubating/toree-pip/toree-0.3.0.tar.gz     &&     jupyter toree install --sys-prefix &&
     rm -rf /home/$NB_USER/.local &&     fix-permissions $CONDA_DIR &&     fix-permissions /home/$NB_USER]: exit code: 1

Here is the docker version:

$ docker version
Client:
 Version:           19.03.0-beta3
 API version:       1.40
 Go version:        go1.12.4
 Git commit:        c55e026
 Built:             Thu Apr 25 02:58:59 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.0-beta3
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.4
  Git commit:       c55e026
  Built:            Thu Apr 25 02:57:32 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
$ docker buildx version
github.com/docker/buildx  .m

No declared license.

I just wrote up a patch, but there's no license anywhere. The contribution guidelines are also very unclear beyond running the test suite; do I need to use the DCO sign-off (-s) stuff?

ARG support in COPY

Description

Steps to reproduce the issue:

  1. docker build --build-arg IMAGE_BUILDER_NAME=node-builder-ms-xxxx --file Dockerfile/Dockerfile .

Describe the results you received:
invalid from flag value ${IMAGE_BUILDER_NAME}: invalid reference format: repository name must be lowercase

Describe the results you expected:
Successfully built fe978fd298a

Output of docker version:

Client:
 Version:	18.03.0-ce
 API version:	1.37
 Go version:	go1.9.4
 Git commit:	0520e24
 Built:	Wed Mar 21 23:06:22 2018
 OS/Arch:	darwin/amd64
 Experimental:	false
 Orchestrator:	swarm

Server:
 Engine:
  Version:	18.03.0-ce
  API version:	1.37 (minimum version 1.12)
  Go version:	go1.9.4
  Git commit:	0520e24
  Built:	Wed Mar 21 23:14:32 2018
  OS/Arch:	linux/amd64
  Experimental:	true

Output of docker info:

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 38
Server Version: 18.03.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfd04396dc68220d1cecbe686a6cc3aa5ce3667c
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.87-linuxkit-aufs
Operating System: Docker for Mac
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.855GiB
Name: linuxkit-025000000001
ID: MWH6:QH5X:TT3F:ITI2:2C22:XQN3:2E3H:M4GK:HAZH:
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
HTTP Proxy: docker.for.mac.http.internal:3128
HTTPS Proxy: docker.for.mac.http.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):

FROM node:8-alpine
ARG IMAGE_BUILDER_NAME
COPY --from=${IMAGE_BUILDER_NAME} /src/dist /app/dist
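A common workaround for this class of error (a sketch, not verified against this exact setup) is to consume the build arg in a FROM line, where args declared before the first FROM are allowed, and give that stage an alias that COPY --from does accept:

```dockerfile
# Declare the arg before the first FROM so FROM lines can use it.
ARG IMAGE_BUILDER_NAME
# Alias the external builder image as a named stage...
FROM ${IMAGE_BUILDER_NAME} AS builder

FROM node:8-alpine
# ...and refer to the stage name instead of the raw variable.
COPY --from=builder /src/dist /app/dist
```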

Q: --cache-to for multi-stage build

After building a multi-stage Dockerfile with the --cache-to option, I noticed that only the final stage's layers were saved to the --cache-to destination. How can I also cache the intermediate build-stage layers? Do I have to use --target to build and cache each stage one by one?
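For reference, buildx's cache exporters take a mode option, and mode=max is intended to export layers for all intermediate stages rather than only the final result (the default, mode=min, exports only layers for the exported image). A sketch, assuming a registry ref you can push to:

```shell
# Export cache for all build stages, not just the final one (mode=max).
docker buildx build \
  --cache-to type=registry,ref=user/app:cache,mode=max \
  --cache-from type=registry,ref=user/app:cache \
  -t user/app:latest .
```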

Migrate buildx to the public Jenkins

Description: We recently created a new Jenkins instance at ci.docker.com/public. It may be worthwhile to migrate buildx from Travis to Jenkins by creating a new Jenkinsfile using the declarative syntax.

[bug] Error: failed to solve: rpc error: code = Unknown desc = content digest sha256:...: not found

Summary

This error occurs randomly and causes a build to fail that would otherwise have succeeded. On re-running the exact same build, the error goes away.

Environment

docker-buildx version: standalone binary downloaded from https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-amd64 (sha256 5c31e171440213bb7fffd023611b1aaa7e0498162c54cb708c2a8abe3679717e)

Docker engine version:

Server: Docker Engine - Community
 Engine:
  Version:          18.09.6
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       481bc77
  Built:            Sat May  4 01:59:36 2019
  OS/Arch:          linux/amd64
  Experimental:     false

How to reproduce

Unfortunately, I have not found a way to reproduce it reliably. However, it does not seem to be related to the contents of the Dockerfile, since I have reproduced it (accidentally) with two completely unrelated Dockerfiles.

Based on my understanding of the error message, it appears that --load is required to reproduce.

Here is the output of the most recent occurrence:

+ /home/sh/bin/docker-buildx build --load --build-arg user=sh --build-arg uid=1002 --build-arg docker_guid=999 -t dev -
[+] Building 11.1s (54/54) FINISHED
 => [internal] load .dockerignore                                                               0.0s
 => => transferring context: 2B                                                                 0.0s
 => [internal] load build definition from Dockerfile                                            0.1s
 => => transferring dockerfile: 2.30kB                                                          0.0s
 => [internal] load metadata for docker.io/library/alpine:latest                                0.4s
 => [1/49] FROM docker.io/library/alpine@sha256:ca1c944a4f8486a153024d9965aafbe24f5723c1d5c02f  0.0s
 => => resolve docker.io/library/alpine@sha256:ca1c944a4f8486a153024d9965aafbe24f5723c1d5c02f4  0.0s
 => CACHED [2/49] RUN apk update                                                                0.0s
 => CACHED [3/49] RUN apk add openssh                                                           0.0s
 => CACHED [4/49] RUN apk add bash                                                              0.0s
 => CACHED [5/49] RUN apk add bind-tools                                                        0.0s
 => CACHED [6/49] RUN apk add curl                                                              0.0s
 => CACHED [7/49] RUN apk add docker                                                            0.0s
 => CACHED [8/49] RUN apk add g++                                                               0.0s
 => CACHED [9/49] RUN apk add gcc                                                               0.0s
 => CACHED [10/49] RUN apk add git                                                              0.0s
 => CACHED [11/49] RUN apk add git-perl                                                         0.0s
 => CACHED [12/49] RUN apk add make                                                             0.0s
 => CACHED [13/49] RUN apk add python                                                           0.0s
 => CACHED [14/49] RUN apk add openssl-dev                                                      0.0s
 => CACHED [15/49] RUN apk add vim                                                              0.0s
 => CACHED [16/49] RUN apk add py-pip                                                           0.0s
 => CACHED [17/49] RUN apk add file                                                             0.0s
 => CACHED [18/49] RUN apk add groff                                                            0.0s
 => CACHED [19/49] RUN apk add jq                                                               0.0s
 => CACHED [20/49] RUN apk add man                                                              0.0s
 => CACHED [21/49] RUN cd /tmp && git clone https://github.com/AGWA/git-crypt && cd git-crypt   0.0s
 => CACHED [22/49] RUN apk add go                                                               0.0s
 => CACHED [23/49] RUN apk add coreutils                                                        0.0s
 => CACHED [24/49] RUN apk add python2-dev                                                      0.0s
 => CACHED [25/49] RUN apk add python3-dev                                                      0.0s
 => CACHED [26/49] RUN apk add tar                                                              0.0s
 => CACHED [27/49] RUN apk add vim                                                              0.0s
 => CACHED [28/49] RUN apk add rsync                                                            0.0s
 => CACHED [29/49] RUN apk add less                                                             0.0s
 => CACHED [30/49] RUN pip install awscli                                                       0.0s
 => CACHED [31/49] RUN curl --silent --location "https://github.com/weaveworks/eksctl/releases  0.0s
 => CACHED [32/49] RUN curl https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-  0.0s
 => CACHED [33/49] RUN curl -L -o /usr/local/bin/kubectl https://storage.googleapis.com/kubern  0.0s
 => CACHED [34/49] RUN curl -L -o /usr/local/bin/kustomize  https://github.com/kubernetes-sigs  0.0s
 => CACHED [35/49] RUN apk add ruby                                                             0.0s
 => CACHED [36/49] RUN apk add ruby-dev                                                         0.0s
 => CACHED [37/49] RUN gem install bigdecimal --no-ri --no-rdoc                                 0.0s
 => CACHED [38/49] RUN gem install kubernetes-deploy --no-ri --no-rdoc                          0.0s
 => CACHED [39/49] RUN apk add npm                                                              0.0s
 => CACHED [40/49] RUN npm config set unsafe-perm true                                          0.0s
 => CACHED [41/49] RUN npm install -g yarn                                                      0.0s
 => CACHED [42/49] RUN npm install -g netlify-cli                                               0.0s
 => CACHED [43/49] RUN apk add libffi-dev                                                       0.0s
 => CACHED [44/49] RUN pip install docker-compose                                               0.0s
 => CACHED [45/49] RUN apk add shadow sudo                                                      0.0s
 => CACHED [46/49] RUN echo '%wheel ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers                     0.0s
 => CACHED [47/49] RUN useradd -G docker,wheel -m -s /bin/bash -u 1002 sh                       0.0s
 => CACHED [48/49] RUN groupmod -o -g 999 docker                                                0.0s
 => CACHED [49/49] WORKDIR /home/sh                                                             0.0s
 => ERROR exporting to oci image format                                                        10.5s
 => => exporting layers                                                                         0.6s
 => => exporting manifest sha256:97c8dedb84183d0d87b6c930b12e1b9396bb81fd9c6587fe2fbb9ae092d30  0.0s
 => => exporting config sha256:ef15b3f6a374fd494a0a586f7b33ac9435953460776e268b20f2eee5f14def6  0.0s
 => => sending tarball                                                                          9.6s
 => importing to docker                                                                         0.0s
------
 > exporting to oci image format:
------
Error: failed to solve: rpc error: code = Unknown desc = content digest sha256:ef15b3f6a374fd494a0a586f7b33ac9435953460776e268b20f2eee5f14def65: not found
Usage:
  /home/sh/bin/docker-buildx build [OPTIONS] PATH | URL | - [flags]

Aliases:
  build, b

Flags:
      --add-host strings         Add a custom host-to-IP mapping (host:ip)
      --build-arg stringArray    Set build-time variables
      --cache-from stringArray   External cache sources (eg. user/app:cache, type=local,src=path/to/dir)
      --cache-to stringArray     Cache export destinations (eg. user/app:cache, type=local,dest=path/to/dir)
  -f, --file string              Name of the Dockerfile (Default is 'PATH/Dockerfile')
  -h, --help                     help for build
      --iidfile string           Write the image ID to the file
      --label stringArray        Set metadata for an image
      --load                     Shorthand for --output=type=docker
      --network string           Set the networking mode for the RUN instructions during build (default "default")
      --no-cache                 Do not use cache when building the image
  -o, --output stringArray       Output destination (format: type=local,dest=path)
      --platform stringArray     Set target platform for build
      --progress string          Set type of progress output (auto, plain, tty). Use plain to show container output (default "auto")
      --pull                     Always attempt to pull a newer version of the image
      --push                     Shorthand for --output=type=registry
      --secret stringArray       Secret file to expose to the build: id=mysecret,src=/local/secret
      --ssh stringArray          SSH agent socket or keys to expose to the build (format: default|<id>[=<socket>|<key>[,<key>]])
  -t, --tag stringArray          Name and optionally a tag in the 'name:tag' format
      --target string            Set the target build stage to build.

Tag multiple stages and/or build multiple targets with one invocation of buildkit

I wanted to take another look at moby/buildkit#609 now that we have docker buildx bake.

I am trying to determine whether bake was ultimately built to solve this issue, or whether its goals are slightly different.

If I use a bake file like this:

group "default" {
    targets = ["prj1", "prj2"]
}

target "prj1" {
    dockerfile = "Dockerfile"
    target = "prj1"
    tags = ["prj1"]
}

target "prj2" {
    inherits = ["prj1"]
    target = "prj2"
    tags = ["prj2"]
}

When I then execute docker buildx bake -f ./docker-bake.hcl, only the prj1 image is exported. prj2 still builds; it just doesn't get exported.

Also, could a bake file be used to effectively stitch multiple Dockerfiles together?

Our monorepo has 20+ projects, all with their own Dockerfile "segments". I say this because if you tried to build one of these files by itself it would probably fail, as it depends on parent build stages defined in other files.

I have written a script that concatenates all the Dockerfiles into a single Dockerfile in the correct order, so that build stages refer to each other correctly, and then I execute a BuildKit build on this giant Dockerfile, using the approach outlined in moby/buildkit#609.

It would be great if I could define normal standalone Dockerfiles and then use a bake file to do the stitching / build in correct order, consider the following contrived example:

node-modules.dockerfile

FROM node
RUN npm install

prj1.dockerfile

FROM node-modules
RUN npm build

prj2.dockerfile

FROM node-modules
RUN npm build

docker-bake.hcl

group "default" {
    targets = ["prj1", "prj2"]
}

target "node-modules" {
    dockerfile = "node-modules.dockerfile"
    tags = ["node-modules"]
}

target "prj1" {
    depends = ["node-modules"]
    dockerfile = "prj1.dockerfile"
    tags = ["prj1"]
}

target "prj2" {
    depends = ["node-modules"]
    dockerfile = "prj2.dockerfile"
    tags = ["prj2"]
}

Perhaps bake can already do this, but I haven't figured it out yet?

Building for ARM often causes errors

When trying to compile a Go project, I am seeing the following:

 > [linux/arm64 bldr 4/4] RUN go build -o /bldr .:
#28 0.608 go: finding github.com/spf13/cobra v0.0.4
#28 0.620 go: finding github.com/hashicorp/go-multierror v1.0.0
#28 1.579 go: finding golang.org/x/sys v0.0.0-20190602015325-4c4f7f33c9ed
#28 1.586 go: finding golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522
#28 1.631 go: finding gopkg.in/yaml.v2 v2.2.2
#28 2.797 go: finding github.com/hashicorp/errwrap v1.0.0
#28 3.392 go: finding github.com/cpuguy83/go-md2man v1.0.10
#28 3.421 go: finding github.com/inconshreveable/mousetrap v1.0.0
#28 3.455 go: finding github.com/spf13/pflag v1.0.3
#28 3.466 go: finding github.com/mitchellh/go-homedir v1.1.0
#28 3.513 go: finding github.com/BurntSushi/toml v0.3.1
#28 3.552 go: finding github.com/spf13/viper v1.3.2
#28 3.679 fatal error: schedule: holding locks
#28 3.700 SIGSEGV: segmentation violation
#28 3.700 PC=0x4000283138 m=0 sigcode=1
#28 3.704 
#28 3.707 goroutine 82 [syscall]:
#28 3.709 runtime.notetsleepg(0x4000bb3f60, 0x246c4acc8, 0x14000022400)
#28 3.709       /usr/lib/go/src/runtime/lock_futex.go:227 +0x28 fp=0x140003bc750 sp=0x140003bc720 pc=0x400023dd70
#28 3.711 runtime.timerproc(0x4000bb3f40)
#28 3.711       /usr/lib/go/src/runtime/time.go:288 +0x2a4 fp=0x140003bc7d0 sp=0x140003bc750 pc=0x400027817c
#28 3.712 runtime.goexit()
#28 3.712       /usr/lib/go/src/runtime/asm_arm64.s:1114 +0x4 fp=0x140003bc7d0 sp=0x140003bc7d0 pc=0x400028588c
#28 3.715 created by runtime.(*timersBucket).addtimerLocked
#28 3.715       /usr/lib/go/src/runtime/time.go:170 +0xf4

On another run:

 > [linux/arm64 bldr 4/4] RUN go build -o /bldr .:
#20 0.728 go: finding github.com/hashicorp/go-multierror v1.0.0
#20 0.744 go: finding github.com/spf13/cobra v0.0.4
#20 1.509 go: finding golang.org/x/sys v0.0.0-20190602015325-4c4f7f33c9ed
#20 1.523 go: finding golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522
#20 2.127 go: finding gopkg.in/yaml.v2 v2.2.2
#20 5.053 go: finding github.com/hashicorp/errwrap v1.0.0
#20 5.334 go: finding github.com/cpuguy83/go-md2man v1.0.10
#20 5.368 go: finding github.com/inconshreveable/mousetrap v1.0.0
#20 5.380 go: finding github.com/mitchellh/go-homedir v1.1.0
#20 5.386 go: finding github.com/spf13/viper v1.3.2
#20 5.393 go: finding github.com/BurntSushi/toml v0.3.1
#20 5.427 go: finding github.com/spf13/pflag v1.0.3
#20 7.801 go: finding gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405
#20 8.046 go: finding github.com/russross/blackfriday v1.5.2
#20 8.250 go: finding golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a
#20 8.443 go: finding github.com/spf13/cast v1.3.0
#20 8.477 go: finding github.com/hashicorp/hcl v1.0.0
#20 8.490 go: finding github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77
#20 8.493 go: finding github.com/magiconair/properties v1.8.0
#20 8.508 go: finding github.com/pelletier/go-toml v1.2.0
#20 8.556 go: finding github.com/mitchellh/mapstructure v1.1.2
#20 8.679 go: finding golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9
#20 13.12 go: finding golang.org/x/text v0.3.0
#20 13.37 go: finding github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8
#20 13.40 go: finding github.com/stretchr/testify v1.2.2
#20 13.81 go: finding github.com/fsnotify/fsnotify v1.4.7
#20 14.03 go: finding github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6
#20 14.28 go: finding github.com/spf13/afero v1.1.2
#20 14.43 go: finding github.com/pmezard/go-difflib v1.0.0
#20 14.48 fatal error: exitsyscall: syscall frame is no longer valid
#20 14.48 
#20 14.48 goroutine 6 [syscall]:
#20 14.49 syscall.Syscall(0x3f, 0x2a, 0x1400030e8f0, 0x8, 0x0, 0x0, 0x0)
#20 14.49       /usr/lib/go/src/syscall/asm_linux_arm64.s:9 +0x8 fp=0x1400030e810 sp=0x1400030e800 pc=0x40002a3ce0
#20 14.49 syscall.readlen(0x2a, 0x1400030e8f0, 0x8, 0x7, 0x140001e1180, 0x13)
#20 14.49       /usr/lib/go/src/syscall/zsyscall_linux_arm64.go:1026 +0x40 fp=0x1400030e860 sp=0x1400030e810 pc=0x40002a1fb8
#20 14.49 syscall.forkExec(0x140000a6390, 0xc, 0x1400013a480, 0x6, 0x6, 0x1400030ea48, 0x20, 0x0, 0x140000a84a0)
#20 14.49       /usr/lib/go/src/syscall/exec_unix.go:203 +0x284 fp=0x1400030e970 sp=0x1400030e860 pc=0x400029cdec
#20 14.49 runtime: unexpected return pc for syscall.StartProcess called from 0x2d
#20 14.49 stack: frame={sp:0x1400030e970, fp:0x1400030e9c0} stack=[0x1400030e000,0x14000310000)
#20 14.49 000001400030e870:  000001400030e8f0  0000000000000008 
#20 14.49 000001400030e880:  0000000000000007  00000140001e1180 

And on another run:

 > [linux/arm/v7 bldr 4/4] RUN go build -o /bldr .:
#12 0.730 fatal error: fatal error: schedule: holding locks
#12 0.735 
#12 0.735 fatal error: unexpected signal during runtime execution
#12 0.735 panic during panic
#12 0.736 [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x0]
#12 0.738 
#12 0.738 runtime stack:
#12 0.740 runtime: unexpected return pc for runtime/internal/atomic.Xchg called from 0x1
#12 0.740 stack: frame={sp:0xeea5ec60, fp:0xeea5ec78} stack=[0xeea4b10c,0xeea5ed0c)
#12 0.742 eea5ebe0:  00000000  00000001  ffaf9b52  ff6f97c0 <runtime.fatalthrow+72> 
#12 0.746 eea5ebf0:  00472700  ff6f9608 <runtime.throw+96>  eea5ec24  fffebea0 
#12 0.748 eea5ec00:  eea5ec24  ff6f9608 <runtime.throw+96>  00472700  ff6f9608 <runtime.throw+96> 
#12 0.750 eea5ec10:  eea5ec14  ff724158 <runtime.fatalthrow.func1+0>  00472700  ff6f9608 <runtime.throw+96> 
#12 0.752 eea5ec20:  eea5ec24  ff710558 <runtime.sigpanic+588>  eea5ec2c  ff7240e8 <runtime.throw.func1+0> 
#12 0.754 eea5ec30:  ffb00a98  0000002a  ff6cc9a4 <runtime/internal/atomic.Xchg+36>  ffb00a98 
#12 0.756 eea5ec40:  0000002a  ff724140 <runtime.throw.func1+88>  fffebea0  00000000 
#12 0.757 eea5ec50:  00472700  ff6f9944 <runtime.startpanic_m+216>  ff6f9944 <runtime.startpanic_m+216>  ff6cc9a4 <runtime/internal/atomic.Xchg+36> 
#12 0.758 eea5ec60: <00000001  00000000  ff724184 <runtime.fatalthrow.func1+44>  fffebee8 
#12 0.760 eea5ec70:  00000001  00000001 >00472700  ff6f97c0 <runtime.fatalthrow+72> 
#12 0.762 eea5ec80:  ffae9b65  00000001  ffae9b65  00000001 
#12 0.763 eea5ec90:  eea5ecb4  ff6f9608 <runtime.throw+96>  00472700  ff6f9608 <runtime.throw+96> 
#12 0.766 eea5eca0:  eea5eca4  ff724158 <runtime.fatalthrow.func1+0>  00472700  ff6f9608 <runtime.throw+96> 
#12 0.769 eea5ecb0:  eea5ecb4  ff70199c <runtime.schedule+796>  00000001  00000002 
#12 0.772 eea5ecc0:  ffaf54b3  00000017  ff701aa8 <runtime.park_m+144>  ffaf54b3 
#12 0.773 eea5ecd0:  00000017  005122f0  00000000  00000001 
#12 0.774 eea5ece0:  ff701ad0 <runtime.park_m+184>  005122f0  ff7256a8 <runtime.mcall+96>  0045f6c0 
#12 0.776 eea5ecf0:  005122f0  00000001 
#12 0.777 runtime/internal/atomic.Xchg(0xff6f97c0, 0xffae9b65, 0x1)
#12 0.778       /usr/lib/go/src/runtime/internal/atomic/atomic_arm.go:60 +0x24

and another run:

 => [linux/arm64 bldr 4/4] RUN go build -o /bldr .                                                                                                                                                                                                                                                             347.1s
 => => # go: finding gopkg.in/yaml.v2 v2.2.2                                                                                                                                                                                                                                                                         
 => => # go: finding github.com/hashicorp/errwrap v1.0.0                                                                                                                                                                                                                                                             
 => => # go: finding github.com/spf13/viper v1.3.2                                                                                                                                                                                                                                                                   
 => => # go: finding github.com/cpuguy83/go-md2man v1.0.10                                                                                                                                                                                                                                                           
 => => # go: finding github.com/mitchellh/go-homedir v1.1.0                                                                                                                                                                                                                                                          
 => => # qemu: uncaught target signal 11 (Segmentation fault) - core dumped                                                                                                                                                                                                                                          
 => CACHED [linux/arm/v7 stage-1 3/5] RUN [ "ln", "-svf", "/bin/bash", "/bin/sh" ]                                                                                                                                                                                                                               0.0s
 => [linux/arm/v7 bldr 4/4] RUN go build -o /bldr .                                                                                                                                                                                                                                                            346.9s
 => => # go: finding github.com/hashicorp/go-multierror v1.0.0                                                                                                                                                                                                                                                       
 => => # go: finding golang.org/x/sys v0.0.0-20190602015325-4c4f7f33c9ed                                                                                                                                                                                                                                             
 => => # go: finding golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522                                                                                                                                                                                                                                         
 => => # go: finding gopkg.in/yaml.v2 v2.2.2                                                                                                                                                                                                                                                                         
 => => # go: finding github.com/hashicorp/errwrap v1.0.0                                                                                                                                                                                                                                                             
 => => # qemu: uncaught target signal 4 (Illegal instruction) - core dumped  

The Dockerfile for reference:

FROM --platform=$TARGETPLATFORM alpine:3.9 as bldr
RUN apk add build-base git go
COPY . .
ENV CGO_ENABLED 0
RUN go build -o /bldr .

FROM --platform=$TARGETPLATFORM alpine:3.9
RUN apk add --no-cache bash
RUN [ "ln", "-svf", "/bin/bash", "/bin/sh" ]
COPY --from=bldr /bldr /bin/bldr
WORKDIR /pkg
ENTRYPOINT ["/bin/bldr"]
CMD ["build"]
ONBUILD COPY . .
ONBUILD RUN bldr build

Support for starting buildkit with entitlements

As the title suggests, I'd like to run a build with docker buildx build --network=host, but I'm seeing:

failed to solve: rpc error: code = Unknown desc = network.host is not allowed

It looks like we need to set the proper entitlement, but docker buildx create does not expose that option.
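A sketch of what this could look like, assuming buildx create grew a way to pass flags through to buildkitd (flag names here are illustrative, not confirmed against any released version):

```shell
# Hypothetical: start the builder's buildkitd with the entitlement enabled,
# then opt in to it at build time.
docker buildx create --name host-net \
  --buildkitd-flags '--allow-insecure-entitlement network.host'
docker buildx use host-net
docker buildx build --allow network.host --network=host .
```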

Build cannot export to registry on localhost

While setting up a local demo, I found that buildx is unable to access a registry server running on localhost. A curl command confirms that my registry server is listening on port 5000. This is with the docker-container driver, so I suspect the BuildKit container is not using the host network namespace and therefore cannot see services running on the host's localhost.

$ docker buildx build -f Dockerfile.buildx --target debug --platform linux/amd64,linux/arm64 -t localhost:5000/bmitch-public/golang-hello:buildx1 --output type=registry .
[+] Building 3.1s (24/24) FINISHED
 => [internal] load build definition from Dockerfile.buildx                                                                       0.4s
 => => transferring dockerfile: 39B                                                                                               0.0s
 => [internal] load .dockerignore                                                                                                 0.5s
 => => transferring context: 34B                                                                                                  0.0s
 => [linux/amd64 internal] load metadata for docker.io/library/debian:latest                                                      0.7s
 => [linux/amd64 internal] load metadata for docker.io/tonistiigi/xx:golang                                                       0.7s
 => [linux/amd64 internal] load metadata for docker.io/library/golang:1.12-alpine                                                 0.8s
 => [internal] load build context                                                                                                 0.2s
 => => transferring context: 105B                                                                                                 0.0s
 => [linux/amd64 debug 1/2] FROM docker.io/library/debian@sha256:118cf8f3557e1ea766c02f36f05f6ac3e63628427ea8965fb861be904ec35a6  0.0s
 => => resolve docker.io/library/debian@sha256:118cf8f3557e1ea766c02f36f05f6ac3e63628427ea8965fb861be904ec35a6f                   0.0s
 => [linux/amd64 xgo 1/1] FROM docker.io/tonistiigi/xx:golang@sha256:4703827f56e3964eda6ca07be85046d1dd533eb0ed464e549266c10a4cd  0.0s
 => => resolve docker.io/tonistiigi/xx:golang@sha256:4703827f56e3964eda6ca07be85046d1dd533eb0ed464e549266c10a4cd8a29f             0.0s
 => [linux/amd64 dev 1/6] FROM docker.io/library/golang:1.12-alpine@sha256:cee6f4b901543e8e3f20da3a4f7caac6ea643fd5a46201c3c2387  0.0s
 => => resolve docker.io/library/golang:1.12-alpine@sha256:cee6f4b901543e8e3f20da3a4f7caac6ea643fd5a46201c3c2387183a332d989       0.0s
 => CACHED [linux/amd64 dev 2/6] COPY --from=xgo / /                                                                              0.0s
 => CACHED [linux/amd64 dev 3/6] RUN apk add --no-cache git ca-certificates                                                       0.0s
 => CACHED [linux/amd64 dev 4/6] RUN adduser -D appuser                                                                           0.0s
 => CACHED [linux/amd64 dev 5/6] WORKDIR /src                                                                                     0.0s
 => CACHED [linux/amd64 dev 6/6] COPY . /src/                                                                                     0.0s
 => CACHED [linux/amd64 build 1/1] RUN CGO_ENABLED=0 go build -ldflags '-w -extldflags -static' -o app .                          0.0s
 => CACHED [linux/amd64 debug 2/2] COPY --from=build /src/app /app                                                                0.0s
 => CACHED [linux/amd64 dev 2/6] COPY --from=xgo / /                                                                              0.0s
 => CACHED [linux/amd64 dev 3/6] RUN apk add --no-cache git ca-certificates                                                       0.0s
 => CACHED [linux/amd64 dev 4/6] RUN adduser -D appuser                                                                           0.0s
 => CACHED [linux/amd64 dev 5/6] WORKDIR /src                                                                                     0.0s
 => CACHED [linux/amd64 dev 6/6] COPY . /src/                                                                                     0.0s
 => CACHED [linux/amd64 build 1/1] RUN CGO_ENABLED=0 go build -ldflags '-w -extldflags -static' -o app .                          0.0s
 => CACHED [linux/amd64 debug 2/2] COPY --from=build /src/app /app                                                                0.0s
 => ERROR exporting to image                                                                                                      1.4s
 => => exporting layers                                                                                                           0.1s
 => => exporting manifest sha256:fb7fb1aacd96dcd6c9a6d2654fb2a9cf7692c3ebfd4d15bd1dd397d38713a589                                 0.2s
 => => exporting config sha256:8c443cd193baf5e58914a1ad50d8311e25f7d9ac86772a6ab2df99ed7f4ef6f3                                   0.2s
 => => exporting manifest sha256:d63ec5c6531662c1185b1cc90755573a1bbc1b4754998181847598433fe30e5e                                 0.2s
 => => exporting config sha256:3838e43619611f78eedbc6604fedc3ab134f2beb4225d45d10bb37698603189e                                   0.2s
 => => exporting manifest list sha256:4c8694f90dda751d32ccbd9e48bdeba1042467f07bd0193378e254141e7464ec                            0.2s
 => => pushing layers                                                                                                             0.0s
------
 > exporting to image:
------
failed to solve: rpc error: code = Unknown desc = failed to do request: Head http://localhost:5000/v2/bmitch-public/golang-hello/blobs/sha256:8c443cd193baf5e58914a1ad50d8311e25f7d9ac86772a6ab2df99ed7f4ef6f3: dial tcp 127.0.0.1:5000: connect: connection refused

$ curl -sSLk https://localhost:5000/v2/
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}

$ docker buildx ls
NAME/NODE     DRIVER/ENDPOINT             STATUS  PLATFORMS
new *         docker-container
  new0        unix:///var/run/docker.sock running linux/amd64
default       docker
  default     default                     running linux/amd64

Note: this is a low-priority issue for me; I'd much rather see #80 solved.
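For what it's worth, a possible workaround (assuming the docker-container driver accepts a network driver option) would be to run the BuildKit container in the host network namespace so it can reach services on the host's localhost:

```shell
# Hypothetical workaround: put the BuildKit container on the host network
# so localhost:5000 resolves to the host's registry.
docker buildx create --name host-net-builder \
  --driver docker-container --driver-opt network=host
docker buildx use host-net-builder
```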

[Feature Request] Remove duplicate context transfers

As an example use case, you may have a single intermediary target in your Dockerfile that acts as a cache for all downstream images: you build all your packages in the cache target and then assemble them into the different binaries for the different services that make up your app.

Today, if you have n targets that are all downstream of a single target, the context is transferred n times in parallel, even when that upstream target is the only one that actually uses the context.

Example

  1. Make a large context with
dd if=/dev/zero of=largefile count=262400 bs=1024
  2. Make a Dockerfile with one upstream image that actually uses the context, and three downstream images.
FROM scratch AS upstream

COPY largefile /largefile


FROM upstream AS downstream0

FROM upstream AS downstream1

FROM upstream AS downstream2
  3. Make a bake HCL file with the targets
group "default" {
  targets = ["downstream0", "downstream1", "downstream2"]
}

target "downstream0" {
  target = "downstream0"
  tags = ["docker.io/rabrams/downstream0"]
}

target "downstream1" {
  target = "downstream1"
  tags = ["docker.io/rabrams/downstream1"]
}

target "downstream2" {
  target = "downstream2"
  tags = ["docker.io/rabrams/downstream2"]
}
  4. Run docker buildx bake and observe the duplicate context transfers
$ docker buildx bake
[+] Building 1.1s (6/12)
 => [downstream1 internal] load .dockerignore                              0.2s
 => => transferring context: 2B                                            0.1s
 => [downstream2 internal] load build context                              0.9s
 => => transferring context: 327.77kB                                      0.9s
 => [downstream0 internal] load build context                              0.9s
 => => transferring context: 360.55kB                                      0.9s
 => [downstream1 internal] load build context                              0.9s
 => => transferring context: 229.45kB                                      0.9s

For large contexts and many downstream images this can be a problem, because your uplink bandwidth is divided among context transfers that are all doing the same thing.

Supporting target platform: "ppc64le" and "s390x"

Hi,
I got the error messages below when trying to build "ppc64le" and "s390x" images on my x86_64 Fedora machine. Is there a way to build "ppc64le" and "s390x" images?

$ docker buildx build --rm -t my-fedora:aarch64 --platform linux/arm64 .

=> OK

Running the aarch64 image works via QEMU and binfmt_misc:

$ uname -m
x86_64

$ docker run --rm -t my-fedora:aarch64 uname -m
aarch64

But the builds below for targets "ppc64le" and "s390x" fail.

$ docker buildx build --rm -t my-fedora:ppc64le --platform linux/ppc64le .
...
failed to solve: rpc error: code = Unknown desc = runtime execution on platform linux/ppc64le not supported

$ docker buildx build --rm -t my-fedora:s390x --platform linux/s390x .
...
failed to solve: rpc error: code = Unknown desc = runtime execution on platform linux/s390x not supported

I used the Dockerfile below.

$ cat Dockerfile 
# https://hub.docker.com/_/fedora
FROM fedora
RUN uname -m

My environment is as follows.

$ uname -m
x86_64

$ cat /etc/fedora-release 
Fedora release 30 (Thirty)

$ docker --version
Docker version 19.03.0, build aeac9490dc

$ docker buildx version
github.com/docker/buildx v0.2.2-10-g3f18b65 3f18b659a09804c738226dbf6bacbcae54afd7c6
$ docker buildx inspect
Name:   default
Driver: docker

Nodes:
Name:      default
Endpoint:  default
Status:    running
Platforms: linux/amd64, linux/arm64, linux/riscv64, linux/arm/v7, linux/arm/v6

Thank you.
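The Platforms list above suggests the host kernel has no binfmt_misc handlers registered for ppc64le or s390x. One common way to register QEMU user-mode handlers for additional architectures (assuming the multiarch/qemu-user-static helper image) is:

```shell
# Register QEMU user-mode emulators for foreign architectures via binfmt_misc.
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# docker buildx inspect should then list linux/ppc64le and linux/s390x as well.
```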

simplify controlling plain output

Some CI systems run builds through a terminal even when they really shouldn't, and setting the --progress=plain flag in scripts is cumbersome. We should also provide an environment variable, which is much easier to configure than overriding the auto behavior. Maybe we could just detect CONTINUOUS_INTEGRATION=1 directly, but some users may prefer TTY output even in CI.

BUILDKIT_PROGRESS_PLAIN=1
BUILDKIT_PROGRESS=plain

@tiborvass

ls output is weird

$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS  PLATFORMS
default * docker
  default default         running linux/amd64

Add ability to pipe Dockerfile

This is a feature request. I'd like to pipe a Dockerfile into the buildx build command. Right now I get:

cat Dockerfile | docker buildx build -f - .
WARN[0000] No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 0.1s (2/2) FINISHED
 => [internal] load .dockerignore                                                                                                                                                                                                                                                                                                                                      0.0s
 => => transferring context: 2B                                                                                                                                                                                                                                                                                                                                        0.0s
 => [internal] load build definition from -                                                                                                                                                                                                                                                                                                                            0.0s
 => => transferring dockerfile: 2B                                                                                                                                                                                                                                                                                                                                     0.0s
failed to solve: rpc error: code = Unknown desc = failed to read dockerfile: open /tmp/buildkit-mount406044302/-: no such file or directory
[I] $

Platform darwin, what the? Since when? Is this for real?

In the readme there is mention of building for multiple platforms:

When invoking a build, the --platform flag can be used to specify the target platform for the build output, (e.g. linux/amd64, linux/arm64, darwin/amd64).

Is this suggesting that Docker can now run containers natively on macOS, similar to building and running a native Windows container instead of running a Linux container inside a virtual machine?

I did a quick Google search but couldn't find anything obvious. I would have assumed something this major would have had plenty of advertisement, so I'm guessing it's not the holy grail I thought it was. What, then, does darwin/amd64 do in the context of docker buildx build --platform?

Rpmdb checksum is invalid: dCDPT(pkg checksums): wget.aarch64 0:1.14-18.el7_6.1 - u

When trying to install wget in a Dockerfile, I hit the issue below:

Rpmdb checksum is invalid: dCDPT(pkg checksums): wget.aarch64 0:1.14-18.el7_6.1 - u

The command is:

docker buildx build --platform linux/arm64 -t xxx:0.1 .

The Dockerfile is:

FROM xxx/centos7-aarch64-xxx:xxx
RUN yum install -y wget

Since I specified --platform linux/arm64 in the command, this rpmdb checksum issue should not occur. Thanks.

Building images for multi-arch with --load parameter fails

While building images for multiple architectures (amd64 and arm64), I tried to load them into the Docker daemon with the --load parameter, but I got an error:

➜ docker buildx build --platform linux/arm64,linux/amd64 --load  -t carlosedp/test:v1  .
[+] Building 1.3s (24/24) FINISHED
 => [internal] load .dockerignore                                                                                                                                                        0.0s
 => => transferring context: 2B                                                                                                                                                          0.0s
 => [internal] load build definition from Dockerfile                                                                                                                                     0.0s
 => => transferring dockerfile: 115B                                                                                                                                                     0.0s
 => [linux/amd64 internal] load metadata for docker.io/library/alpine:latest                                                                                                             0.8s
 => [linux/amd64 internal] load metadata for docker.io/library/golang:1.12-alpine                                                                                                        1.0s
 => [linux/arm64 internal] load metadata for docker.io/library/golang:1.12-alpine                                                                                                        1.2s
 => [linux/arm64 internal] load metadata for docker.io/library/alpine:latest                                                                                                             1.2s
 => [linux/amd64 builder 1/5] FROM docker.io/library/golang:1.12-alpine@sha256:1a5f8b6db670a7776ce5beeb69054a7cf7047a5d83176d39b94665a54cfb9756                                          0.0s
 => => resolve docker.io/library/golang:1.12-alpine@sha256:1a5f8b6db670a7776ce5beeb69054a7cf7047a5d83176d39b94665a54cfb9756                                                              0.0s
 => [linux/amd64 stage-1 1/4] FROM docker.io/library/alpine@sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913                                                      0.0s
 => => resolve docker.io/library/alpine@sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913                                                                          0.0s
 => [internal] load build context                                                                                                                                                        0.0s
 => => transferring context: 232B                                                                                                                                                        0.0s
 => CACHED [linux/amd64 stage-1 2/4] RUN apk add --no-cache file &&     rm -rf /var/cache/apk/*                                                                                          0.0s
 => CACHED [linux/amd64 builder 2/5] WORKDIR /go/src/app                                                                                                                                 0.0s
 => CACHED [linux/amd64 builder 3/5] ADD . /go/src/app/                                                                                                                                  0.0s
 => CACHED [linux/amd64 builder 4/5] RUN CGO_ENABLED=0 go build -o main .                                                                                                                0.0s
 => CACHED [linux/amd64 builder 5/5] RUN mv /go/src/app/main /                                                                                                                           0.0s
 => CACHED [linux/amd64 stage-1 3/4] COPY --from=builder /main /main                                                                                                                     0.0s
 => [linux/arm64 builder 1/5] FROM docker.io/library/golang:1.12-alpine@sha256:1a5f8b6db670a7776ce5beeb69054a7cf7047a5d83176d39b94665a54cfb9756                                          0.0s
 => => resolve docker.io/library/golang:1.12-alpine@sha256:1a5f8b6db670a7776ce5beeb69054a7cf7047a5d83176d39b94665a54cfb9756                                                              0.0s
 => [linux/arm64 stage-1 1/4] FROM docker.io/library/alpine@sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913                                                      0.0s
 => => resolve docker.io/library/alpine@sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913                                                                          0.0s
 => CACHED [linux/arm64 stage-1 2/4] RUN apk add --no-cache file &&     rm -rf /var/cache/apk/*                                                                                          0.0s
 => CACHED [linux/arm64 builder 2/5] WORKDIR /go/src/app                                                                                                                                 0.0s
 => CACHED [linux/arm64 builder 3/5] ADD . /go/src/app/                                                                                                                                  0.0s
 => CACHED [linux/arm64 builder 4/5] RUN CGO_ENABLED=0 go build -o main .                                                                                                                0.0s
 => CACHED [linux/arm64 builder 5/5] RUN mv /go/src/app/main /                                                                                                                           0.0s
 => CACHED [linux/arm64 stage-1 3/4] COPY --from=builder /main /main                                                                                                                     0.0s
 => ERROR exporting to oci image format                                                                                                                                                  0.0s
------
 > exporting to oci image format:
------
failed to solve: rpc error: code = Unknown desc = docker exporter does not currently support exporting manifest lists

I understand that the daemon can't handle manifest lists, but I believe there should be a way to tag the images with some variable, like:

docker buildx build --platform linux/arm64,linux/amd64 --load -t carlosedp/test:v1-$ARCH .

This would load both images into the daemon, ignoring the manifest list in this case.
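Until multi-platform --load is supported, one workaround is to run a single-platform build per architecture (single-platform results can be loaded into the daemon) with an architecture-suffixed tag, e.g.:

```shell
# Sketch of a per-architecture workaround (tag names are illustrative):
for arch in amd64 arm64; do
  docker buildx build --platform "linux/${arch}" --load \
    -t "carlosedp/test:v1-${arch}" .
done
```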

Documentation missing some points

Hi all,

I've been using buildx on Docker Desktop for Mac (edge version) to build some multi-arch images. I'm now trying to replicate the buildx experience on Linux, in this case Debian Stretch with QEMU installed and Docker 19.03. I've had limited success, but I'm really looking forward to getting this working.

First, if starting from scratch, the docs should list any packages that must be installed on the host to make this work (if there are any). Second, to make builds work on 19.03+ I had to enable experimental mode in Docker; it would be nice if this were added to the docs.
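For reference, enabling experimental mode for the CLI on 19.03 looks roughly like this (assuming the standard CLI config location):

```shell
# Enable the experimental CLI for the current shell...
export DOCKER_CLI_EXPERIMENTAL=enabled
# ...or persistently, in ~/.docker/config.json:
#   { "experimental": "enabled" }
```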

As for building itself, I can get this working:

$ docker buildx build .
[+] Building 8.4s (23/32)
 => ...

However, docker buildx ls shows support only for linux/amd64 despite QEMU being installed. I'm guessing I need to link this somehow. I tried:

docker buildx create --platform linux/amd64,linux/arm64,linux/arm/v7 --name mybuilder

This seems to work:

NAME/NODE    DRIVER/ENDPOINT             STATUS   PLATFORMS
mybuilder *  docker-container
  mybuilder0 unix:///var/run/docker.sock inactive linux/amd64, linux/arm64, linux/arm/v7

However, I can't build on those target platforms :(

root@ip-10-100-0-29:/etc/docker# docker buildx build --platform linux/arm64 -t richarvey/nginx-demo --push .
[+] Building 2.0s (3/3) FINISHED
 => [internal] booting buildkit                                                                                                                                                                                    1.8s
 => => pulling image moby/buildkit:master                                                                                                                                                                          1.3s
 => => creating container buildx_buildkit_mybuilder0                                                                                                                                                               0.5s
 => [internal] load .dockerignore                                                                                                                                                                                  0.0s
 => => transferring context: 2B                                                                                                                                                                                    0.0s
 => [internal] load build definition from Dockerfile                                                                                                                                                               0.1s
 => => transferring dockerfile: 2B                                                                                                                                                                                 0.0s
failed to solve: rpc error: code = Unknown desc = failed to read dockerfile: open /tmp/buildkit-mount550903397/Dockerfile: no such file or directory

I'm probably missing a really simple step, but I couldn't find it in the guide. Any help is appreciated.

tracking missing build UI

The build UI should follow docker build and remain compatible with it, to allow for the possibility of merging the tools in the future.

Tracking things currently missing (add items as needed):

  • remote contexts (git, https)
  • -f symlink
  • -f -
  • tar context from stdin
  • --cache-from + updates to other importers
  • extra hosts (+ net via entitlements?)
  • -q
  • --iidfile
  • --squash ?
  • -o type=docker to load into docker (+ a driver specific default would be nice)
  • new: --cache-to
  • --security-opt
  • --force-rm
  • --network=custom
  • --compress

@tiborvass

Support for linux/arm/v8?

First of all, the new multi-arch support in docker is amazing and the QEMU integration into Docker Desktop is a real life-saver!

I work with a huge variety of ARM machines all the way from armv5l to aarch64. We have a number of devices running armv8l. Is there any chance this could be added as a supported target (i.e. linux/arm/v8)?

Sharing a build stage between architectures

Hi,

We have a lot of images with an npm stage that exists purely to build/compile JS/CSS assets. A multi-arch build runs that stage once per architecture, which can get very time-consuming even though the output is identical. For instance, the project I have open right now took 7 minutes to run the npm scripts for the linux/arm/v7 stage, compared to about 1 minute for linux/amd64.

I suspect this kind of asset step is quite common, so I'd like to raise a feature request for some way of marking a stage as 'shared' during multi-arch builds.
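As a possible workaround with current buildx (stage and path names here are illustrative), an asset stage can be pinned to $BUILDPLATFORM so it runs natively on the build host; because that stage is then byte-identical for every target platform, BuildKit should execute it only once:

```Dockerfile
# Asset stage runs on the build host's native platform, so it is the same
# stage for every target and its result is shared across platforms.
FROM --platform=$BUILDPLATFORM node:12-alpine AS assets
WORKDIR /src
COPY package.json ./
RUN npm install && npm run build

FROM nginx:alpine
# Copy the prebuilt assets into each per-platform final image.
COPY --from=assets /src/dist /usr/share/nginx/html
```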

And just wanted to also say - buildx is really nice - thank you! :-)

Custom registry, push error on self-signed cert

'buildx' errors, while 'docker build' succeeds:

cat <<'EOD' > Dockerfile
FROM alpine
RUN touch /test
EOD
docker buildx build  \
  -t img.service.consul/alpine:test  \
 --platform=linux/amd64,linux/arm64,linux/arm  \
 --push  \
 .

[+] Building 2.1s (12/12) FINISHED
 => [internal] load build definition from Dockerfile                                                                                                                                                       0.0s
 => => transferring dockerfile: 65B                                                                                                                                                                        0.0s
 => [internal] load .dockerignore                                                                                                                                                                          0.1s
 => => transferring context: 2B                                                                                                                                                                            0.0s
 => [linux/arm/v7 internal] load metadata for docker.io/library/alpine:latest                                                                                                                              1.5s
 => [linux/amd64 internal] load metadata for docker.io/library/alpine:latest                                                                                                                               1.3s
 => [linux/arm64 internal] load metadata for docker.io/library/alpine:latest                                                                                                                               1.3s
 => CACHED [linux/arm64 1/2] FROM docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6                                                                         0.0s
 => => resolve docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6                                                                                            0.0s
 => [linux/arm64 2/2] RUN touch /test                                                                                                                                                                      0.1s
 => CACHED [linux/amd64 1/2] FROM docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6                                                                         0.0s
 => => resolve docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6                                                                                            0.0s
 => [linux/amd64 2/2] RUN touch /test                                                                                                                                                                      0.1s
 => CACHED [linux/arm/v7 1/2] FROM docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6                                                                        0.0s
 => => resolve docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6                                                                                            0.0s
 => [linux/arm/v7 2/2] RUN touch /test                                                                                                                                                                     0.2s
 => ERROR exporting to image                                                                                                                                                                               0.4s
 => => exporting layers                                                                                                                                                                                    0.2s
 => => exporting manifest sha256:879ac4adf9493121ff9bb12f8566ed993fa7079c59ae02b516a9287d6de7daea                                                                                                          0.0s
 => => exporting config sha256:42c2e158eb64d21ea6832e4842a3c11c9fd70c89afbf2fd0fbe2f16dd2698453                                                                                                            0.0s
 => => exporting manifest sha256:64b17d691e5c5ab257ad37b622c9ed219e50ea637bf5f7aa25e2f65f0bd0c26d                                                                                                          0.0s
 => => exporting config sha256:14af510e076d60389743c6fc7c99e2777ba56bdad11cbbffacb438e7c68f6321                                                                                                            0.0s
 => => exporting manifest sha256:2e87dbf064ba1c829a9d18525fa38e77baa26b654c04659b0fa3e75d6ea34ea5                                                                                                          0.0s
 => => exporting config sha256:f1b89d61d625bff13e65a679d2bdb1c513289999789ec2e13fe7acefca39adfd                                                                                                            0.0s
 => => exporting manifest list sha256:53f07aa12e20079138de3650629277928313e7bfdc59c3f22c93834fe11ba9f3                                                                                                     0.0s
 => => pushing layers                                                                                                                                                                                      0.0s
------
 > exporting to image:
------
failed to solve: rpc error: code = Unknown desc = failed to do request: Head https://img.service.consul/v2/alpine/blobs/sha256:8dc302c06141b7124ea05ccf2fdde10013ce594c28e5fe980047b0740891e398: x509: certificate signed by unknown authority

buildx reports x509: certificate signed by unknown authority, even though the certificate chain verifies fine.

test:
: |openssl s_client -connect img.service.consul:443 [...] Verify return code: 0 (ok)

docker build + docker push also work:

docker build  \
 -t img.service.consul/x86_64/alpine:test  \
 .

Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM alpine
 ---> 055936d39205
Step 2/2 : RUN touch /test
 ---> Using cache
 ---> 7bd5dcd02d4c
Successfully built 7bd5dcd02d4c
Successfully tagged img.service.consul/x86_64/alpine:test

docker push img.service.consul/x86_64/alpine:test

The push refers to repository [img.service.consul/x86_64/alpine]
b13e8440598c: Pushed
f1b5933fe4b5: Pushed
test: digest: sha256:e9c2e8f188d0bedc6d3c26b39a6a75c36be5b4cbeedb9defc4f3b48953b4ef45 size: 734

buildx imagetools inspect works as well:

docker buildx imagetools inspect img.service.consul/x86_64/alpine:test

{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
   "config": {
      "mediaType": "application/vnd.docker.container.image.v1+json",
      "size": 1645,
      "digest": "sha256:7bd5dcd02d4c340892fe431a40a39badf5695af58b669a33bd21b61159f4ffe5"
   },
   "layers": [
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 2757034,
         "digest": "sha256:e7c96db7181be991f19a9fb6975cdbbd73c65f4a2681348e63a141a2192a5f10"
      },
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 97,
         "digest": "sha256:6e0d312d9ebb1961db61366af2a5a323ad84155db2018457d2c5168c4f86e410"
      }
   ]
}

perhaps related to #57 (comment)

tested:

  • docker beta-4 + 20190517171028-f4f33ba16d nightly,
  • 'buildx' release + current master.
uname -mrov
4.19.44 #1-NixOS SMP Thu May 16 17:41:32 UTC 2019 x86_64 GNU/Linux
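A likely explanation: docker build pushes through the Docker daemon, which picks up the custom CA from the host (e.g. /etc/docker/certs.d/), while buildx with the docker-container driver runs buildkitd inside a container that does not see the host's trust store. Depending on the buildx/BuildKit version, the CA can be supplied to the builder through BuildKit's daemon config; a sketch (file paths are examples, and availability of the `--config` flag depends on the buildx version):

```toml
# buildkitd.toml -- per-registry TLS settings in BuildKit's daemon config
[registry."img.service.consul"]
  ca = ["/etc/certs/img.service.consul.pem"]
```

The builder would then be created with something like `docker buildx create --use --config buildkitd.toml`, so that pushes from inside the buildkitd container trust the self-signed chain.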

buildfile command

Add another command that produces a higher-level build operation based on config from a file. The input file could be a compose file or another format that stacks the CLI options into separate targets.

On invocation, all builds run concurrently, appearing to the user as a single build.

buildx buildfile -f compose.yml [target] [target]..

The file allows stacking all CLI options, including exporter and remote cache settings (but probably excluding entitlements). When using multiple files, some properties can be overridden by a file that is not committed to git; CLI flags act as the highest-priority override. A non-compose format could support sharing code so that groups of options can be reused across multiple targets.

The name needs improvement.
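For the non-compose format, something HCL-like could express shared option groups that targets inherit. A hypothetical sketch (the syntax and attribute names here are illustrative only, not an existing format):

```hcl
# Hypothetical build file: shared settings inherited by several targets.
group "default" {
  targets = ["app", "docs"]
}

target "common" {
  platforms = ["linux/amd64", "linux/arm64"]
  cache-to  = ["type=registry,ref=example.com/cache"]
}

target "app" {
  inherits   = ["common"]
  dockerfile = "Dockerfile"
  tags       = ["example.com/app:latest"]
}

target "docs" {
  inherits   = ["common"]
  dockerfile = "docs.Dockerfile"
  tags       = ["example.com/docs:latest"]
}
```

Invoking the command with no target would build the default group; naming targets on the command line would build just those, all concurrently.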

using --no-cache still uses CACHED layers and host cache

$ ~/.docker/cli-plugins/docker-buildx bake --no-cache -f kimi.yml
[+] Building 2.1s (13/21)
 => [internal] load build definition from buildx-builder                                                                                                                                                                                 0.0s
 => => transferring dockerfile: 36B                                                                                                                                                                                                      0.0s
 => [internal] load .dockerignore                                                                                                                                                                                                        0.0s
 => => transferring context: 35B                                                                                                                                                                                                         0.0s
 => resolve image config for docker.io/docker/dockerfile:experimental                                                                                                                                                                    0.4s
 => CACHED docker-image://docker.io/docker/dockerfile:experimental@sha256:9022e911101f01b2854c7a4b2c77f524b998891941da55208e71c0335e6e82c3                                                                                               0.0s
 => [internal] load metadata for docker.io/library/alpine:latest                                                                                                                                                                         0.0s
 => [internal] load metadata for docker.io/library/node:8.11-alpine                                                                                                                                                                      0.0s
 => CANCELED [internal] load build context                                                                                                                                                                                               1.0s
 => => transferring context: 14.31MB                                                                                                                                                                                                     1.0s
 => [storage 1/6] FROM docker.io/library/alpine                                                                                                                                                                                  0.0s
 => CACHED [storage 2/6] WORKDIR /var/www                                                                                                                                                                                        0.0s
 => CACHED [storage 3/6] RUN adduser -DHSu 100 nginx -s /sbin/nologin                                                                                                                                                            0.0s
 => [builder 1/8] FROM docker.io/library/node:8.11-alpine                                                                                                                                                                        0.0s
 => CACHED [builder 2/8] WORKDIR /src                                                                                                                                                                                            0.0s
 => ERROR [builder 3/8] RUN --mount=type=cache,target=/var/cache     apk add --update     git                                                                                                                                    1.0s
------
 > [builder 3/8] RUN --mount=type=cache,target=/var/cache     apk add --update     git:
#12 0.775 fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/main/x86_64/APKINDEX.tar.gz
#12 0.786 fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/community/x86_64/APKINDEX.tar.gz
#12 0.788 ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.6/main: Bad file descriptor
#12 0.788 WARNING: Ignoring APKINDEX.84815163.tar.gz: Bad file descriptor
#12 0.805   git (missing):
#12 0.805     required by: world[git]
#12 0.805 ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.6/community: Bad file descriptor
#12 0.805 WARNING: Ignoring APKINDEX.24d64ab1.tar.gz: Bad file descriptor
#12 0.805 ERROR: unsatisfiable constraints:
------
Error: failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c apk add --update     git]: exit code: 1

github.com/docker/buildx v0.2.2-10-g3f18b65-tp-docker 3f18b65

Client: Docker Engine - Community
Version: 19.03.0-rc2
API version: 1.40
Go version: go1.12.5
Git commit: f97efcc
Built: Wed Jun 5 01:37:53 2019
OS/Arch: darwin/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.0-rc2
API version: 1.40 (minimum version 1.12)
Go version: go1.12.5
Git commit: f97efcc
Built: Wed Jun 5 01:42:10 2019
OS/Arch: linux/amd64
Experimental: true
containerd:
Version: v1.2.6
GitCommit: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc:
Version: 1.0.0-rc8
GitCommit: 425e105d5a03fabd737a126ad93d62a9eeede87f
docker-init:
Version: 0.18.0
GitCommit: fec3683

unable to build private git repository with ssh access

Problem:

If you have a private git repository that you access with an ssh key, you cannot build it directly using buildx.

docker buildx build [email protected]:xxxx/yyyy.git fails with permission denied, Could not read from remote repository...

Of course, the normal build command works: docker build [email protected]:xxxx/yyyy.git

Expected behavior:

docker buildx build [email protected]:xxxx/yyyy.git should be able to use the ssh key or the ssh-agent in order to download the private project and then build the project.
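For comparison, buildx/BuildKit already support forwarding the local ssh-agent into RUN steps via --ssh; presumably the same mechanism could cover fetching the build context itself. A sketch of what the requested behavior could look like (the repository URL is a placeholder, and --ssh applying to the context fetch is the feature being requested, not current behavior):

```shell
# Hypothetical: forward the local ssh-agent so buildx itself can clone
# the private repository used as the build context.
docker buildx build --ssh default git@example.com:org/private-repo.git
```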

Docker buildx version:

github.com/docker/buildx v0.2.0 91a2774376863c097ca936bf5e39aa3db0c72d0f

Docker version:

Client: Docker Engine - Community
 Version:           19.03.0-beta4
 API version:       1.40
 Go version:        go1.12.4
 Git commit:        e4666ebe81
 Built:             Tue May 14 12:47:08 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.0-beta4
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.4
  Git commit:       e4666ebe81
  Built:            Tue May 14 12:45:43 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

`make binaries` no bueno

after cloning 3f18b65 into local working dir:

$ make binaries
...
=> exporting to image                                                                                             0.0s
 => => exporting layers                                                                                            0.0s
 => => writing image sha256:590d78d23c7678533fa8f3669bc85e6b2bb559d04083a4fbc48793d6c91aaa3a                       0.0s
++ cat /tmp/docker-iidfile.Z3KsTw3vbo
+ iid=sha256:590d78d23c7678533fa8f3669bc85e6b2bb559d04083a4fbc48793d6c91aaa3a
++ docker create sha256:590d78d23c7678533fa8f3669bc85e6b2bb559d04083a4fbc48793d6c91aaa3a copy
+ containerID=f945249a3a4532fb2304b8b79d32f3963230f0fdb16371e45b40b7668f495bf4
+ docker cp f945249a3a4532fb2304b8b79d32f3963230f0fdb16371e45b40b7668f495bf4:/ bin/tmp
+ mv 'bin/tmp/build*' bin/
mv: cannot stat 'bin/tmp/build*': No such file or directory
Makefile:5: recipe for target 'binaries' failed
make: *** [binaries] Error 1

add "env" driver that connects to $BUILDKIT_HOST

This driver allows creating a buildx instance directly pointing to the existing buildkit endpoint.

docker buildx create --driver=env unix:///var/lib/buildkitd.sock
docker buildx create --driver=env (with no endpoint) uses the BUILDKIT_HOST env value, defaulting to the UNIX socket.

I guess TLS info is just passed with a custom driver-opt:

docker buildx create --driver=env --driver-opt ca=mycafile.pem https://foobar

The rm/stop commands are no-ops in this driver.

A better name could be "remote".

add caching for remembering state of nodes

Buildx makes a lot of connections to the remote daemon to check existing state. Especially when using the ssh transport, the connection handshake is slow, and this phase can take even more time than the actual build.

For example when using 18.09 node:
docker context create foo --host ssh://node && docker context use foo

Before we can build, (at least) the following connections happen:

  • cli calls ping before buildx main is called
  • buildx calls ping to optionally negotiate lower version
  • buildx calls /grpc to check if docker driver can be used
  • if this fails, buildx calls /inspect to see if the container is running
  • buildx actually starts the build connection

The first ping is completely unneeded, but I'm not sure we can step around it. For /grpc support we can remember the state or deduce it from the ping response.

Theoretically, we could avoid inspect and only call it when interactions with the container fail.

We could think about forcing http2 and reusing the connection.

@tiborvass

How can I enable i386 arch?

Hello,

I'm trying to build directly for i386, but I'm not sure how I can enable it. My host:

$ docker info
Client:
 Debug Mode: false
 Plugins:
  buildx: Build with BuildKit (Docker Inc., v0.3.0)

Server:
 Containers: 2
  Running: 1
  Paused: 0
  Stopped: 1
 Images: 6
 Server Version: 19.03.1
 Storage Driver: btrfs
  Build Version: Btrfs v4.7.3
  Library Version: 101
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 85f6aa58b8a3170aec9824568f7a31832878b603.m
 runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.19.59-1-MANJARO
 Operating System: Manjaro Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 3.778GiB
 Name: vk496-pc
 ID: XLJG:PMBX:6NEF:3SHM:AZBA:FSMW:UPQK:DWLX:L7ZH:EXG3:EVTA:L35W
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: vk496
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: true
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine
$ docker buildx version
github.com/docker/buildx v0.3.0 c967f1d570fb393b4475c01efea58781608d093c

This is how I initialize the builder:

$ docker buildx inspect --bootstrap
[+] Building 0.7s (1/1) FINISHED                                                                                                                
 => [internal] booting buildkit                                                                                                            0.7s
 => => starting container buildx_buildkit_mybuilder0                                                                                       0.7s
Name:   mybuilder
Driver: docker-container

Nodes:
Name:      mybuilder0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/amd64

$ docker run --rm --privileged multiarch/qemu-user-static --reset
$ docker buildx inspect --bootstrap
[+] Building 0.9s (1/1) FINISHED                                                                                    
 => [internal] booting buildkit                                                                                0.9s
 => => starting container buildx_buildkit_mybuilder0                                                           0.9s
Name:   mybuilder
Driver: docker-container

Nodes:
Name:      mybuilder0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/arm/v7, linux/arm/v6

Am I missing something?
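For what it's worth, buildx identifies 32-bit x86 as linux/386 rather than i386, and on an x86_64 host it should not need qemu at all. A quick check to try (image name is a placeholder; I'm assuming linux/386 is accepted even when the inspect output doesn't list it):

```shell
# linux/386 is the platform string buildx uses for 32-bit x86.
docker buildx build --platform linux/386 -t example/app:386 .
```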

image and standalone target

This binary should be usable without being invoked by docker as well. The image target should be runnable with docker run.

Eg.
docker run docker/buildx build -o type=oci,dest=- git://... > oci.tgz

For auth/local sources mounts would need to be used.

Depends on adding a new driver.

buildx is not a docker command on linux/amd64?

Hi,
I have Docker 19.03 on Ubuntu amd64 and tried the steps in your readme, but I can't get the buildx command working: it complains the command is not found. What do I need to get it working? I tried the beta release as well, same issue. I am looking to compile images for arm64.

Appreciate the help a lot,
Puja
$ cat ~/.docker/config.json
{
"experimental": "enabled"
}
$ ls -la ~/.docker/cli-plugins/
total 55936
drwxr-xr-x 2 pujag newhiredefaultgrp 4096 Aug 12 14:05 .
drwxr-xr-x 3 pujag newhiredefaultgrp 4096 Aug 12 14:01 ..
-rwxr-xr-x 1 pujag newhiredefaultgrp 57036919 Aug 12 14:04 buildx-v0.2.0.linux-amd64
$ docker version
Client: Docker Engine - Community
Version: 19.03.1
API version: 1.40
Go version: go1.12.5
Git commit: 74b1e89
Built: Thu Jul 25 21:21:05 2019
OS/Arch: linux/amd64
Experimental: true

Server: Docker Engine - Community
Engine:
Version: 19.03.1
API version: 1.40 (minimum version 1.12)
Go version: go1.12.5
Git commit: 74b1e89
Built: Thu Jul 25 21:19:41 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.6
GitCommit: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc:
Version: 1.0.0-rc8
GitCommit: 425e105d5a03fabd737a126ad93d62a9eeede87f
docker-init:
Version: 0.18.0
GitCommit: fec3683
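Judging from the ls output above, the downloaded binary kept its release filename. Docker only discovers CLI plugins whose filename is exactly docker-&lt;command&gt;, so renaming it should fix this (paths as in the listing above):

```shell
# The CLI looks for an executable named exactly "docker-buildx"
# inside ~/.docker/cli-plugins.
mv ~/.docker/cli-plugins/buildx-v0.2.0.linux-amd64 \
   ~/.docker/cli-plugins/docker-buildx
chmod +x ~/.docker/cli-plugins/docker-buildx
docker buildx version
```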

add process driver

Add a driver that can detect buildkitd in the $PATH and run it directly. For example, this driver could be used with #12. A pidfile should be used for keeping track of the daemon and monitoring its state.
