buildpacks / lifecycle
Reference implementation of the Cloud Native Buildpacks lifecycle
Home Page: https://buildpacks.io
License: Apache License 2.0
The launcher currently respects the PACK_PROCESS_TYPE environment variable; it should respect CNB_PROCESS_TYPE instead, for consistency with the other lifecycle environment variables.
As a user I may want to determine the provenance of a given lifecycle binary. Lifecycle binaries should be able to report their version and the git commit they were built from, like most other CLI tools.
Given I have built a lifecycle binary using VERSION=0.4.0 make build
When I run <lifecycle-binary> -version
Then the output should report the given version and git sha
Example:
$ lifecycle/out/exporter -version
0.4.0+f49a44e
Given I have built a lifecycle binary locally using make build
When I run <lifecycle-binary> -version
Then the output should report the dev version 0.0.0 and the git sha
Example:
$ lifecycle/out/exporter -version
0.0.0+f49a44e
Given a lifecycle binary
When a user passes other flags or positional arguments in addition to the -version flag
Then all other arguments are ignored and the lifecycle prints the version, as sketched below
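A minimal sketch of how this could be implemented, assuming the version and commit are stamped in at build time with Go's -ldflags (the variable names are illustrative, not necessarily what the lifecycle uses):

// Sketch only: stamp version info at build time, e.g.
//   go build -ldflags "-X main.Version=0.4.0 -X main.Commit=$(git rev-parse --short HEAD)"
package main

import (
	"fmt"
	"os"
)

var (
	Version = "0.0.0" // default for local dev builds
	Commit  = "unknown"
)

func main() {
	// Scan the raw args so -version wins even when other flags are present.
	for _, arg := range os.Args[1:] {
		if arg == "-version" {
			fmt.Printf("%s+%s\n", Version, Commit)
			return
		}
	}
	// ... normal phase behavior ...
}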
Knative provides Service Accounts as a mechanism to provide Docker credentials to the build.
The lifecycle should encourage users to use this workflow instead of installing and using platform-specific credential helpers.
Currently, when the lifecycle exports an image reusing layers, it builds the image's manifest in a different order than when the image was built without reusing layers. This breaks reproducible builds and causes redundant data to be downloaded in some cases (the Docker daemon specifically).
The following images show the same source application being built with the same buildpacks twice successively. There is 100% layer reuse in the second build.
In the first one, within the io.pivotal.openjdk buildpack, there are 7 layers, and their order appears to be more or less random (at the very least, not the order they were written in):
openjdk-jre
security-provider-configurer
class-counter
java-security-properties
jvmkill
link-local-dns
memory-calculator
In the second build the order is clearly lexical.
class-counter
java-security-properties
jvmkill
link-local-dns
memory-calculator
openjdk-jre
security-provider-configurer
The order between these two builds should be consistent, and I believe that ordering the layers lexically is correct. At present, the order of buildpack layer groups matches buildpack execution order; I'm not sure this is necessary, and it might also be better served by lexical order based on buildpack ID.
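For illustration, a minimal sketch of the proposed deterministic ordering, assuming a buildpack's layers are keyed by name (the function and map shape are hypothetical):

// Sketch: sort layer names lexically before appending layers to the image,
// so manifest order is stable across builds regardless of map iteration.
package export

import "sort"

func orderedLayerNames(layers map[string]string) []string {
	names := make([]string, 0, len(layers))
	for name := range layers {
		names = append(names, name)
	}
	sort.Strings(names)
	return names
}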
As a user of Cloud Native Buildpacks, I may want to audit the contents of my built image. Given that the lifecycle adds the launcher to every generated image, a full audit will require the launcher version and metadata specifying the exact source code from which the launcher was built.
Given I am exporting an image using lifecycle version <version> built from a git commit with sha <some-git-commit-sha>
When exporter runs
Then exporter should add the following metadata to the io.buildpacks.build.metadata label
{"launcher": {"version": "<version>", "source": {"git": {"repository": "github.com/buildpack/lifecycle", "commit": "<some-git-commit-sha>"}}}, ...}
Note: this story assumes that the exported launcher will be from the same lifecycle release as the other lifecycle binaries.
When I run pack build in a directory that does not pass any CNB group's detection, it shows the detect output and does nothing else; it just stays stuck on the last failed detection output.
It does not respond to CTRL+C either, so I have to find the process ID and kill it manually.
Based on the output of my pack build, I am using lifecycle version 0.3.0.
The easiest way to reproduce is to make an empty directory and run pack build test in that directory.
I've rebuilt pack from HEAD and docker pulled the various cloudfoundry/cnb*:cflinuxfs3 images, but pack create-builder is still fetching lifecycle-v0.3.0:
$ docker pull cloudfoundry/cnb-build:cflinuxfs3
cflinuxfs3: Pulling from cloudfoundry/cnb-build
Digest: sha256:955dbf87753412b75296a4196de695222be3dcab08bca57a8c150dad46305e3c
Status: Image is up to date for cloudfoundry/cnb-build:cflinuxfs3
docker.io/cloudfoundry/cnb-build:cflinuxfs3
$ pack create-builder starkandwayne/discovery-builder -b builder.toml --no-pull
Creating builder starkandwayne/discovery-builder from build-image cloudfoundry/cnb-build:cflinuxfs3
Using cached version of "https://github.com/buildpack/lifecycle/releases/download/v0.3.0/lifecycle-v0.3.0+linux.x86-64.tgz"
Successfully created builder image starkandwayne/discovery-builder
Tip: Run pack build <image-name> --builder starkandwayne/discovery-builder to use this builder
When I run LIFECYCLE_VERSION=0.4.0 make package
Then a lifecycle.toml file should be added to the root of the archive, with the following contents:
[api]
platform = "0.1"
buildpack = "0.2"
[lifecycle]
version = "0.4.0"
Some Docker registries are stricter due to schema 1 compatibility. Per the V2 Schema 1 doc, the history key is required for generating a schema 1 compatible manifest. Without it, images produced and pushed by the lifecycle will run into this error.
Repro steps:
pack build testapp -p acceptance/testdata/node_app
docker inspect testapp
Notice that the buildpack version is empty in the io.buildpacks.lifecycle.metadata label:
\"buildpacks\":[{\"key\":\"io.buildpacks.samples.nodejs\",\"version\":\"\", ...
Acceptance Criteria
When I build an OCI image using the lifecycle (pack build or otherwise), buildpack layers are written under /layers rather than /workspace.
Notes
The app directory is expected to be a subdirectory of /layers.
I see occasional errors between the buildpack lifecycle runner and the target registry.
[build-step-analyze]: Error: connect to repo store 'gcr.io/<snip>': Get https://gcr.io/v2/: dial tcp: i/o timeout
While it would be nice if GCR were more reliable from GKE via Knative, unreliable networks are sadly common. For non-authentication-related errors, perhaps the lifecycle should automatically retry the operation, as sketched below.
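A minimal sketch of such a retry, assuming a fixed attempt count with exponential backoff (the policy and helper are illustrative, not an existing lifecycle API):

// Sketch: retry a registry operation on transient errors.
package registry

import (
	"fmt"
	"time"
)

func withRetry(attempts int, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Callers would skip retrying authentication failures; only
		// transient network errors (like the i/o timeout above) qualify.
		time.Sleep(time.Duration(1<<uint(i)) * time.Second) // 1s, 2s, 4s, ...
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}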
Currently the Bill of Materials in config/metadata.toml contains metadata about items contributed by the buildpacks, as claimed by each at the end of the build phase. The bill of materials (or another related section in config/metadata.toml) should additionally contain an explicit listing of the buildpack ID and version used to stage the application, as querying against this information can be useful as well.
Add v3 exporter to buildpack/lifecycle
Given an app directory and a launch directory created by a buildpack conforming to the v3 API
When <launch-dir>, <repo-name>, and <stack-repo-name> are provided to the exporter using either the given syntax
exporter -launch <launch-dir> -stack <stack-repo-name> <repo-name>
or the following default values
<stack-repo-name> defaults to the PACK_STACK_NAME env variable
<launch-dir> defaults to /launch
Then the exporter generates an image following the v3 buildpack spec
The <stack-repo-name>:run image is used as the base image
The <launch-dir>/app directory is appended as a layer to the image
A layer will be appended for each <launch-dir>/<buildpack-id>/<layer-id>.toml file according to the following rules:
If <launch-dir>/<buildpack-id>/<layer-id>/ exists, it is added as an image layer
If <launch-dir>/<buildpack-id>/<layer-id>/ does not exist, the corresponding remote layer digest is read from the repo-image labels and the remote layer is reused
The image is labeled with metadata using the following structure
"sh.packs.build" : {
"stack": {
"sha": <stack-run-image-digest>
},
"app": {
"sha": <app-layer-digest>
},
"buildpacks": [
{
"key": <buildpack-id>,
"layers": {
<layer-name>: {
"sha": <layer-digest>,
"data": {
<structured-data-from-layer-toml>
}
},
...
}
},
...
]
}
The <layer>.toml files are stored in the label for use by the buildpack on subsequent builds (when deciding whether to rebuild a layer)
The <stack-repo-name>:run image digest is added so that stack rebasing can occur
The image Config.Cmd should be the command for the web process type specified in the metadata (non-web images are outside the scope of this issue, but will be addressed in subsequent issues)
Given that this command is a string and not an array of strings, it is expected that the <stack-repo-name>:run entrypoint will be such that this is acceptable (e.g. sh -c or launcher)
The cached image layer created by the go-mod-cnb can't be restored. This is due to the cached layer having read-only directories, as detailed in paketo-buildpacks/go-mod-vendor#6.
It looks like the restorer should implement functionality similar to tar --delay-directory-restore to allow the cached layer to be restored correctly; a sketch follows.
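A minimal sketch of that idea, assuming the restorer untars layers itself: directories are created writable first, and their recorded (possibly read-only) modes are applied only after all entries are written (path-traversal checks omitted for brevity):

// Sketch of tar extraction with delayed directory-permission restore.
package restore

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

func untarDelayDirs(r io.Reader, dest string) error {
	tr := tar.NewReader(r)
	type dirPerm struct {
		path string
		mode os.FileMode
	}
	var dirs []dirPerm
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return err
		}
		path := filepath.Join(dest, hdr.Name)
		switch hdr.Typeflag {
		case tar.TypeDir:
			// Create writable now; record the real mode for later.
			if err := os.MkdirAll(path, 0755); err != nil {
				return err
			}
			dirs = append(dirs, dirPerm{path, os.FileMode(hdr.Mode)})
		case tar.TypeReg:
			f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
			if err != nil {
				return err
			}
			if _, err := io.Copy(f, tr); err != nil {
				f.Close()
				return err
			}
			f.Close()
		}
	}
	// Apply directory modes in reverse (children before parents), so a
	// read-only parent can't block chmod of its children.
	for i := len(dirs) - 1; i >= 0; i-- {
		if err := os.Chmod(dirs[i].path, dirs[i].mode); err != nil {
			return err
		}
	}
	return nil
}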
As a developer I want to run the same app image with different start commands so that I can use the same image to run processes of different types (web, worker, etc.)
Assume <launch-dir>/conf/metadata.toml exists with the following format:
[[processes]]
type = "web"
command = "<web-command>"
[[processes]]
type = "other-type"
command = "<other-command>"
Given PACK_PROCESS_TYPE is not set
When launcher is invoked with no args
Then launcher executes the <web-command>
Given the PACK_PROCESS_TYPE env var has value "other-type"
When launcher is invoked with no args
Then launcher executes the <other-command>
When launcher is invoked with positional args
/launcher <some> <override> <command>
Then launcher executes the arguments as a command
A sketch of this selection logic follows.
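A minimal sketch of that behavior, assuming the metadata.toml shape above and the default <launch-dir> of /launch (the TOML library and error handling are assumptions, not the launcher's actual code):

// Sketch of the launcher's process selection.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"

	"github.com/BurntSushi/toml"
)

type metadata struct {
	Processes []struct {
		Type    string `toml:"type"`
		Command string `toml:"command"`
	} `toml:"processes"`
}

func main() {
	// Positional args override everything: run them as the command.
	if len(os.Args) > 1 {
		run(strings.Join(os.Args[1:], " "))
		return
	}
	var md metadata
	if _, err := toml.DecodeFile("/launch/conf/metadata.toml", &md); err != nil {
		log.Fatal(err)
	}
	procType := os.Getenv("PACK_PROCESS_TYPE")
	if procType == "" {
		procType = "web" // default process type when the env var is unset
	}
	for _, p := range md.Processes {
		if p.Type == procType {
			run(p.Command)
			return
		}
	}
	log.Fatalf("process type %q not found", procType)
}

func run(command string) {
	cmd := exec.Command("sh", "-c", command)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}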
Copying the six binaries of the lifecycle adds 72MB to every builder image.
We could build a single binary instead and create symlinks for each phase of the lifecycle. It would work like busybox (sketched below).
This would also make the build much faster.
wdyt?
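A minimal sketch of the busybox approach: one binary dispatching on the name it was invoked as, with a symlink per phase (the run functions are placeholders):

// Sketch: single lifecycle binary, busybox-style dispatch on argv[0].
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	switch phase := filepath.Base(os.Args[0]); phase {
	case "detector":
		// runDetector()
	case "restorer":
		// runRestorer()
	case "analyzer":
		// runAnalyzer()
	case "builder":
		// runBuilder()
	case "exporter":
		// runExporter()
	case "cacher":
		// runCacher()
	default:
		fmt.Fprintf(os.Stderr, "unknown lifecycle phase %q\n", phase)
		os.Exit(1)
	}
}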
Acceptance Criteria
Case 1
When I pass a path to cacher via the path parameter while executing a full lifecycle sequence
And the build completes successfully
Then the cache from that build is available to the next build (via the path parameter to the restorer) that uses the same cache path
Case 2
When I pass a path to cacher via the path parameter while executing a full lifecycle sequence
And the build fails or is otherwise interrupted at any point
Then the cache from that build is not available to the next build (via the path parameter to the restorer) that uses the same cache path
(edited 4/16 by @ameyer-pivotal. See buildpacks/knative-integration#3 for Knative case)
The problem is that if the label is an empty string, it fails to parse as JSON.
There are really two scenarios and two options.
Scenarios:
Options:
My personal opinion is option 2 for both scenarios: treating it as an optional section in the v3 API.
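For illustration, a minimal sketch that tolerates a missing or empty label rather than failing to parse it, on the assumption that an empty label should mean "no previous metadata" (the Metadata shape is a stand-in):

// Sketch: treat an empty label as absent metadata instead of invalid JSON.
package metadata

import (
	"encoding/json"
	"fmt"
)

type Metadata struct {
	Buildpacks []struct {
		Key string `json:"key"`
	} `json:"buildpacks"`
}

func parseLabel(label string) (Metadata, error) {
	if label == "" {
		return Metadata{}, nil // nothing to reuse, but not an error
	}
	var md Metadata
	if err := json.Unmarshal([]byte(label), &md); err != nil {
		return Metadata{}, fmt.Errorf("parse metadata label: %w", err)
	}
	return md, nil
}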
This issue accomplishes three goals.
(1) Avoids unexpected behavior when a new image has been published to a given tag after analyzer has run, but before exporter has completed. Currently a buildpack may indicate (by leaving a layer.toml file with no corresponding directory) that it wishes to reuse a layer. However, if that layer changes due to this race condition, the layer that is reused might be different from the layer validated by the buildpack.
(2) Provides an easy way to publish the same image to multiple tags (e.g. an immutable rc-17 tag and a mutable :latest tag). This is important for auditability and rollbacks when using registries, like Artifactory, that garbage-collect untagged images.
(3) Explicitly disentangles two concepts that the current lifecycle API blurs together: the previous (analyzed) image and the desired export tag.
analyzer
When I run analyzer registry.com/some/image:tag -analyzed /path/to/analyzed.toml
Then analyzer, in addition to its current behavior, writes an analyzed.toml file to the given path with the following information
[image]
reference = "registry.com/some/image@<image-digest>"
[metadata]
...
exporter
Given I have a file at path /path/to/analyzed.toml with the following contents
[image]
reference = "registry.com/some/image@<image-digest>"
[metadata]
...
When I run exporter some/image:tag some/other:tag -analyzed /path/to/analyzed.toml
Then the exporter will fetch reused layers from registry.com/some/image@<image-digest>
And the exporter will export to all tags provided
-analyzed will be an optional flag on both analyzer and exporter, defaulting to ./analyzed.toml.
If exporter cannot find the analyzed.toml file, it will fail (this means that a previous version of the analyzer binary cannot be used with the newest version of the exporter binary, which seems like an acceptable limitation).
Given I run exporter with the -daemon flag and the -launch-cache flag
exporter <image-name> -daemon -launch-cache </path/to/cache/dir>
When the exporter reuses a layer from the previous image
Then the exporter will first check the cache for the layer tar, and fetch it from the daemon only if it cannot be found
When the layers must be retrieved from the daemon
Then all retrieved layers will be cached in the launch cache
Given I run exporter with the -launch-cache flag but without the -daemon flag
Then the exporter prints an error and exits nonzero
Error: launch-cache option is only valid when exporting to a docker daemon
Given I run exporter with the -daemon flag and without the -launch-cache flag
exporter <image-name> -daemon
When the exporter reuses a layer from the previous image
Then current behavior is unchanged
A sketch of the cache-first lookup follows.
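A minimal sketch of that lookup order, with a hypothetical fetchFromDaemon callback standing in for the daemon API call:

// Sketch: cache-first layer retrieval, populating the cache on a miss.
package export

import (
	"io"
	"os"
	"path/filepath"
)

func layerTar(cacheDir, diffID string, fetchFromDaemon func(string) (io.ReadCloser, error)) (io.ReadCloser, error) {
	cached := filepath.Join(cacheDir, diffID+".tar")
	if f, err := os.Open(cached); err == nil {
		return f, nil // cache hit: skip the expensive daemon round trip
	}
	rc, err := fetchFromDaemon(diffID)
	if err != nil {
		return nil, err
	}
	defer rc.Close()
	f, err := os.Create(cached)
	if err != nil {
		return nil, err
	}
	if _, err := io.Copy(f, rc); err != nil {
		f.Close()
		return nil, err
	}
	if _, err := f.Seek(0, io.SeekStart); err != nil {
		f.Close()
		return nil, err
	}
	return f, nil
}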
Deliverables
Currently, each buildpack's launch.toml is cached between runs in the layer directory and can interfere with subsequent builds.
In v2 buildpacks, the $HOME directory was typically /app. This was useful for cases where, for whatever reason, the entire path to a file had to be specified while running an application. A good example is the Procfile entry for some R buildpacks, where the full path had to be used because of fakechroot usage.
Switching from /workspace back to /app may increase compatibility with older apps by decreasing the number of changes necessary to migrate from v2 to v3 buildpacks.
Rebuilding a Spring Boot app with the sample Java buildpack results in this error:
/buildpacks/sh.packs.samples.buildpack.java/0.0.1/bin/build: line 35: /launch/sh.packs.samples.buildpack.java/jdk/profile.d/jdk.sh: No such file or directory
Right now -runImage is a required flag on the exporter.
We should make this flag optional; the exporter should choose an appropriate mirror from stack.toml when the flag is not provided. This will allow us to easily take advantage of run image mirrors in the Knative template and in other cases where the lifecycle runs outside of pack.
Given a group.toml file with the following buildpacks
buildpacks = [{id = "buildpack1", version = "version1.1", }, {id = "buildpack2", version = "version2.1", optional = true}]
When exporter is run with -group /path/to/group.toml
Or the group.toml file exists at the default path
Then the buildpacks are added to the io.buildpacks.build.metadata label on the exported image
{"buildpacks": [
{id = "buildpack1", version = "version1.1", },
{id = "buildpack2", version = "version2.1", }
], ...}
I get an undebuggable [builder] Error: failed to : exit status 100 while building a very simple Python app with the heroku/buildpacks stack.
The app I'm using is https://github.com/knative/docs/tree/dee2bd07515f81179ff81715f4499807c9e2d79e/docs/serving/samples/hello-world/helloworld-python
I deleted the Dockerfile and added a requirements.txt containing
Flask
gunicorn
and a Procfile:
web: exec gunicorn --bind :$PORT --workers 1 --threads 8 app:app
$ pack build gcr.io/ahmetb-samples-playground/helloworld-python:pack --builder heroku/buildpacks
Pulling image index.docker.io/heroku/buildpacks:latest
latest: Pulling from heroku/buildpacks
Digest: sha256:9f86541d462ca1d812028104566c6a47e37445a27ad55f729b0182a6614c1085
Status: Image is up to date for heroku/buildpacks:latest
Selected run image heroku/pack:18 from builder
Pulling image heroku/pack:18
18: Pulling from heroku/pack
Digest: sha256:2230feb0c968f556fc2bca711595fda6355cd2a8c19ecb4d5b483abc30e51c3a
Status: Image is up to date for heroku/pack:18
Using build cache image pack-cache-60844030afda
Warning: lifecycle version unknown
===> DETECTING
[detector] Trying group of 2...
[detector] ======== Output: Ruby ========
[detector] no
[detector] ======== Results ========
[detector] Ruby: error (1)
[detector] Procfile: pass
[detector] Trying group of 2...
[detector] ======== Results ========
[detector] Python: pass
[detector] Procfile: pass
===> RESTORING
[restorer] cache image 'pack-cache-60844030afda' not found, nothing to restore
===> ANALYZING
[analyzer] WARNING: image 'gcr.io/ahmetb-samples-playground/helloworld-python:pack' not found or requires authentication to access
[analyzer] WARNING: image 'gcr.io/ahmetb-samples-playground/helloworld-python:pack' has incompatible 'io.buildpacks.lifecycle.metadata' label
===> BUILDING
[builder] -----> Installing python-3.6.8
[builder] -----> Installing pip
[builder] -----> Installing SQLite3
[builder] Error: failed to : exit status 100
As a user configuring a builder.toml file
I want pack to run validation on the run-image I specify
So that I don't end up creating a useless or confusing builder
Given I have configured a builder.toml file that is valid except for the run-image field
Case 1: Specified run-image does not exist on docker daemon
When I run pack create-builder somebuilder -b builder.toml
Then I see Error: image 'whatever_I_named_the_run_image' does not exist on the daemon: not found
Case 2: Specified run-image cannot be found on the Internet
When I run pack create-builder somebuilder -b builder.toml --publish
Then I see Error: image 'whatever_I_named_the_run_image' does not exist in registry: not found
Given a file at path <layers-dir>/config/metadata.toml includes a bill of materials (BOM)
Example (a slightly abridged BOM generated from building https://github.com/buildpack/sample-java-app with the cloudfoundry/cnb:bionic builder):
[bom]
[bom.auto-reconfiguration]
version = "2.7.0"
[bom.auto-reconfiguration.metadata]
name = "Spring Auto-reconfiguration"
sha256 = "0d524877db7344ec34620f7e46254053568292f5ce514f74e3a0e9b2dbfc338b"
stacks = ["io.buildpacks.stacks.bionic", "org.cloudfoundry.stacks.cflinuxfs3"]
uri = "https://repo.spring.io/release/org/cloudfoundry/java-buildpack-auto-reconfiguration/2.7.0.RELEASE/java-buildpack-auto-reconfiguration-2.7.0.RELEASE.jar"
[[bom.auto-reconfiguration.metadata.licenses]]
type = "Apache-2.0"
uri = "https://github.com/cloudfoundry/java-buildpack-auto-reconfiguration/blob/master/LICENSE"
[bom.executable-jar]
version = ""
[bom.executable-jar.metadata]
classpath = ["/workspace"]
main-class = "org.springframework.boot.loader.JarLauncher"
[bom.jvm-application]
version = ""
[bom.maven]
version = ""
[bom.openjdk-jdk]
version = "11.0.3"
[bom.openjdk-jdk.metadata]
name = "OpenJDK JDK"
sha256 = "23cded2b43261016f0f246c85c8948d4a9b7f2d44988f75dad69723a7a526094"
stacks = ["io.buildpacks.stacks.bionic", "org.cloudfoundry.stacks.cflinuxfs3"]
uri = "https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.3%2B7/OpenJDK11U-jdk_x64_linux_hotspot_11.0.3_7.tar.gz"
[[bom.openjdk-jdk.metadata.licenses]]
type = "GPL-2.0 WITH Classpath-exception-2.0"
uri = "https://openjdk.java.net/legal/gplv2+ce.html"
[bom.openjdk-jre]
version = "11.0.3"
[bom.openjdk-jre.metadata]
name = "OpenJDK JRE"
sha256 = "d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4"
stacks = ["io.buildpacks.stacks.bionic", "org.cloudfoundry.stacks.cflinuxfs3"]
uri = "https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.3%2B7/OpenJDK11U-jre_x64_linux_hotspot_11.0.3_7.tar.gz"
[[bom.openjdk-jre.metadata.licenses]]
type = "GPL-2.0 WITH Classpath-exception-2.0"
uri = "https://openjdk.java.net/legal/gplv2+ce.html"
[bom.spring-boot]
version = ""
[bom.spring-boot.metadata]
classes = "BOOT-INF/classes/"
classpath = ["/workspace/BOOT-INF/classes", "etc", "etc"]
lib = "BOOT-INF/lib/"
start-class = "io.buildpacks.example.sample.SampleApplication"
version = "2.1.3.RELEASE"
When exporter runs
Then the BOM is added to a bom key within a JSON structured value assigned to the io.buildpacks.build.metadata label on the generated image
{"bom": {
"auto-reconfiguration": {
"metadata": {
"licenses": [
{
"type": "Apache-2.0",
"uri": "https://github.com/cloudfoundry/java-buildpack-auto-reconfiguration/blob/master/LICENSE"
}
],
"name": "Spring Auto-reconfiguration",
"sha256": "0d524877db7344ec34620f7e46254053568292f5ce514f74e3a0e9b2dbfc338b",
"stacks": [
"io.buildpacks.stacks.bionic",
"org.cloudfoundry.stacks.cflinuxfs3"
],
"uri": "https://repo.spring.io/release/org/cloudfoundry/java-buildpack-auto-reconfiguration/2.7.0.RELEASE/java-buildpack-auto-reconfiguration-2.7.0.RELEASE.jar"
},
"version": "2.7.0"
},
"executable-jar": {
"metadata": {
"classpath": [
"/workspace"
],
"main-class": "org.springframework.boot.loader.JarLauncher"
},
"version": ""
},
"jvm-application": {
"version": ""
},
"maven": {
"version": ""
},
"openjdk-jdk": {
"metadata": {
"licenses": [
{
"type": "GPL-2.0 WITH Classpath-exception-2.0",
"uri": "https://openjdk.java.net/legal/gplv2+ce.html"
}
],
"name": "OpenJDK JDK",
"sha256": "23cded2b43261016f0f246c85c8948d4a9b7f2d44988f75dad69723a7a526094",
"stacks": [
"io.buildpacks.stacks.bionic",
"org.cloudfoundry.stacks.cflinuxfs3"
],
"uri": "https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.3%2B7/OpenJDK11U-jdk_x64_linux_hotspot_11.0.3_7.tar.gz"
},
"version": "11.0.3"
},
"openjdk-jre": {
"metadata": {
"licenses": [
{
"type": "GPL-2.0 WITH Classpath-exception-2.0",
"uri": "https://openjdk.java.net/legal/gplv2+ce.html"
}
],
"name": "OpenJDK JRE",
"sha256": "d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4",
"stacks": [
"io.buildpacks.stacks.bionic",
"org.cloudfoundry.stacks.cflinuxfs3"
],
"uri": "https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.3%2B7/OpenJDK11U-jre_x64_linux_hotspot_11.0.3_7.tar.gz"
},
"version": "11.0.3"
},
"spring-boot": {
"metadata": {
"classes": "BOOT-INF/classes/",
"classpath": [
"/workspace/BOOT-INF/classes",
"etc",
"etc",
],
"lib": "BOOT-INF/lib/",
"start-class": "io.buildpacks.example.sample.SampleApplication",
"version": "2.1.3.RELEASE"
},
"version": ""
}
}}
During the ordering of buildpacks according to the groups, the optional bit is not set correctly. This line [1] takes the optional bit from the group deserialization (the Buildpack data structure is reused both for the buildpack base definition and the group definition).
So, either:
(1) line [1] is incorrect and should be removed (i.e., optional can only be specified on the buildpack itself),
(2) it is allowed to override the buildpack definition in the group definition, in which case we need to distinguish between "false" and "not specified" for optional, or
(3) optional should only be specified on the group.
In all cases, I think the spec needs to clarify this, as I couldn't find it. A sketch of option (2) follows the link below.
[1] https://github.com/buildpack/lifecycle/blob/master/map.go#L54
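A minimal sketch of option (2), using a pointer so that "false" and "not specified" deserialize differently (field names are illustrative, not the lifecycle's actual structs):

// Sketch: distinguish "optional = false" from "optional not specified".
package groups

type GroupBuildpack struct {
	ID       string `toml:"id"`
	Version  string `toml:"version"`
	Optional *bool  `toml:"optional"` // nil means "not specified in the group"
}

// The group overrides the buildpack's own setting only when it is explicit.
func resolveOptional(buildpackDefault bool, group *bool) bool {
	if group != nil {
		return *group
	}
	return buildpackDefault
}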
Currently, if a buildpack has contributed to a staged application but has not added any layers, setting the environment on launch will fail. This is because the lifecycle expects a directory to exist for each buildpack, which isn't true when no layers were contributed.
Some base images (such as alpine and distroless) may not have bash installed, and given the
Error: failed to launch: exec: no such file or directory
error message above, it is hard to locate the root cause without reading the source code. A sketch of a clearer check follows.
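A minimal sketch of a friendlier pre-exec check, assuming the launcher shells out via bash (illustrative, not the launcher's actual code):

// Sketch: fail with a pointed message when the run image lacks a shell.
package launch

import (
	"fmt"
	"os/exec"
)

func checkShell() error {
	if _, err := exec.LookPath("bash"); err != nil {
		return fmt.Errorf("failed to launch: bash not found in the run image "+
			"(minimal bases such as alpine or distroless may not include it): %w", err)
	}
	return nil
}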
In the spec, the bin/build script should receive the relevant plan in arg $3:
https://github.com/buildpack/spec/blob/master/buildpack.md#build
But I'm using lifecycle 0.4.0, and I'm not seeing my plan come from bin/detect into bin/build, neither via STDIN (like before) nor via the $3 argument.
I have a sample buildpack + builder for lifecycle 0.4.0 at https://github.com/starkandwayne/discovery-cnb/tree/lifecycle-0.4
The full output of pack build is below, but I'll highlight the error at the end:
pack build discovery-app --builder starkandwayne/discovery-builder --path fixtures/ruby-sample-app --no-pull
Selected run image cloudfoundry/cnb-run:cflinuxfs3
Using build cache volume pack-cache-7296676022e3.build
Executing lifecycle version 0.4.0
===> DETECTING
[detector] ======== Output: [email protected] ========
[detector] $1 platform: /platform
[detector] /platform/:
[detector] total 12
[detector] drwxr-xr-x 1 root root 4096 Aug 30 00:02 .
[detector] drwxr-xr-x 1 root root 4096 Aug 30 00:02 ..
[detector] drwxr-xr-x 1 root root 4096 Aug 30 00:02 env
[detector]
[detector] /platform/env:
[detector] total 8
[detector] drwxr-xr-x 1 root root 4096 Aug 30 00:02 .
[detector] drwxr-xr-x 1 root root 4096 Aug 30 00:02 ..
[detector]
[detector] $2 plan: /tmp/plan.236935116/plan.toml
[detector] -rwxr-xr-x 1 vcap vcap 53 Aug 30 00:02 /tmp/plan.236935116/plan.toml
[detector] ======== Results ========
[detector] pass: [email protected]
[detector] Resolving plan... (try #1)
[detector] Success! (1)
===> RESTORING
===> ANALYZING
[analyzer] Analyzing image '7fb9bb15e2daa4232ba195d791788138ea7dbe254a980a74a8514a9e9a00cdb1'
[analyzer] Writing metadata for uncached layer 'com.starkandwayne.buildpacks.discovery:discovery-launch-true'
===> BUILDING
[builder] ---> Discovery Buildpack
[builder] $1 layers: /layers/com.starkandwayne.buildpacks.discovery
[builder] /layers/com.starkandwayne.buildpacks.discovery:
[builder] total 12
[builder] drwxr-xr-x 2 vcap vcap 4096 Aug 30 00:02 .
[builder] drwxr-xr-x 3 vcap vcap 4096 Aug 30 00:02 ..
[builder] -rw-r--r-- 1 vcap vcap 42 Aug 30 00:02 discovery-launch-true.toml
[builder]
[builder] $2 platform: /platform
[builder] /platform:
[builder] total 12
[builder] drwxr-xr-x 1 root root 4096 Aug 30 00:02 .
[builder] drwxr-xr-x 1 root root 4096 Aug 30 00:02 ..
[builder] drwxr-xr-x 1 root root 4096 Aug 30 00:02 env
[builder]
[builder] /platform/env:
[builder] total 8
[builder] drwxr-xr-x 1 root root 4096 Aug 30 00:02 .
[builder] drwxr-xr-x 1 root root 4096 Aug 30 00:02 ..
[builder]
[builder] $3 plan: /tmp/plan.136102280/com.starkandwayne.buildpacks.discovery/plan.toml
[builder] -rw-r--r-- 1 vcap vcap 0 Aug 30 00:02 /tmp/plan.136102280/com.starkandwayne.buildpacks.discovery/plan.toml
[builder]
[builder] STDIN:
[builder]
[builder] ---> Make some layers
[builder] ---> create bin/hello
[builder] ---> create tiny webapp with nc
===> EXPORTING
[exporter] Reusing layers from image with id '7fb9bb15e2daa4232ba195d791788138ea7dbe254a980a74a8514a9e9a00cdb1'
[exporter] Reusing layer 'app' with SHA sha256:4485fd00798ca2d1253b5ba46d4596acaf851d9d93de119efa7ce5de84243b06
[exporter] Reusing layer 'config' with SHA sha256:a5ce3242bd3e204d5ad4692625c245e648ac0a9d71c648bd8eaa7803e7349317
[exporter] Reusing layer 'launcher' with SHA sha256:ba90690cffad1f005f27ecc8d3a20bba7eeb7455bd2ec8ed584f14deb3e1a742
[exporter] Reusing layer 'com.starkandwayne.buildpacks.discovery:discovery-launch-true' with SHA sha256:1b1c0bd2e64912ea2731debd41c3bbaf2b918a0cb167b33a2ed4a99d8344aafc
[exporter] *** Images:
[exporter] index.docker.io/library/discovery-app:latest - succeeded
[exporter]
[exporter] *** Image ID: 787591007e664596c4401560f7dc487c0bfa64470376537e7c325bec0bfba418
===> CACHING
Successfully built image discovery-app
During [detector] we create a non-empty plan.toml file:
[detector] $2 plan: /tmp/plan.236935116/plan.toml
[detector] -rwxr-xr-x 1 vcap vcap 53 Aug 30 00:02 /tmp/plan.236935116/plan.toml
Its contents should be:
discovery = { message = "hello", version = "1.2.3" }
But during the [builder] phase we see that the incoming plan.toml is empty:
[builder] $3 plan: /tmp/plan.136102280/com.starkandwayne.buildpacks.discovery/plan.toml
[builder] -rw-r--r-- 1 vcap vcap 0 Aug 30 00:02 /tmp/plan.136102280/com.starkandwayne.buildpacks.discovery/plan.toml
Is there something I need to do, or some specific format for plan.toml, for it to flow from bin/detect into bin/build?
When using the cflinuxfs3 stack and pack, attempts to pack build fail with the following error:
$ pack build node-app:cflinuxfs3 --builder node-builder:cflinuxfs3 --path . --no-pull
2018/12/14 00:24:30 Using user provided builder image 'node-builder:cflinuxfs3'
2018/12/14 00:24:30 Selected run image 'cfbuildpacks/cflinuxfs3-cnb-experimental:run' from stack 'org.cloudfoundry.stacks.cflinuxfs3'
*** DETECTING:
2018/12/14 05:24:33 Trying group of 3...
2018/12/14 05:24:33 Error: failed to write buildpack group: open /workspace/group.toml: permission denied
2018/12/14 05:24:33 ======== Output: Yarn Buildpack ========
no "yarn.lock" found at: /workspace/app/yarn.lock
2018/12/14 05:24:33 ======== Results ========
2018/12/14 05:24:33 Node.js Buildpack: pass
2018/12/14 05:24:33 NPM Buildpack: pass
2018/12/14 05:24:33 Yarn Buildpack: skip
Error: run detect container: failed with status code: 1
Here are the steps to repro:
Run scripts/package.sh for each buildpack. Note the location where each buildpack is packaged.
Create a builder.toml. Change the URIs to point to the packaged paths from the previous step for each buildpack.
$ cat builder.toml
[[buildpacks]]
id = "org.cloudfoundry.buildpacks.nodejs"
uri = "/tmp/nodejs-cnb_a3cbf36acd62c5cae21f3034"
[[buildpacks]]
id = "org.cloudfoundry.buildpacks.npm"
uri = "/tmp/npm-cnb_58a233920e0725d7a56d5d5b"
[[buildpacks]]
id = "org.cloudfoundry.buildpacks.yarn"
uri = "/tmp/yarn-cnb_398bd1f201f5f3a2668198af"
[[groups]]
[[groups.buildpacks]]
id = "org.cloudfoundry.buildpacks.nodejs"
version = "0.0.2"
[[groups.buildpacks]]
id = "org.cloudfoundry.buildpacks.npm"
version = "0.0.3"
[[groups.buildpacks]]
id = "org.cloudfoundry.buildpacks.yarn"
version = "0.0.1"
optional = true
Run pack add-stack org.cloudfoundry.stacks.cflinuxfs3 --run-image=cfbuildpacks/cflinuxfs3-cnb-experimental:run --build-image=cfbuildpacks/cflinuxfs3-cnb-experimental:build. This adds the cflinuxfs3 stack.
Run pack create-builder node-builder:pack --builder-config ~/Downloads/builder.toml and pack create-builder node-builder:cflinuxfs3 --builder-config ~/Downloads/builder.toml --stack org.cloudfoundry.stacks.cflinuxfs3. This creates two builders, one using the default stack of io.buildpacks.stacks.bionic and one using the org.cloudfoundry.stacks.cflinuxfs3 stack.
Create a small Node.js app. It can do anything; you just need something to push. I used this.
app.js:
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
for (var env in process.env) {
res.write(env + '=' + process.env[env] + '\n');
}
res.end();
}).listen(process.env.PORT || 3000);
package.json:
{
"name": "node-test",
"version": "0.0.0",
"description": "ERROR: No README.md file found!",
"main": "app.js",
"scripts": {
"start": "node app.js",
"test": "echo \"Error: no test specified\" && exit 1"
},
"repository": "",
"author": "",
"license": "BSD",
"dependencies": {
"lodash": "^4.17.11"
}
}
Add those two files and run npm install to build package-lock.json.
From the app directory, run pack build node-app:pack --builder node-builder:pack --path . --no-pull. This builds the app using the pack builder that we created. It should succeed.
Also from the app directory, run pack build node-app:cflinuxfs3 --builder node-builder:cflinuxfs3 --path . --no-pull. This builds the app using the cflinuxfs3 builder that we created; it fails with the error above.
Checking file permissions in the node-builder:cflinuxfs3 image, they look OK:
$ docker run -it node-builder:cflinuxfs3 ls -la /
...
drwxr-xr-x 1 vcap vcap 4096 Dec 13 20:18 workspace
This is very similar to node-builder:pack:
$ docker run -it node-builder:pack ls -la /
...
drwxr-xr-x 1 pack pack 4096 Dec 13 15:50 workspace
The cfbuildpacks/cflinuxfs3-cnb-experimental:build and cfbuildpacks/cflinuxfs3-cnb-experimental:run images have the same owner/permissions on /workspace as seen in the node-builder:cflinuxfs3 image above.
For a given layer where launch=true, build=true, and the layer cache is missing or stale:
Expected: analyzer should not restore the metadata
Actual: analyzer restores the metadata
Note: this bug affects lifecycle master but not the v3alpha2 images
Current Behavior:
A missing buildpack is silently omitted from the group.
Desired Behavior:
detector prints an error message and fails.
When doing a pack build with Cloud Foundry's go-cnb, the initial caching step takes 8s (23s with an image cache).
I added some basic instrumentation:
[cacher] layer org.cloudfoundry.go:go
[cacher] layer.path org.cloudfoundry.go:go tarring took 5.6140032s
[cacher] Caching layer 'org.cloudfoundry.go:go' with SHA sha256:0cae182e5311838f53a803700202fa427ea67b2e51d331b5a606d71fbdc5555c
[cacher] add layer org.cloudfoundry.go:go or file took 6.128091s
[cacher] layer org.cloudfoundry.go:f67ab420493b30d0d4f0d36a7526169c93de55ae5cf8ceb73c7647a252fcf0f2
[cacher] layer.path org.cloudfoundry.go:f67ab420493b30d0d4f0d36a7526169c93de55ae5cf8ceb73c7647a252fcf0f2 tarring took 794.4922ms
[cacher] Caching layer 'org.cloudfoundry.go:f67ab420493b30d0d4f0d36a7526169c93de55ae5cf8ceb73c7647a252fcf0f2' with SHA sha256:c150b6fc0fb625411552b47f50d0853a5efc7909998238c19fca83dcdb90f08c
[cacher] add layer org.cloudfoundry.go:f67ab420493b30d0d4f0d36a7526169c93de55ae5cf8ceb73c7647a252fcf0f2 or file took 956.3726ms
[cacher] layer org.cloudfoundry.dep-cnb:packages
[cacher] layer.path org.cloudfoundry.dep-cnb:packages tarring took 1.5993329s
[cacher] Caching layer 'org.cloudfoundry.dep-cnb:packages' with SHA sha256:60d5d917b7988ddc50cffc8979e6af51b3d0cc0b18432eb4ccc5bd663bbb0b67
[cacher] add layer org.cloudfoundry.dep-cnb:packages or file took 1.6405941s
[cacher] caching took 8.7351035s
After rebuilds, when the cache layers can be reused, it still takes a good amount of time (7s), since the SHA of each layer still needs to be calculated.
It is even slower when reusing layers with an image cache (1m):
[cacher] layer org.cloudfoundry.go:f67ab420493b30d0d4f0d36a7526169c93de55ae5cf8ceb73c7647a252fcf0f2
[cacher] layer.path org.cloudfoundry.go:f67ab420493b30d0d4f0d36a7526169c93de55ae5cf8ceb73c7647a252fcf0f2 tarring took 771.2587ms
[cacher] Reusing layer 'org.cloudfoundry.go:f67ab420493b30d0d4f0d36a7526169c93de55ae5cf8ceb73c7647a252fcf0f2' with SHA sha256:c150b6fc0fb625411552b47f50d0853a5efc7909998238c19fca83dcdb90f08c
[cacher] Reusing layer org.cloudfoundry.go:f67ab420493b30d0d4f0d36a7526169c93de55ae5cf8ceb73c7647a252fcf0f2 took 771.3723ms
[cacher] layer org.cloudfoundry.go:go
[cacher] layer.path org.cloudfoundry.go:go tarring took 5.1851332s
[cacher] Reusing layer 'org.cloudfoundry.go:go' with SHA sha256:0cae182e5311838f53a803700202fa427ea67b2e51d331b5a606d71fbdc5555c
[cacher] Reusing layer org.cloudfoundry.go:go took 5.1852167s
[cacher] layer org.cloudfoundry.dep-cnb:packages
[cacher] layer.path org.cloudfoundry.dep-cnb:packages tarring took 1.3650242s
[cacher] Reusing layer 'org.cloudfoundry.dep-cnb:packages' with SHA sha256:60d5d917b7988ddc50cffc8979e6af51b3d0cc0b18432eb4ccc5bd663bbb0b67
[cacher] Reusing layer org.cloudfoundry.dep-cnb:packages took 1.3651047s
[cacher] caching took 7.3266809s
Currently when an application is built for the first time the following warnings are printed:
===> ANALYZING
Reading information from previous image for possible re-use
[analyzer] WARNING: image 'localhost:5000/java-compiled' not found or requires authentication to access
[analyzer] WARNING: image 'localhost:5000/java-compiled' has incompatible 'io.buildpacks.lifecycle.metadata' label
...
===> EXPORTING
[exporter] WARNING: image 'localhost:5000/java-compiled' not found or requires authentication to access
[exporter] WARNING: image 'localhost:5000/java-compiled' has incompatible 'io.buildpacks.lifecycle.metadata' label
...
Neither of these cases is atypical behavior that should be warned about. These messages should be removed in the case of a non-existent previous image, and a warning should be printed only when the image exists.
We have cred helpers. They are very helpful if you are integrating with GCR, ECR, or ACR as private repositories. These tools are invoked by analyze if USE_CRED_HELPERS is true.
Our cred helper code will pick up any existing config.json, run the credential helpers for each of the private registries, then overwrite config.json with their results.
This plays badly with tools like Knative Build in non-GKE/AKS/EKS environments. Knative Build can be configured to build and inject a config.json using secrets provided via Kubernetes. But if the file is completely overwritten, this information is lost and authentication will fail.
It would be more helpful if some effort at merging were attempted instead of silently overwriting; a sketch follows.
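A minimal sketch of such a merge, considering only the auths map of config.json (the struct shape is simplified, and the precedence choice of user entries winning is an assumption):

// Sketch: merge credential-helper results into an existing config.json.
package creds

type dockerConfig struct {
	Auths map[string]map[string]string `json:"auths"`
}

// merge keeps entries already present in the user-provided config (e.g. one
// injected by Knative Build) and only adds registries the helpers resolved.
func merge(existing, fromHelpers dockerConfig) dockerConfig {
	if existing.Auths == nil {
		existing.Auths = map[string]map[string]string{}
	}
	for registry, auth := range fromHelpers.Auths {
		if _, ok := existing.Auths[registry]; !ok {
			existing.Auths[registry] = auth
		}
	}
	return existing
}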
Cross-refs:
When a buildpack creates a directory at /workspace/<buildpack>/<layer> but does not create a <layer>.toml, the directory will be cached by pack, which simply caches whatever is left after the export.
This happens because we iterate over the layers by globbing the *.toml files. Thus, if there is no <layer>.toml, the layer directory is "invisible" to the exporter.
We should probably add a step that checks for these "orphaned" layer dirs and removes them (sketched below). Maybe this should happen as part of a LocalVolumeCacher (because this is probably only a problem with that kind of caching mechanism).
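A minimal sketch of that cleanup step, assuming the layout described above (one <layer>.toml sibling per layer directory):

// Sketch: remove layer dirs that have no corresponding <layer>.toml.
package cache

import (
	"os"
	"path/filepath"
)

func removeOrphanedLayers(buildpackDir string) error {
	entries, err := os.ReadDir(buildpackDir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		tomlPath := filepath.Join(buildpackDir, e.Name()+".toml")
		if _, err := os.Stat(tomlPath); os.IsNotExist(err) {
			// No .toml means the exporter never saw this layer: drop it.
			if err := os.RemoveAll(filepath.Join(buildpackDir, e.Name())); err != nil {
				return err
			}
		}
	}
	return nil
}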
When running pack,
[detector] Trying group of 2...
should read
[detector] Trying group <some number> of 2...
Reproduction
Use pack master (0.4.0) and lifecycle master (0.4.0).
Run pack build some/name -p . --buildpack ...
When pack 0.4.0 creates a builder, it also creates a "latest" symlink for backwards compatibility, causing version resolution in lifecycle to find multiple versions that are exactly the same but treat them as a conflict.
GIVEN I have a previous image without a "io.buildpacks.lifecycle.metadata" label
WHEN I run pack build
THEN I should receive a warning during analyze WARNING: previous image is missing metadata label, skipping analyze
THEN I should receive a warning during export WARNING: previous image is missing metadata label, layers will not be reused
GIVEN I have a previous image with a malformed "io.buildpacks.lifecycle.metadata" label
WHEN I run pack build
THEN I should receive a warning during analyze WARNING: could not read previous image metadata, skipping analyze
THEN I should receive a warning during export WARNING: could not read previous image metadata, layers will not be reused
GIVEN there is no previous image
WHEN I run pack build
THEN the warning during analyze remains unchanged
THEN I should receive a warning during export WARNING: image not found, not reusing layers from previous image
GIVEN I do not have authorization to read the previous image
WHEN I run pack build
THEN the warning during analyze remains unchanged
THEN I should fail with an error during export ERROR: unauthorized "authentication required"
NOTE: The behavior should be the same whether or not the --publish flag is specified.
Add integration for notary