
docker-compose-buildkite-plugin's Introduction

Docker Compose Buildkite Plugin

A Buildkite plugin that lets you build, run and push build steps using Docker Compose.

  • Containers are built, run and linked on demand using Docker Compose
  • Containers are namespaced to each build job, and cleaned up after use
  • Supports pre-building of images, allowing for fast parallel builds across distributed agents
  • Supports pushing tagged images to a repository

Examples

You can learn a lot about how this plugin is used by browsing the documentation examples.

Configuration

Main Commands

You will need to specify at least one of the following options to use this plugin.

build

The name of a service to build and store, allowing subsequent pipeline steps to run faster as they won't need to build the image. Either a single service or multiple services can be provided as an array.

If you do not specify a push option for the same services, the built image(s) will not be available to later steps and may cause them to fail. If there is no run option, the step's command will be ignored.
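
For example, a minimal sketch of a pre-build step, assuming a service named app is defined in your docker-compose.yml:

steps:
  - label: ":docker: Pre-build"
    plugins:
      docker-compose:
        build: app
        push: app  # make the built image available to later steps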

run

The name of the service the command should be run within. If the docker-compose command would usually be docker-compose run app test.sh then the value would be app.
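
For example, a minimal run step, assuming an app service and a test.sh script in your repository:

steps:
  - label: ":hammer: Test"
    command: test.sh
    plugins:
      docker-compose:
        run: app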

push

A list of services to push. You can specify just the service name to push, or the format service:registry:tag to override where the service's image is pushed. The image for the service must have been built in the same step, or built and pushed previously, for it to be available to push.
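
As a sketch, pushing one service to the image location from the compose file and another to an explicit registry and tag (the registry path and tag are placeholders):

steps:
  - label: ":docker: Push"
    plugins:
      docker-compose:
        push:
          - app
          - app:index.docker.io/myorg/myrepo:my-tag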

Other options

None of the following are mandatory.

pull (run only, string or array)

Pull down multiple pre-built images. By default only the service that is being run will be pulled down, but this allows multiple images to be specified to handle prebuilt dependent images. Note that pulling will be skipped if the skip-pull option is activated.
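
For instance, a run step that also pulls a pre-built dependency (the app and db service names are illustrative):

steps:
  - label: ":hammer: Integration tests"
    command: test.sh
    plugins:
      docker-compose:
        run: app
        pull:
          - app
          - db  # pre-built dependency that would otherwise be rebuilt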

run-image (run only, string)

Set the service image to pull during a run. This can be useful if the image was created outside of the plugin.

collapse-logs (boolean)

Whether to collapse or expand the log group that is created for the output of the main commands (run, build and push). When this setting is true, the output is collected into a --- group; when false, the output is collected into a +++ group. Setting this to true can be useful to de-emphasize plugin output if your command creates its own +++ group.

For more information see Managing log output.

Default: false

config

The file name of the Docker Compose configuration file to use. Can also be a list of filenames. If $COMPOSE_FILE is set, it will be used if config is not specified.

Default: docker-compose.yml
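
For example, layering a CI-specific override on top of the base file (the file names are illustrative):

steps:
  - label: ":hammer: Test"
    command: test.sh
    plugins:
      docker-compose:
        run: app
        config:
          - docker-compose.yml
          - docker-compose.ci.yml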

build-alias (push only, string or array)

Other docker-compose services that should be aliased to the service that was built. This allows a single pre-built image to be used for several services that share one definition.

Important: this only works when building a single service, an error will be generated otherwise.
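
A sketch, assuming app-test and app-lint are other services in the compose file that share app's image definition:

steps:
  - label: ":docker: Pre-build"
    plugins:
      docker-compose:
        build: app
        push: app
        build-alias:
          - app-test
          - app-lint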

args (build only, string or array)

A list of KEY=VALUE that are passed through as build arguments when the image is being built.
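
For example (the argument name and value are placeholders that your Dockerfile would need to declare with ARG):

steps:
  - label: ":docker: Build"
    plugins:
      docker-compose:
        build: app
        args:
          - MY_CUSTOM_ARG=panda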

env or environment (run only, string or array)

A list of either KEY or KEY=VALUE that are passed through as environment variables to the container.
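
For example, passing one variable through from the job environment and setting another explicitly (the names are illustrative):

steps:
  - label: ":hammer: Test"
    command: test.sh
    plugins:
      docker-compose:
        run: app
        env:
          - BUILDKITE_BUILD_NUMBER  # value taken from the job environment
          - MY_CUSTOM_ENV=llamas    # value set explicitly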

env-propagation-list (run only)

If you set this to VALUE, and VALUE is an environment variable containing a space-separated list of environment variables such as A B C D, then A, B, C, and D will all be propagated to the container. This is helpful when you've set up an environment hook to export secrets as environment variables, and you'd also like to programmatically ensure that secrets get propagated to containers, instead of listing them all out.
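
As a sketch, assuming an environment hook somewhere runs export SECRET_VARS="SECRET_A SECRET_B" (the hook and variable names are hypothetical), the following would propagate SECRET_A and SECRET_B into the container:

steps:
  - label: ":hammer: Test"
    command: test.sh
    plugins:
      docker-compose:
        run: app
        env-propagation-list: SECRET_VARS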

propagate-environment (run only, boolean)

Whether or not to automatically propagate all pipeline environment variables into the run container, avoiding the need to specify each of them with environment.

Important: only pipeline environment variables will be propagated (what you see in the Buildkite UI, those listed in $BUILDKITE_ENV_FILE). This does not include variables exported in preceding environment hooks. If you wish for those to be propagated, you will need to list them specifically or use env-propagation-list.

propagate-aws-auth-tokens (run only, boolean)

Whether or not to automatically propagate AWS authentication environment variables into the docker container, avoiding the need to specify them with environment. This is useful, for example, if you are using an assume-role plugin or you want to pass the role of an agent running in ECS or EKS to the docker container.

Will propagate AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, AWS_REGION, AWS_DEFAULT_REGION, AWS_STS_REGIONAL_ENDPOINTS, AWS_WEB_IDENTITY_TOKEN_FILE, AWS_ROLE_ARN, AWS_CONTAINER_CREDENTIALS_FULL_URI, AWS_CONTAINER_CREDENTIALS_RELATIVE_URI, and AWS_CONTAINER_AUTHORIZATION_TOKEN, only if they are set already.

When AWS_WEB_IDENTITY_TOKEN_FILE is specified, the plugin will also mount the token file automatically and make it usable within the container.

command (run only, array)

Sets the command for the Docker image, and defaults the shell option to false. Useful if the Docker image has an entrypoint, or doesn't contain a shell.

This option can't be used if your step already has a top-level, non-plugin command option present.

Examples: [ "/bin/mycommand", "-c", "test" ], ["arg1", "arg2"]
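
As a sketch, running an image whose entrypoint takes its arguments directly (the command is a placeholder); note that there is no top-level command on the step:

steps:
  - label: ":hammer: Run"
    plugins:
      docker-compose:
        run: app
        command: ["/bin/mycommand", "-c", "test"]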

shell (run only, array or boolean)

Set the shell to use for the command. Set it to false to pass the command directly to the docker-compose run command. The default is ["/bin/sh", "-e", "-c"] unless you have provided a command.

Example: [ "powershell", "-Command" ]

skip-checkout (boolean)

Whether to skip the repository checkout phase. This is useful for steps that use a pre-built image and will fail if there is no pre-built image.

Important: as the code repository will not be available in the step, you need to ensure that any files used (like the docker compose files or scripts to be executed) are present in some other way (like using artifacts or pre-baked into the images used).

skip-pull (build and run only, boolean)

Completely avoid running any pull command. Images being used will need to be already present on the machine or have been built in the same step. This can be useful to avoid hitting rate limits when you can be sure the operation is unnecessary. Note that it is possible other commands run in the plugin's lifecycle will trigger a pull of necessary images.

workdir (run only)

Specify the container working directory via docker-compose run --workdir. This option is also used by mount-checkout if it doesn't specify where to mount the checkout in the container.

Example: /app

user (run only)

Run as the specified username or UID via docker-compose run --user.

propagate-uid-gid (run only, boolean)

Whether to match the user ID and group ID for the container user to the user ID and group ID for the host user. It is similar to specifying user: 1000:1000, except it avoids hardcoding a particular user/group ID.

Using this option ensures that any files created on shared mounts from within the container will be accessible to the host user. It is otherwise common to accidentally create root-owned files that Buildkite will be unable to remove, since containers by default run as the root user.

mount-ssh-agent (run only, boolean or string)

Whether to mount the ssh-agent socket (at /ssh-agent) from the host agent machine into the container. Instead of just true or false, you can specify the absolute path of the home directory of the user the container runs as; the agent's .ssh/known_hosts will be mounted there (by default, /root).

Default: false
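
For example, running as a non-root user whose home directory is /home/node (an assumed path), so known_hosts is mounted there:

steps:
  - label: ":hammer: Install dependencies"
    command: yarn install
    plugins:
      docker-compose:
        run: app
        mount-ssh-agent: /home/node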

mount-buildkite-agent (run only, boolean)

Whether to automatically mount the buildkite-agent binary and associated environment variables from the host agent machine into the container.

Default: false

mount-checkout (run only, boolean or string)

The absolute path at which to mount the current working directory, which contains your checked-out codebase.

If set to true it will mount onto /workdir, unless workdir is set, in which case that will be used.

Default: false
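
For instance, mounting the checkout at a custom path (the path is illustrative):

steps:
  - label: ":hammer: Test"
    command: test.sh
    plugins:
      docker-compose:
        run: app
        mount-checkout: /app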

buildkit-inline-cache (optional, build-only, boolean)

Whether to pass the BUILDKIT_INLINE_CACHE=1 build arg when building an image. Can be safely used in combination with args.

Default: false

pull-retries (run only, integer)

The number of times to retry a failed docker pull. Defaults to 0.

push-retries (push only, integer)

The number of times to retry a failed docker push. Defaults to 0.

cache-from (build only, string or array)

A list of images to attempt pulling before building in the format service:CACHE-SPEC to allow for layer re-use. Will be ignored if no-cache is turned on.

They will be mapped directly to cache-from elements in the build according to the spec, so any format valid there should be allowed.
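
A sketch that seeds the layer cache from a previously pushed image (the registry path and tag are placeholders):

steps:
  - label: ":docker: Build"
    plugins:
      docker-compose:
        build: app
        push: app
        cache-from:
          - app:index.docker.io/myorg/myrepo:latest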

target (build only)

Allow for intermediate builds, as if building with docker's --target VALUE option.

Note that there is a single build command run for all services so the target value will apply to all of them.

volumes (run only, string or array)

A list of volumes to mount into the container. If a matching volume exists in the Docker Compose config file, this option will override that definition.

Additionally, volumes may be specified via the agent environment variable BUILDKITE_DOCKER_DEFAULT_VOLUMES, a ; (semicolon) delimited list of mounts in the -v syntax. (Ex. buildkite:/buildkite;./app:/app).
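
For example, mounting a host directory into the container (the paths are illustrative):

steps:
  - label: ":hammer: Test"
    command: test.sh
    plugins:
      docker-compose:
        run: app
        volumes:
          - "./dist:/app/dist"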

expand-volume-vars (run only, boolean, unsafe)

When set to true, it will activate interpolation of variables in the elements of the volumes configuration array. When turned off (the default), attempting to use variables will fail as the literal $VARIABLE_NAME string will be passed to the -v option.

⚠️ Important: this is considered an unsafe option, as the most compatible way to achieve this is to run the strings through eval, which could lead to arbitrary code execution or information leaking if you don't have complete control of the pipeline.

Note that rules regarding environment variable interpolation apply here. That means that $VARIABLE_NAME is resolved at pipeline upload time, whereas $$VARIABLE_NAME will be at run time. All things being equal, you likely want to use $$VARIABLE_NAME on the variables mentioned in this option.

graceful-shutdown (run only, boolean)

Gracefully shuts down all containers via docker-compose stop.

The default is false.

leave-volumes (run only, boolean)

Prevent the removal of volumes after the command has been run.

The default is false.

no-cache (build and run only, boolean)

Build with --no-cache, causing Docker Compose to not use any caches when building the image. This will also avoid creating an override with any cache-from entries.

The default is false.

build-parallel (build only, boolean)

Build with --parallel, causing Docker Compose to run builds in parallel. Requires docker-compose 1.23+.

The default is false.

tty (run only, boolean)

If set to true, allocates a TTY. This is useful in situations where a TTY is required.

The default is false.

dependencies (run only, boolean)

If set to false, runs with --no-deps and doesn't start linked services.

The default is true.

pre-run-dependencies (run only, boolean)

If dependencies are activated (which is the default), you can skip starting them up before the main container by setting this option to false. This is useful if you want compose to take care of that on its own at the expense of messier output in the run step.

wait (run only, boolean)

Whether to wait for dependencies to be up (and healthy if possible) when starting them up. It translates to using --wait in the docker compose up command.

Defaults to false.

ansi (run only, boolean)

If set to false, disables the ANSI output from containers.

The default is true.

use-aliases (run only, boolean)

If set to true, docker compose will use the service's network aliases in the network(s) the container connects to.

The default is false.

verbose (boolean)

Sets docker-compose to run with --verbose.

The default is false.

quiet-pull (run only, boolean)

Start up dependencies with --quiet-pull to prevent even more logs during that portion of the execution.

The default is false.

rm (run only, boolean)

If set to true, docker compose will remove the primary container after run. Equivalent to --rm in docker-compose.

The default is true.

run-labels (run only, boolean)

If set to true, adds useful Docker labels to the primary container. See Container Labels for more info.

The default is true.

build-labels (build only, string or array)

A list of KEY=VALUE that are passed through as service labels when image is being built. These will be merged with any service labels defined in the compose file.

compatibility (boolean)

If set to true, all docker compose commands will run in compatibility mode. Equivalent to --compatibility in docker compose.

The default is false.

Note that the effect of this option changes depending on your docker compose CLI version:

  • with CLI version 1, it translates deploy keys in v3 files to their non-swarm equivalents (such as resource limits)
  • with CLI version 2, it reverts the word separator in container names from a dash (-) to an underscore (_), matching v1 behaviour

entrypoint (run only)

Sets the --entrypoint argument when running docker compose.

service-ports (run only, boolean)

If set to true, docker compose will run with the service ports enabled and mapped to the host. Equivalent to --service-ports in docker-compose.

The default is false.

upload-container-logs (run only)

Select when to upload container logs.

  • on-error Upload logs for all containers when an error occurs
  • always Always upload logs for all containers
  • never Never upload logs for any container

The default is on-error.

cli-version (string or integer)

If set to 1, the plugin will use docker-compose (which is deprecated and unsupported) to execute commands; otherwise it will default to version 2, using docker compose instead.

buildkit (build only, boolean)

Assuming you have a compatible docker installation and configuration on the agent, activating this option sets up the environment for the docker compose build call to use BuildKit. Note that this should only be necessary if you are using cli-version 1 (version 2 already uses BuildKit by default).

You may want to also add BUILDKIT_INLINE_CACHE=1 to your build arguments (args option in this plugin), but know that there are known issues with it.

ssh (build only, boolean or string)

Adds the --ssh option to the build command with the passed value (if true, it will use the default). Note that it assumes you have a compatible docker installation and configuration on the agent (meaning you are using BuildKit and it is correctly set up).

secrets (build only, array of strings)

All elements in this array will be passed literally to the build command as parameters of the --secret option. Note that you must have BuildKit enabled for this option to have any effect, and special RUN stanzas in your Dockerfile to actually make use of them.
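
As a sketch, assuming your Dockerfile contains a matching RUN --mount=type=secret,id=npmrc ... instruction (the secret id and source file are placeholders):

steps:
  - label: ":docker: Build"
    plugins:
      docker-compose:
        build: app
        secrets:
          - id=npmrc,src=.npmrc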

Developing

To run the tests:

docker-compose run --rm tests bats tests tests/v1

License

MIT (see LICENSE)

docker-compose-buildkite-plugin's Issues

Show a message on how to re-run a step locally on failure

If the image has been pre-built and pushed, it'd be nice to show a message in a post-command hook for how to re-run the step locally.

For example:

docker pull "${image}" && \
echo -e "services:\n\t${service}:\n\t\timage: \"${image}\"" \
  > "/tmp/buildkite-job-${job_id}.docker-compose.yml" && \
docker-compose run \
  -it \
  --rm \
  -f "/tmp/${job_id}.docker-compose.yml" \
  --name "buildkite-job-${job_id}" \
  "${service}" "${command}";
rm "/tmp/buildkite-job-${job_id}.docker-compose.yml"

Force updated images to be pulled

Is there any way to force the plugin to always pull a fresh version of a pre-built image? I would normally do a docker-compose pull locally, but I don't have the option of doing that here.

Some tools to smartly wait for compose services

This is a feature request... just to be clear, I'm not expecting this behavior, but it would be nice since it seems like it's needed by just about everyone.

I had to write a bunch of code to ensure that linked services such as mysql were actually up before running code; it might make sense to automate this somehow.

I've been using https://github.com/betalo-sweden/await as a quick helper for this; it seems to be a fairly small protocol-aware waiter.

Prebuilding for multiple services using the same image

I have multiple services in my docker-compose.yml with the same image: property. Only one of them has a build: property:

    services:
      app:
        image: my_company/app
        build: .
        ...
      test:
        image: my_company/app

If I just pre-build app then when I try to run the test service, it rebuilds it in the run step.

If I pre-build app and test as described here then it seems to fail to push test to the registry (ECR) and the run step fails because it can't pull the test image.

This is the log output I get for the build step:

    $ docker-compose -f docker-compose.yml -p buildkite12345678908243a0af4110f50924cb4e -f docker-compose.buildkite-488-override.yml push app test
    Pushing app (1234567890.dkr.ecr.ap-southeast-2.amazonaws.com/ape-ci/my_companyweb:my_companyweb-app-build-488)...
    The push refers to a repository [1234567890.dkr.ecr.ap-southeast-2.amazonaws.com/ape-ci/my_companyweb]
    0662cafed00d: Pushed
    f9afaae22bee: Pushed
    27d6313d90dc: Layer already exists
    ...
    bee4f8ad5631: Layer already exists
    cafed00d0415: Layer already exists
    deadbeefbcf9: Layer already exists
    my_companyweb-app-build-488: digest: sha256:8c61d757b9c74eca29d9aa74576fafd6ef8581f5e079af3e00f7a24423832138 size: 4098
    $ buildkite-agent meta-data set docker-compose-plugin-built-image-tag-app 1234567890.dkr.ecr.ap-southeast-2.amazonaws.com/ape-ci/my_companyweb:my_companyweb-app-build-488
    $ buildkite-agent meta-data set docker-compose-plugin-built-image-tag-test 1234567890.dkr.ecr.ap-southeast-2.amazonaws.com/ape-ci/my_companyweb:my_companyweb-test-build-488

Which to me looks like it's asking for test and app to be pushed but it's only pushing app.

Any idea how I can avoid having my run step rebuild my image?

Thanks

Plugin fails to acquire lock, then is unable to open it

Here's the raw log:

~~~ Setting up plugins
# Could not aquire lock on "/var/lib/buildkite-agent/plugins/github-com-buildkite-plugins-docker-compose-buildkite-plugin.lock" (Locked by other process)
# Trying again in 1 second...
# Could not aquire lock on "/var/lib/buildkite-agent/plugins/github-com-buildkite-plugins-docker-compose-buildkite-plugin.lock" (Locked by other process)
# Trying again in 1 second...
🚨 Buildkite Error: open /var/lib/buildkite-agent/plugins/github-com-buildkite-plugins-docker-compose-buildkite-plugin.lock: no such file or directory
^^^ +++

And here are the details of the agent:

aws:ami-id=ami-f9d809ef aws:instance-id=i-08e499a4ba6771846 aws:instance-type=c4.xlarge buildkite-aws-stack=v2.0.0-rc5-2-g2f723b2 docker=1.13.1 queue=default stack=buildkite-ci-agents

I believe a bunch of jobs started on this node at the same time. I also noticed that one of the other jobs on the same node failed with this error:

declare -A breaks plugin on OSX

Since cache-from support was added in #82 you can no longer use this plugin on OSX’s default Bash, which is version 3.2, because of the addition of declare -A.

Perhaps we could remove the use of declare -A?

pre-command and post-command runs in docker container

Currently if I create a pre-command or post-command hook, it runs outside of my docker container. Given that my test environment sits within docker, I would rather the hooks run, by default, within the context of the same docker container my tests run in. This would give me the chance to run setup tasks like installing assets, and post-command tasks like caching assets for the next build.

thx.

Version 1.8.3+ strange error related to docker-compose versions

v1.8.3 and up of the plugin throw a strange error relating to Docker Compose versions.
There is no issue with v1.8.2 of this plugin.

Agent Version: v3.0-beta.39
Docker version: 17.12.1-ce
Docker Compose version: 1.19.0

Error Message:

/etc/buildkite-agent/plugins/github-com-buildkite-plugins-docker-compose-buildkite-plugin-v1-8-4/hooks/../lib/shared.bash: line 70: ${#config_files[@]:-}: bad substitution
/etc/buildkite-agent/plugins/github-com-buildkite-plugins-docker-compose-buildkite-plugin-v1-8-4/hooks/../lib/shared.bash: line 84: config[0]: unbound variable
The 'build' option can only be used with Compose file versions 2.0 and above.
For more information on Docker Compose configuration file versions, see:
https://docs.docker.com/compose/compose-file/compose-versioning/#versioning

pipeline.yml

steps:
  - name: ":docker: Build"
    agents:
      queue: docker-builder
    plugins:
      docker-compose#v1.8.4:
        build: app
        config:
          - docker-compose.ci.yml

docker-compose.ci.yml

version: '3.4'

services:
  app:
    build:
      context: .

Add support for multiple commands?

It looks like multiple commands aren't supported:

steps:
  - label: ':hammer: Test'
    command:
      - 'echo COMMAND_1'
      - 'echo COMMAND_2'
    plugins:
      docker-compose:
        run: tests
        config: docker-compose.buildkite.yml

I see that it tries to run: docker-compose -f docker-compose.buildkite.yml -p buildkite82973f2986fe406c9a67a03338bca53e run tests echo COMMAND_1 echo COMMAND_2

And my guess is that https://github.com/buildkite-plugins/docker-compose-buildkite-plugin/blob/master/hooks/commands/run.sh#L43 passes in $BUILDKITE_COMMAND as an array.

Would it be possible to have that line in run.sh iterate over each item of $BUILDKITE_COMMAND and run them sequentially?

As a workaround, I'm including the repeated step inside the first command (a shell script), but it'd be nice to specify multiple commands to re-use e.g. a script that processes xunit.xml output files.

Error with multiple agents in one instance

I got the following error with multiple agents in one instance.

agent is v3.0-beta.32.

~~~ Setting up plugins (5m 0s)
# Could not aquire lock on "/var/lib/buildkite-agent/plugins/github-com-buildkite-plugins-docker-compose-buildkite-plugin-v1-6-0.lock" (Locked by other process)
# Trying again in 1s...
# Skipping artifact upload, no checkout
🚨 Error: context deadline exceeded

Support --user (custom value or buildkite-agent)

Hey there!

I've been wrestling with the best way to handle mounted volumes from the host. When Docker runs as root (id=0), the files it writes to the mounted volume are owned by root. Buildkite fails with a permissions error on any remaining files when it goes to git clean the repo on the next job.

On buildkite-agent@2, I was getting around this with a pre-command hook that ran export UID and then a user: $UID property in my docker-compose.yml file. I noticed that with the plugin, this variable is not exposed to the plugin script and compose treats it as missing.

I'd love to be able to either specify a user through the plugin or ideally instruct the plugin to pass the user ID who executed it.

I haven't thought about an API for this, just wanted to get my thoughts down and see if you'd considered this.

Thanks!

Feature request: implement static analysis against docker-compose

I saw an engineer today who was new to Buildkite have difficulty hooking up all the pieces when creating a pipeline. We can make the experience better for both new and experienced engineers by adding some form of static analysis.

I think there's a lot that we can do here, but as a first step, we could perform validation against the docker-compose file. At a minimum it would be nice to ensure that the service the user is attempting to build or run in a build step actually exists within the docker-compose file.

STR

Create a pipeline that references a service not defined in docker-compose.yml (or whichever docker-compose file is leveraged). Example:

steps:
  - name: ':docker: :package:'
    plugins:
      'docker-compose#v1.8.4':
        build: some-invalid-docker-service
        image-repository: my.registry/place
  - wait
  - command: ./bin/test
    plugins:
      'docker-compose#v1.8.4':
        run: some-invalid-docker-service

And an example docker-compose.yml file:

version: '2'
services:
  my-default-service:
    build: .

Current Behavior

You see a big error like 🚨 Error: The plugin github.com/buildkite-plugins/docker-compose-buildkite-plugin#v1.8.4 command hook exited with an error: Error running /bin/bash -c /tmp/buildkite-agent-bootstrap-hook-runner-...: exit status 1

Expected Behavior

You're given an error message which will help you understand and track down the issue better within the default expanded section of the log output.

Docker-compose service with image and build breaks run

docker-compose recently introduced the ability to declare an image and a build:

https://docs.docker.com/compose/compose-file/#/image

If the image does not exist, Compose attempts to pull it, unless you have also specified build, in which case it builds it using the specified options and tags it with the specified tag.

So for an example like this:

  accounts: 
    image: my-image/llamas:latest
    build:
      context: .
      dockerfile: ops/Dockerfile
    ports:
      - "9005:9005"
    depends_on:
      - mysql

This doesn't work with https://github.com/buildkite-plugins/docker-compose-buildkite-plugin/blob/master/hooks/commands/build.sh#L34.

Multiple plugins or one?

We can roll this all into the one plugin…

steps:
  - agents:
      queue: docker-builder
    plugins:
      toolmantim/docker-compose:
        build-and-push:
          build-container: app

  - waiter

  - command: test.sh
    parallelism: 25
    agents:
      queue: docker-compose
    plugins:
      toolmantim/docker-compose:
        command-container: app

or split it out into two:

steps:
  - agents:
      queue: docker-builder
    plugins:
      toolmantim/docker-compose-builder:
        build-container: app

  - waiter

  - command: test.sh
    parallelism: 25
    agents:
      queue: docker-compose
    plugins:
      toolmantim/docker-compose:
        command-container: app

There'd have to be shared knowledge of what build meta-data key to look for to find the name of the built image… and both agents would need to ensure they have the correct docker credentials. But it'd split up the code, simplify the config, and force us to make the image name that docker compose uses a public API (in case you wanted to pre-build the images some other way).

Build is not able to actually build now?

So I'm trying to use this plugin to set up a build-then-run chain, and it seems like the recent changes that added the config override on build basically make that step entirely ineffective.

I don't really understand why it is the way it is, but it looks like it overrides the config, setting an image name, making it impossible to ever actually build it.

The related build step is:

steps:
  - name: ":docker: :package:"
    plugins:
      docker-compose:
        build: app

BUILDKITE_PROJECT_SLUG has invalid character for Docker Image Tag

https://github.com/buildkite-plugins/docker-compose-buildkite-plugin/blob/master/hooks/commands/build.sh#L6

A tag name may contain lowercase and uppercase characters, digits, underscores, periods and dashes. A tag name may not start with a period or a dash and may contain a maximum of 128 characters.
Source: https://docs.docker.com/engine/reference/commandline/tag/

BUILDKITE_PROJECT_SLUG = [BUILDKITE_ORGANIZATION_SLUG, BUILDKITE_PIPELINE_SLUG].join('/')

Need to amend to use something like:

BUILDKITE_PLUGIN_DOCKER_COMPOSE_IMAGE_NAME="${BUILDKITE_PLUGIN_DOCKER_COMPOSE_IMAGE_NAME:-${BUILDKITE_ORGANIZATION_SLUG}-${BUILDKITE_PIPELINE_SLUG}-${BUILDKITE_PLUGIN_DOCKER_COMPOSE_BUILD}-build-${BUILDKITE_BUILD_NUMBER}}"

Add support for configuring retries for pull/push operations

We're currently seeing some flakiness with docker images when both uploading and downloading. It would be great if there was a configuration option to enable exponential backoff retrying.

I'm not sure if you have granular control over a single option, or if all of the docker steps would have to be retried. E.g., I'm pulling a lot of images during a build step and one fails. Would it be possible to retry just that package, or would we need to pull all images again?

bc breaks support for using this with buildkite/agent images

Since #87 you can't use this plugin with our default buildkite/agent Docker images because they don't include the command-line calc tool bc.

The Docker images were designed to work with this plugin though, hence why they include the docker and docker-compose clients.

Rather than adding it to the buildkite/agent image, perhaps we can do without using bc?

Images should be deleted after building

Since the build always pushes up to a registry, and there isn't a way to execute a command after the build, I think images should be cleaned up after building, since steps are inherently stateless.

Not cleaning up on cancel

It seems that when a job is cancelled, either via manual cancel or through timeout, dependent services (like db, redis, etc.), and sometimes the main test process if it's stuck, just get left behind on the test hosts (this understandably severely impacts memory availability and such).

Using commands with this plugin doesn't appear to work

Using this plugin and this step:

  - label: build
    commands:
      - python setup.py sdist
      - buildkite-agent artifact upload dist/*.tar.gz
    plugins:
      docker-compose#v1.5.2:
        run: app
        config: docker-compose.buildkite.yml

I expected either two calls to docker-compose or a single call using bash eg:

docker-compose -f docker-compose.buildkite.yml run app bash -c "python setup.py sdist && buildkite-agent artifact upload dist/*.tar.gz"

Instead the command reported by buildkite is:

docker-compose -f docker-compose.buildkite.yml -p buildkited85f2a76afb84b34ba66557c6bd4dc25 run app python setup.py sdist buildkite-agent artifact upload dist/\*.tar.gz

Where the commands are concatenated without any handling.

Malformed image uri generated in try_image_restore_from_docker_repository()

The latest update seems to cause the docker pull command to look like this:

docker pull $'\E[90m$\E[0m buildkite-agent meta-data get docker-compose-plugin-built-image-tag-web\n123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/my-ecr-repo:my-image'

This seems to be the cause:

if image=$(plugin_get_build_image_metadata "$run_service_name") 2>/dev/null; then

plugin_get_build_image_metadata uses plugin_prompt_and_must_run which returns \033[90m$\033[0m along with the actual command

Support for configurable Compose file version

I'm attempting to use this plugin to run a docker-compose pipeline. The application was configured to use docker-compose prior to the experiment with Buildkite. In order to support environment variable interpolation with defaults in the compose file, the application uses the 2.1 format. Unfortunately, it looks like this causes an error similar to the following:

ERROR: Version mismatch: file ./docker-compose.yml specifies version 2.1 but extension file ./docker-compose.buildkite-web-override.yml uses version 2.0

One potential solution is to support an environment variable that specifies the docker-compose version (which itself can default to 2.0 😄 ). I imagine this would make it easier for existing applications with significant compose configurations to get onboard.

Another solution is to add an environment variable that disables the override altogether. This solution might be trickier, since (judging from build.sh) there would be no guarantee that the image is correctly tagged for pushing.

Automatically propagate environment variables into the container

When setting pipeline environment variables it commonly catches me and other people in our company out that these variables are not shared with the container. We can specify them manually in the docker-compose file, but it would be nice if this was done automatically.

Logs on failure do not have any content

From PR #26, the logs are getting tailed and then uploaded after the command/test fails. However, the logs that are uploaded do not have any content (zero bytes).

This is occurring on a AWS Elastic Environment.

This could be some kind of configuration issue. Not sure what else to add here; let me know if you need more info.

Log generation:

EOF
$ docker logs --timestamps --tail 500 /buildkited48fb259943147958a42a1b6a87f8dd8_dockerpostgres_1
EOF
EOF
EOF
$ docker logs --timestamps --tail 500 /buildkited48fb259943147958a42a1b6a87f8dd8_redis_1
EOF
EOF

Upload:

2017-10-25 01:02:56 INFO   Found 2 files that match "docker-compose-logs/*.log"
2017-10-25 01:02:56 INFO   Creating (0-2)/2 artifacts
2017-10-25 01:02:56 INFO   Uploading artifact e3cd97fc-dc34-4ebc-b583-71e9dd1fac5c docker-compose-logs/buildkited48fb259943147958a42a1b6a87f8dd8_redis_1.log (0 bytes)
2017-10-25 01:02:56 INFO   Uploading artifact c6c4738d-7230-4a34-a2da-cb02b517c6ed docker-compose-logs/buildkited48fb259943147958a42a1b6a87f8dd8_dockerpostgres_1.log (0 bytes)

Incorrect parsing of config file strings in 1.8.3 release

After the 1.8.3 release, attempts to run the docker-compose-buildkite plugin with multiple config strings have been raising this error (formatted for ease of reading):

$ docker-compose -f $'docker-compose.yml\ndocker-compose.test.yml' 
  -p buildkiteaaabbcc123123123123 -f docker-compose.buildkite-1430-override.yml 
  run --name buildkiteaaabbcc123123123123_api_build_1430 api bundle exec 
  rake knapsack:rspec

  ERROR: .IOError: [Errno 2] No such file or directory: 
  u'./docker-compose.yml\ndocker-compose.test.yml'

The pipeline step throwing this error looks like this;

  - name: ":rspec: %n"
    command: bundle exec rake knapsack:rspec
    parallelism: 8
    plugins:
      docker-compose:
        run: api
        config:
          - docker-compose.yml
          - docker-compose.test.yml

Have briefly discussed this with some work colleagues and we believe it is a regression introduced in this commit.

How do I push built images?

My steps roughly end up as build, test and finally push. Is there a way to do this currently with the docker-compose-plugin?

Download artifacts support

It is not possible to download artifacts when we use this plugin. Consider adding an option to pull down artifacts.

Option for outputting service logs during execution

The problem:
Services which are spun up with depends_on will not display log output during execution.

We have a docker-compose service list as follows:

version: '2.1'
services:
  my-server:
    build: .
    command: yarn start
    expose:
      - 8080
    ports:
      - "8080:8080"
    network_mode: "host"
    healthcheck:
      test: ["CMD-SHELL", "curl -H \"Accept: text/html\" -f http://localhost:8080 || exit 1"]
      interval: 5s
      timeout: 10s
      retries: 5

  server-healthy:
    build: .
    network_mode: "host"
    depends_on:
      my-server:
        condition: service_healthy

  run-service:
    build: .
    network_mode: "host"
    depends_on:
      - server-healthy
    command: ./test.sh

It seems that this is either not documented, or it's not supported, but right now it seems impossible to get the log output of my-server with this plugin. (Docker-compose eats depends_on service log output by default).

Solution for local development:
Developers can manually view output in another terminal tab, via a command like: docker-compose logs -f my-service

Possible solutions for this plugin:

  • One possible solution would be a new configuration option allowing developers to tail docker-compose output of selected services. An example of the pipeline implementation:
  - name: 'run tests'
    command: ./test.sh
    plugins:
      'docker-compose#v1.8.0':
        run: run-service
        logs:
          - my-server
          - my-other-service
  • It might also be nice if we could have multiple UI tabs for output of different services in the Buildkite frontend, but that would likely require changes in the frontend product, and is not nearly as important as being able to see the logs in the first place.
  • If we can't have background tabs during execution, perhaps running docker-compose logs -f <services> & before the docker-compose run script would be sufficient.

Containers still running after canceled build.

Hey guys,

I am running Buildkite in Kubernetes using docker-in-docker to run my tests, and the docker compose plugin keeps the containers running on a cancelled build since they don't stop that quickly.

I made my own script that seems to have fixed the issue. I do the following: docker-compose run --rm app run-specs

That ensures the linked containers die even when a build is canceled.

Would be good to have this added, but I am not sure how it fits in with your strategy.

Pull down prebuilt images for dependent services

Currently only the service that is being run will be checked for prebuilt images. This means that in a situation where you have a system with dependent images, even if you prebuilt them, they will be rebuilt for run.

For example, if you prebuild nginx and php, and then run nginx (which depends on php), nginx will get pulled down, but php will be rebuilt.

Related to #86.

Change output to closed unless failed

Using +++ here means builds will have the docker build logs expanded by default. This is usually undesirable as you need to scroll through pages of output before you can see your tests.

Wasn't there some magic for "collapse unless failed", maybe ~~~?

Ability to pull multiple images in run step

Hi,

I'm attempting to run tests of our whole suite (our clients and servers are in separate repos).

To do this, I've created a parent repo with the other repos as submodules. I've created a pipeline and docker-compose files in the parent repo so that Buildkite can build and run the child project images.

I would like to pre-build the images but find that when it hits the run step, it only pulls the image that I'm running, and then proceeds to rebuild the other images specified in docker-compose.yml.

I had a quick look through the plugin code and found this comment:

We only look for a prebuilt image for the service being run. This means that
any other services that are dependencies that need to be built will be built
on-demand in this step, even if they were prebuilt in an earlier step.

Is this something that's likely to change?

Thanks
Ritchie

Environment variable interpolation does not work as expected

Given the following pipeline:

env:
  ECR_REPO: "my_ecr_repo.dkr.ecr.us-east-1.amazonaws.com/repo"

steps:
  - name: ":docker: build web"
    agents:
      queue: builders
    plugins:
      docker-compose-buildkite-plugin#v1.1:
        build: web
        config:
          - docker-compose.yml
          - docker-compose.ci.yml
        image-repository: "${ECR_REPO}"

When this runs, the BUILDKITE_PLUGINS environment variable does not include an image-repository key for the plugin.

Instead, I'm forced to add the literal to the config, like so:

steps:
  - name: ":docker: build web"
    agents:
      queue: builders
    plugins:
      docker-compose-buildkite-plugin#v1.1:
        build: web
        config:
          - docker-compose.yml
          - docker-compose.ci.yml
        image-repository: "my_ecr_repo.dkr.ecr.us-east-1.amazonaws.com/repo"

This is non-ideal because ECR_REPO is referenced in multiple subsequent steps throughout the pipeline. Is there a way to avoid the repetition with environment variables, or am I misunderstanding how the environment variable interpolation works?

A slightly different issue occurs when attempting to use the commit as the image-name. Given the following configuration:

steps:
  - name: ":docker: build web"
    agents:
      queue: builders
    plugins:
      docker-compose-buildkite-plugin#v1.1:
        build: web
        config:
          - docker-compose.yml
          - docker-compose.ci.yml
        image-repository: "my_ecr_repo.dkr.ecr.us-east-1.amazonaws.com/repo"
        image-name: "${BUILDKITE_COMMIT}"

The image that is pushed has a tag of HEAD. It appears that at the time of pushing, HEAD has not been resolved to an actual hash. However, under the environment tab, BUILDKITE_COMMIT does show up as a hash. If I trigger a build with a specific commit, the issue does not occur.

Infinite retries when push/pulls fail

It appears that if you don't configure any push/pull retry config value, and a push or pull fails, it just retries infinitely:

~~~ :docker: Pushing image myimagerepo:latest
$ docker tag myapp myimagerepo:latest
$ docker push myimagerepo:latest
The push refers to a repository [myimagerepo]
06aae5e797d6: Preparing
c915c11d0b10: Preparing
a5ea162f0dad: Preparing
6b34e9cb00a8: Preparing
e81b64dfd998: Preparing
ab90d83fa34a: Preparing
8ee318e54723: Preparing
e6695624484e: Waiting
da59b99bbd3b: Waiting
5616a6292c16: Waiting
f3ed6cb59ab0: Waiting
654f45ecb7e3: Waiting
2c40c66f7667: Waiting
no basic auth credentials
Exited with 1
Retrying -1 more times...
$ docker push myimagerepo:latest
The push refers to a repository [myimagerepo]
06aae5e797d6: Preparing
c915c11d0b10: Preparing
a5ea162f0dad: Preparing
6b34e9cb00a8: Preparing
e81b64dfd998: Preparing
ab90d83fa34a: Waiting
8ee318e54723: Waiting
e6695624484e: Waiting
da59b99bbd3b: Waiting
5616a6292c16: Waiting
f3ed6cb59ab0: Waiting
654f45ecb7e3: Waiting
2c40c66f7667: Waiting
no basic auth credentials
Exited with 1
Retrying -2 more times...
$ docker push myimagerepo:latest
The push refers to a repository [myimagerepo]
06aae5e797d6: Preparing
...

I don't think it handles the default value of "0" correctly in the retry logic.

Clashes when pre-building different docker-compose configs with the same service name

If you have the same service name in two different docker-compose config files, and try to pre-build them for later steps, only the last completed step will be used.

For example, given the following pipeline both of the test commands will run against the prod image, instead of running against their respective images:

steps:
  - name: ":docker:"
    plugins:
      docker-compose#v1.8.4:
        build: app
        image-repository: org/app-dev

  - label: ":docker::zap:"
    plugins:
      docker-compose#v1.8.4:
        config: docker-compose.prod.yml
        build: app
        image-repository: org/app-prod

  - wait

  - label: ":hammer:"
    command: make test
    plugins:
      docker-compose#v1.8.4:
        run: app

  - label: ":hammer::zap:"
    command: make test
    plugins:
      docker-compose#v1.8.4:
        config: docker-compose.prod.yml
        run: app

Perhaps the meta-data key we use to store the image name should be scoped to the docker-compose config file as well, and not just the service name?

Pre-building is suddenly inconsistently working

Hey,

I'm running the Elastic stack at the moment and I haven't re-provisioned it in months so literally nothing has changed in my configuration that I'm aware of.

However, starting this week, more than half of the build steps don't use the pre-built image and instead start building their own, which shows up as increased build time (8m -> 30m in my case, due to having more steps than agents in the project):

Any idea how I can debug this change? The only thing I can think is that my elastic stack is using different versions of the plugin when agent boxes spin down and come back up the next day. The actual base EC2 images shouldn't be changing though as I'm using the same Elastic stack I have been for months.

Is there any info I can give to try to get to the root cause of this?

v1.2 breaks steps that use prebuilt images

It seems this PR introduced a bug that breaks the plugin's hook.

Here's a failing build for buildkite staff if you need to inspect: https://buildkite.com/geckoboard/gecko-chef/builds/292#dfc2a011-7800-4b0a-8e72-106f660f426e

output from failing step
~~~ Setting up plugins
# Plugin "github.com/buildkite-plugins/docker-compose-buildkite-plugin" found
~~~ Running global environment hook
# Executing "/etc/buildkite-agent/hooks/environment"
~~~ Setting up the environment
Sourcing CloudFormation environment...
Starting an SSH Agent...
Agent pid 20257
~~~ Downloading secrets from geckoboard-ci-buildkite-secrets
Downloading ssh-key private_ssh_key
Identity added: /dev/fd/63 (/dev/fd/63)
~~~ Downloading env files
Waiting for Docker...
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
Fixing permissions for 'buildkite-ci-agents-i-0fe5764ad0f34f302-3'...
~~~ Applying environment changes
# AWS_ECR_LOGIN changed
# AWS_DEFAULT_REGION changed
# SSH_AGENT_PID changed
# SSH_AUTH_SOCK changed
# AWS_REGION changed
~~~ Preparing build directory
# Changing working directory to "/var/lib/buildkite-agent/builds/buildkite-ci-agents-i-0fe5764ad0f34f302-3/geckoboard/gecko-chef"
# Host "github.com" already in list of known hosts at "/var/lib/buildkite-agent/.ssh/known_hosts"
$ git remote set-url origin git@github.com:geckoboard/gecko-chef.git
$ git clean -fxdq
$ git submodule foreach --recursive git clean -fxdq
...
# Fetch and checkout pull request head
$ git fetch -v origin refs/pull/771/head
From github.com:geckoboard/gecko-chef
 * branch            refs/pull/771/head -> FETCH_HEAD
$ git checkout -f 71b7f563c30a3025977b7a5913bdf4af195f4542
HEAD is now at 71b7f56... Add a PostgreSQL data bag for staging
$ git submodule sync --recursive
....
$ git submodule update --init --recursive --force
....
$ git submodule foreach --recursive git reset --hard
....
# Checking to see if Git data needs to be sent to Buildkite
$ buildkite-agent meta-data exists buildkite:git:commit
~~~ Running global pre-command hook
# Executing "/etc/buildkite-agent/hooks/pre-command"
~~~ Authenticating with AWS ECR
Flag --email has been deprecated, will be removed in 1.14.
Login Succeeded
~~~ Running plugin github.com/buildkite-plugins/docker-compose-buildkite-plugin command hook
# Executing "/var/lib/buildkite-agent/plugins/github-com-buildkite-plugins-docker-compose-buildkite-plugin/hooks/command"
$ buildkite-agent meta-data get docker-compose-plugin-built-image-tag-0
$ buildkite-agent meta-data get docker-compose-plugin-built-image-tag-1
2017-04-26 14:25:32 WARN   POST https://agent.buildkite.com/v3/jobs/dfc2a011-7800-4b0a-8e72-106f660f426e/data/get: 404 No key "docker-compose-plugin-built-image-tag-1" found (Attempt 1/10 Retrying in 5s)
2017-04-26 14:25:32 FATAL  Failed to get meta-data: POST https://agent.buildkite.com/v3/jobs/dfc2a011-7800-4b0a-8e72-106f660f426e/data/get: 404 No key "docker-compose-plugin-built-image-tag-1" found
~~~ :docker: Creating a modified docker-compose config for pre-built images
/var/lib/buildkite-agent/plugins/github-com-buildkite-plugins-docker-compose-buildkite-plugin/hooks/../lib/shared.bash: line 123: $2: unbound variable
version: '2'
services:
  681925248577.dkr.ecr.us-east-1.amazonaws.com/buildkite-job-environments:gecko-chef-ci-build-292:
~~~ :docker: Cleaning up after docker-compose
$ docker-compose -f docker-compose.yml -p buildkitedfc2a01178004b0a8e72106f660f426e kill
$ docker-compose -f docker-compose.yml -p buildkitedfc2a01178004b0a8e72106f660f426e rm --force -v
No stopped containers
$ docker-compose -f docker-compose.yml -p buildkitedfc2a01178004b0a8e72106f660f426e down --volumes
Removing network buildkitedfc2a01178004b0a8e72106f660f426e_default
WARNING: Network buildkitedfc2a01178004b0a8e72106f660f426e_default not found.
^^^ +++
~~~ Running global post-command hook
# Executing "/etc/buildkite-agent/hooks/post-command"
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 20257 killed;
~~~ Uploading artifacts
$ buildkite-agent artifact upload .kitchen/**/*
2017-04-26 14:25:33 INFO   No files matched paths: .kitchen/**/*

Unable to pass Kubernetes volumeMount into container

I have a JSON file that's stored in a Kubernetes secret that's shared with my the Buildagent with the following config.

- mountPath: /root/docker.json
  name: docker-login
  subPath: docker.json

The file is a JSON key for a Kubernetes service account, so obviously not wanting to bake it into any of my images.

I'm using the file within a script to build/push an image using docker push as follows and this works fine.

docker login -u _json_key -p "$(cat /root/docker.json)" https://eu.gcr.io

But I have a docker compose step using this plugin where I load up a pre-built image that contains gcloud and kubectl so I can update the deployment to the latest build version. I need this key file to be able to log in, but I can't seem to get the volume to mount with my docker compose file. I'm using the following:

volumes:
      - /root/docker.json:/base/kube/docker.json

But it always throws and error about not being able to create the folder, so seems the docker-compose plugin is not able to see/access the mounted file.

Error response from daemon: error while creating mount source path '/root/docker.json': mkdir /root/docker.json: read-only file system

Not sure how to resolve this?

Build step is confusing

Having a build step without an image-repository option doesn't make a lot of sense. Based on discussion with @toolmantim it sounds like we should:

  • Deprecate build
  • Add a prebuild step that is what build was, but with image-repository mandatory
  • Clarify in the docs how it should be used

Environment variables defined by buildkite should be automatically injected into containers

I'm new to docker and buildkite's docker compose plugin so I was a bit confused as to why the environment variables that were showing up in the "Environment" tabs in my builds weren't available within the containers. Eventually I realised I need to explicitly whitelist them in the docker-compose.yml config, e.g.

services:
  my_container:
    environment:
      - BUILDKITE_BUILD_NUMBER
      - BUILDKITE_PROJECT_SLUG
      - ANOTHER_VAR_I_DEFINED_IN_PIPELINE.YML

It'd be nice if these variables were injected by default, so that the experience lines up with what the UI is suggesting will happen.

Proposal: script directive for executing a script rather than calling run/build

I find myself often writing scripts for calling docker-compose that pull the image from a previous build step. I'd love a way to automate some of the boilerplate of those. This is an example:

#!/bin/bash
set -eu

cleanup() {
  docker-compose kill || true
  docker-compose rm --force -v || true
  docker-compose down -v || true
}

trap cleanup EXIT

image_name=$(buildkite-agent meta-data get docker-compose-plugin-built-image-tag-blah)

export COMPOSE_PROJECT_NAME="buildkite$$" 
docker pull "$image_name"

... do docker-compose stuff

What if this could be:

steps:
  - command: test.sh
    plugins:
      docker-compose:
        script: my_custom_compose_script.sh

and the script could just be the docker-compose bits? Possibly even with a docker-compose-with-config function that passes in the config files from the plugin's config.

Would save much pain, and allows extending on the plugin functionality without losing all of the nice bits.

Older docker-compose versions don't support `docker-compose config`

I ran into this issue with 1.2.0.

Running plugin github.com/buildkite-plugins/docker-compose-buildkite-plugin#v1.7.0 command hook
# A hook runner was written to "/tmp/buildkite-agent-bootstrap-hook-runner-550291113" with the following:
/var/buildkite-agent/plugins/github-com-buildkite-plugins-docker-compose-buildkite-plugin-v1-7-0/hooks/command
# Executing "/tmp/buildkite-agent-bootstrap-hook-runner-550291113"
$ /bin/bash -c /tmp/buildkite-agent-bootstrap-hook-runner-550291113
$ docker-compose -f docker-compose.yml -p buildkitedca524ae891d484094422bf9f196331f config
No such command: config

Commands:
  build     Build or rebuild services
  help      Get help on a command
  kill      Kill containers
  logs      View output from containers
  port      Print the public port for a port binding
  ps        List containers
  pull      Pulls service images
  rm        Remove stopped containers
  run       Run a one-off command
  scale     Set number of containers for a service
  start     Start services
  stop      Stop services
  restart   Restart services
  up        Create and start containers

$ buildkite-agent meta-data get docker-compose-plugin-built-image-tag-statusbot

Clean up tags in the teardown?

Should this plugin clear out its docker tags once done? ECR has a 1000-image limit and this will result in hitting it quickly.

docker-compose version parsing is very naïve

The code that parses the docker-compose file version:

# Returns the version of the first docker compose config file
function docker_compose_config_version() {
  sed -n "s/version: ['\"]\(.*\)['\"]/\1/p" < "$(docker_compose_config_file)"
}

will break if, for instance, the user adds an extra space before the version string. It'll also fail if the version is not specified as a string, and has no protections against matching a string specified in any key ending in version.

It also lacks any sort of fallback or error message if it fails to find a version number.

Support parallel build steps in pipeline

We would like to build multiple services through docker-compose across machines to parallelize build time.

Given a pipeline like the following we would expect service 1 and service 2 to be built in parallel. I believe today we can build them, but for some reason further steps will be unable to download the first service and they will be built locally.

- name: "SERVICE 1"
  plugins:
    docker-compose:
      build: service1
      image-repository: <repo>
      config: service1/docker-compose.yml
- name: "SERVICE 2"
  plugins:
    docker-compose:
      build: service2
      image-repository: <repo>
      config: service2/docker-compose.yml
- wait
- name: "1st service script"
  command: "ci/script"
  plugins:
    docker-compose:
      run: service1
      config: webapp/docker-compose.yml
- name: "2nd service script"
  command: "ci/script"
  plugins:
    docker-compose:
      run: service2
      config: service2/docker-compose.yml
