
crane's Introduction

Crane

Lift containers with ease - michaelsauter.github.io/crane

Overview

Crane is a Docker orchestration tool similar to Docker Compose with extra features and (arguably) smarter behaviour. It works by reading in some configuration (JSON or YAML) which describes how to run containers. Crane is ideally suited for development environments or continuous integration.

Features

  • Extensive support of Docker run flags
  • Simple configuration with 1:1 mapping to Docker run flags
  • docker-compose compatible
  • Ultra-fast bind-mounts via Unison on Mac
  • Shortcut commands
  • Flexible ways to target containers (through groups and CLI flags to exclude/limit)
  • Smart detach / attach behaviour
  • Verbose output which shows exact Docker commands
  • Hooks
  • ... and much more!

Documentation & Usage

Please see michaelsauter.github.io/crane/docs.html.

Installation

The latest release is 3.6.1 and requires Docker >= 1.13. Please have a look at the changelog when upgrading.

bash -c "`curl -sL https://raw.githubusercontent.com/michaelsauter/crane/v3.6.1/download.sh`" && \
mv crane /usr/local/bin/crane

Copyright © 2013-2020 Michael Sauter. See the LICENSE file for details.



crane's People

Contributors

adrianhurt, bcicen, bivas, bjaglin, c-nv-s, cvrebert, dreamcat4, eggtree, eitoball, gissehel, inthroxify, jesper, jsierles, lefeverd, mgrachev, michaelsauter, michaloo, mishak87, pdalpra, scottmlikens, t-suwa, teabough, tmc


crane's Issues

Support for docker 0.12+

Since moby/moby#6130, first released in docker 0.12, docker inspect --format={{.ID}} <container> doesn't work anymore, since the key is now CamelCased as Id. That breaks container.Id() and container.status() internally, so all commands that check a container's running state are broken. I guess we need some kind of check on the server API version and use:

  • ID for <0.12
  • Id for >=0.12

I haven't found any other breaking change, but if I do, I'll compile them here.
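The check could be as simple as picking the format template from the server version. A simplified sketch, not Crane's actual code (the function name and signature are illustrative):

```go
package main

import "fmt"

// idTemplate returns the inspect format template for the container ID,
// accounting for the key rename in Docker 0.12 (moby/moby#6130).
// The version comparison is simplified to major/minor ints.
func idTemplate(major, minor int) string {
	if major == 0 && minor < 12 {
		return "{{.ID}}"
	}
	return "{{.Id}}"
}

func main() {
	fmt.Println(idTemplate(0, 11)) // {{.ID}}
	fmt.Println(idTemplate(0, 12)) // {{.Id}}
}
```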

provision and --force

Let's have a separate place to discuss the provision command and the --force flag (see #38).

I would say that provision itself shouldn't need a --force flag, but when you call lift, I wouldn't necessarily expect that the images are rebuilt.

Escaping `$` in configuration values

Since all values in the run configuration are expanded via os.ExpandEnv(), I haven't found a way to escape $ so that it's not expanded.

A typical use case for having non-expanded $ is the usage within the CMD passed to a container of the environment variables such as $API_PORT_8080_TCP_ADDR, injected by docker --link, which must be evaluated on the container and not the host.

Looking at http://golang.org/src/pkg/os/env.go?s=379:436#L3, I don't see any proper way to do it. My current workaround is to define an environment variable D=$, and use ${D}...

Any other idea? Could the YAML parsing step somehow help out, to differentiate between single and double quotes in values, and therefore applying the expansion only to double-quoted values?

lift does not seem to exit

I am running:

crane lift --recreate -v -t files

and the command does not seem to exit. I need to force termination and when I do so the container is still running.

Is this expected behaviour?

Change --kill / --force

Somehow, --kill and --force don't feel right to me at the moment. One of the reasons is that --kill without a --force does not make sense.
That's why (together with stop being safer) I wanted to remove the kill option in bbc15bf. However, seems like the option is pretty popular.

Maybe we can find a better solution here? Any suggestions welcome.

use a human config format

JSON is for data interchange. It doesn't support simple necessities such as comments; it was not designed to be a user configuration format.

Possible options are YAML or TOML.
I haven't used TOML, but with YAML I immediately convert it to JSON in my programs.

set PATH env variable

with docker I can do

-e PATH=dir:$PATH

When putting the same as a crane env, $PATH does not get expanded in the container environment.

consistent order when explicit order is not provided

Since the introduction of the optional order attribute in the config, crane commands against a set of containers whose config does not provide an explicit order (i.e. when the automatic ordering kicks in) are not deterministic, due to Go's behavior of randomly shuffling entries when iterating over a map in ContainerMap::extractDependencies().

This is not a semantic problem as such, since dependencies should all be recognized, but it's very confusing to see the order jumping between each crane status for example, and makes lookup difficult. Providing an explicit order is of course a solution, but with a growing number of containers, it results in a lot of duplication (unless we allow order to use groups as in #52 maybe?).

To get a consistent order across runs, we could either use a simple (alphabetical) ordering in ContainerMap::extractDependencies(), or include the (sorted) list of names from the config and iterate over that collection (since we don't rely on the fact that containers are mapped by name anyway).

improvements to container.go Volume() method

In short I'd like to be able to specify one of the following in my crane.yaml:

volume:
- ~/.ssh:/root/.ssh_host:ro
volume:
- $HOME/.ssh:/root/.ssh_host:ro

Right now Volume() doesn't quite support what I'm looking for, which is to use it to get my ssh keys over to a container: e.g. docker run .... -v ~/.ssh:/root/.ssh_host:ro ....

I understand that the tilde is a shell-thing but it would be nice if it was possible to handle that case specifically e.g. http://stackoverflow.com/a/17617721

Failing that, I see from the code that os.ExpandEnv is used, but it seems not to be called in quite the right place for me: !path.IsAbs(paths[0]) will return true on $HOME/.ssh, but then the current working directory is prepended to the path (leaving something like /foo/bar/$HOME/.ssh) before os.ExpandEnv is called (resulting, I think, in something like /foo/bar//home/username/.ssh).

The current workaround is to use the full path:
volume:
- /home/username/.ssh:/root/.ssh_host:ro

Unknown parameters are silently ignored

crane run foobar produces the same output as crane run. I am thinking it should either:

  • print a warning or an error
  • or be a shortcut for crane run -g foobar, and print a warning or an error if such a group or container doesn't exist

Opt-in feature: block until exposed ports are bound when starting containers

Proposal

Allow new container flags hooks.run-start.pre & hooks.run-start.post in the config, which are (potentially blocking) shell commands executed in the foreground before and after the docker run or docker start command for that container. This allows for custom waiting/presence/sidekicking code. The return code of these commands is checked, so they can interrupt the flow.

Use-case

For example, if I want foo to be started only if/after the port 1234 of bar is bound:

containers:
  foo:
    image: foo
    run:
      tty: true
      interactive: true
      link:
        - bar:bar
    hooks:
      run-start:
        pre: "docker run -ti --rm --link bar:bar busybox sh -c 'for i in `seq 1 10`; do sleep 1 && nc bar 1234 && exit 0 || echo retrying...; done; exit 1'"

  bar:
    image: bar
    run:
      expose:
        - 1234

To accomplish this currently, I have to define an extra container between the producer/server bar and the consumer/client foo, and declare a dummy run.volumes-from from the consumer/client to that middle container to explicitly express the dependency.

containers:
  foo:
    image: foo
    run:
      tty: true
      interactive: true
      link:
        - bar:bar
      volumes-from:
        - wait-for-bar

  wait-for-bar:
    image: busybox
    run:
      tty: true
      interactive: true
      cmd: ["sh", "-c", "for i in `seq 1 10`; do sleep 1 && nc bar 1234 && exit 0 || echo retrying...; done; exit 1"]
      link:
        - bar:bar

  bar:
    image: bar
    run:
      expose:
        - 1234

I think this is a common use case for single-host orchestration, and this is generic enough to cover many use cases.

error gives docker help output

Below, the important message is at the top, followed by non-useful help output, and then the error status shows up in red in my terminal. Removing the help output would let me immediately see the error message.

sudo ~/bin/crane run
Running container e2e ... invalid value "mongodb" for flag --link: Invalid format to parse.  mongodb should match template name:alias

Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Run a command in a new container

  -P, --publish-all=false: Publish all exposed ports to the host interfaces
  -a, --attach=[]: Attach to stdin, stdout or stderr.
  -c, --cpu-shares=0: CPU shares (relative weight)
  --cidfile="": Write the container ID to the file
  -d, --detach=false: Detached mode: Run container in the background, print new container id
  --dns=[]: Set custom dns servers
  -e, --env=[]: Set environment variables
  --entrypoint="": Overwrite the default entrypoint of the image
  --expose=[]: Expose a port from the container without publishing it to your host
  -h, --hostname="": Container host name
  -i, --interactive=false: Keep stdin open even if not attached
  --link=[]: Add link to another container (name:alias)
  --lxc-conf=[]: Add custom lxc options -lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
  -m, --memory="": Memory limit (format: <number><optional unit>, where unit = b, k, m or g)
  -n, --networking=true: Enable networking for this container
  --name="": Assign a name to the container
  -p, --publish=[]: Publish a container's port to the host (format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort) (use 'docker port' to see the actual mapping)
  --privileged=false: Give extended privileges to this container
  --rm=false: Automatically remove the container when it exits (incompatible with -d)
  --sig-proxy=true: Proxify all received signal to the process (even in non-tty mode)
  -t, --tty=false: Allocate a pseudo-tty
  -u, --user="": Username or UID
  -v, --volume=[]: Bind mount a volume (e.g. from the host: -v /host:/container, from docker: -v /container)
  --volumes-from=[]: Mount volumes from the specified container(s)
  -w, --workdir="": Working directory inside the container
ERROR: exit status 2

Deal with "Cannot destroy container ... Device is Busy" error

Currently docker sometimes throws an error while removing a container. Usually, such an error goes away on retry. I think crane should be able to retry the removal (at least once). Otherwise, upgrading a container (crane run CONTAINER -r) can sometimes be a pain.

use a proper extension rather than a Cranefile

The Makefile naming is a bit silly. My understanding of the origin is that 4 letter extensions were not allowed and .mak was already taken at the time.

Cranefile sort of works because it is designed to have only one file per project and it is a new format. In this case, we are using an existing format (JSON), so we should use that extension.

So how about crane.json instead of Cranefile? That helps people and editors know what is going on. It will also make it easier to add support for YAML by having a crane.yaml instead of crane.json

Append to the entrypoint command during run

Hello,
This might be "undocumented" rather than an "issue", I'm a new user...

Here's what I would expect (basic untested example):

# crane.yml
containers:
    hello:
        dockerfile: jessie
        image: debian:jessie
        run:
            entrypoint: ["echo"]

Then

crane run hello world

Expected result:
fetching debian etc etc...
world

Actual result:
ERROR: No group or container matching 'world'

Is there a way to achieve what I'm trying to do?

I've been using fig in this way for npm, bower etc and it's very effective.

Reorganise commands / flags

Based on #38 and #39, here's my proposal what should change:

  • provision: builds or pulls the image
  • lift: consists of 2 steps: 1) provisionOrSkip which will call provision if image does not exist and 2) runOrStart which will either run or start a container
    • --update or --recreate: Will kill running containers, provision images and run the containers. This is the equivalent of executing rm --kill, provision and run
  • rm: removes the containers
    • --force: execute docker rm --force
    • --kill: Kill the containers first, then execute docker rm

Does that sound sensible?
Again, my goals are to a) be safe by default and b) resemble normal Docker commands as closely as possible

Support for referencing groups within group definition in crane.yaml

Rather than writing this:

groups:
  fullstack:
    - memcached
    - redis
    - mysql
    - cassandra
    - kestrel

  secondary:
    - memcached
    - redis
  primary:
    - mysql
    - cassandra
    - kestrel

... i'd like to be able to write:

groups:
  fullstack:
    - secondary
    - primary

  secondary:
    - memcached
    - redis
  primary:
    - mysql
    - cassandra
    - kestrel

This is just an example, we now have 16 containers, so this gets quite messy and error-prone when adding new ones.

Implementation-wise, I'd see an extra fallback compared to the current behavior: if one of the values in a group definition is not a container, try looking up a group of that name. That should work no matter in which order the groups are defined, and barf nicely on a cycle.

As usual, just opening that for discussion - I should be able to find some time sooner or later to implement it.

ordering of container dependencies

It seems that there is no dependency solver and that the deepest container dependencies should be placed at the bottom of the file.

Is this correct? Should I add it to the README?

BTW, both gaudi & fig failed to run my container cluster (I didn't try decking yet though), but crane can, so thanks!

Improvements to dependency handling

#47 introduced dependency resolution to Crane. It does need some adjustments however, for example as mentioned in #50.

Here are my proposed changes:

  • volumes-from needs to be taken into account as well as a dependency
  • In accordance with docker stop/kill/rm, these commands do not need to resolve the dependencies. Crane should try to resolve them in reverse order; if not all dependencies can be resolved, the unresolved ones should just be appended. This also removes the need to call Reversed() on the containers: when the order is set, it will already be correct for that command.
  • lift/run/start should be possible even if the target does not include all dependencies, as long as those dependencies exist. Crane would handle that by trying to resolve the dependencies and, failing that, skipping unresolved ones that already exist

cmd string does not allow spaces

putting /bin/bash or /bin/ls in the cmd: field works, but /bin/echo hello gives:

Running container container-name ... 2014/03/12 18:03:22 Unable to locate /bin/echo hello

Handle dependencies for commands triggering containers to stop/restart

Assuming we have a config where container a depends on container b, and b depends on c (declared via links), one could expect the following semantics:

  • crane kill -t c
    • a.kill()
    • b.kill()
    • c.kill()
  • crane stop -t c
    • a.stop()
    • b.stop()
    • c.stop()
  • crane lift -r -t c
    • c.provisionOrSkip()
    • c.kill(); c.runOrStart()
    • if (b.exists())
      • b.rm(kill=true); b.runOrStart()
      • if (a.exists())
        • a.rm(kill=true); a.runOrStart()
  • crane run -r -t c
    • c.kill(); c.runOrStart()
    • if (b.exists())
      • b.rm(kill=true); b.runOrStart()
      • if (a.exists())
        • a.rm(kill=true); a.runOrStart()

In plain words:

  • any command that triggers the stop of a container would also stop all the containers that depend on it
  • any command that triggers the restart of a container would restart all the existing containers that depend on it. One could argue that only running containers should be restarted, but existing, non-running ones would still be a ticking bomb because the --link target is gone. Ideally, we would recreate these containers without starting them, but I think that's possible only with the API (POST /containers/create), so that would be for later... Instead of restarting them, the existing, non-running ones could also simply be removed, but I must say that in my particular workflow, where some containers are one-time tools massaging other containers after their restart, I'd prefer running them automatically.

This behavior would probably be controlled by a flag, the question is whether it should be an opt-in (-c/--cascade-dependencies) or opt-out (-i/--ignore-dependencies).

I'd be willing to start looking at that if we agree it makes sense.

Share config between containers...

Would like your input on this:

I have a django app deployed with crane. So, one container runs the django app and another the webserver (nginx), etc. The django app has various environment variables specified in the crane manifest file. Sometimes, hopefully in development rather than production, someone has a need to execute various tasks associated with the app, like manage.py shell or manage.py schemamigration to create a new migration, etc. These commands can be run in a separate container, but should have the same environment as the appserver itself. Three possible approaches are:

  1. The "do nothing" approach would have a separate crane manifest file with most everything duplicated (especially the environment) and just specify whatever entrypoint or command you want. I don't like this because when something needed to be changed, the user must remember to change in both manifests.
  2. Another approach that might almost "just work" would be to define in the same manifest file both the app container and another container for the command (maybe just a bash shell where the user can then manually run whatever manage.py commands they want). But, yaml has a way to share a node (search for 'repeated nodes' in http://jessenoller.com/blog/2009/04/13/yaml-aint-markup-language-completely-different, for example). So, could use that and I expect the go yaml parser would handle it. Then, the only thing to add to crane (I think) would be a feature to run or not run a specific container from within the manifest. That is, normally you'd want to bring up the appserver and the webserver, but not the shell. But, once in a while you'd want to just run the shell.
  3. Another approach would be to allow templating within the manifest. In this way you might allow environment variables to alter the resulting manifest for crane, changing the command or entrypoint, for example, and skipping certain containers. This is probably more powerful in general, but also perhaps ugly and more difficult for the use case I describe.

I'm somewhat inclined toward 2, even though I wouldn't be surprised to want templating at some point anyway :)

Thoughts/suggestions?

Oh, another use case that's occurred to me, but I don't yet need, is that I expect to want to use crane to build images and push them to a docker registry. I can easily imagine wanting to choose between multiple registries without having to edit the manifest file (i.e. to change the tag given to the image). One registry might be local to my computer, another might be for the team. Potentially this could be done using an environment variable in the value of the image name. Adding a push command to crane would certainly be pretty simple. Anyway, I think this use case is handled pretty easily without affecting the choices above.

Status code should not always be 0

In the case of, for example, a missing configuration file, a yaml parsing error, or a failure to start a container, the crane command returns 0, making it impossible to use it within a script.

Errors internal to crane should be mapped to proper status codes, and errors when running the docker client should propagate.

Binary compiled with wrong version

https://github.com/michaelsauter/crane/releases/download/v0.7.0/crane_linux version returns v0.6.1 (even though support for groups is present).

Which brings the question: would it be possible for you to commit the build scripts (ideally within a container to load the toolchain easily) so that people unfamiliar with go can build their own binaries?

--config flag not working in 0.8.0

Hello,

I'm trying to run crane and specify a different config file, but I'm getting an error:

[root@maintenance-dev ~]# crane status --config=/var/tmp/crane.json
ERROR:
Error in line 1: invalid character '/' looking for beginning of value
/var/tmp/crane.json
^

Am I doing something wrong, or is this a bug?

Thanks!

YAML inherited attributes syntax doesn't work

Since I use a lot of environment variables/links between my containers, I would love to have attributes inherited (like this http://stackoverflow.com/questions/14184971/more-complex-inheritance-in-yaml)

I tried this YAML, but crane didn't evaluate it with inheritance in mind:

containers:
  web:
    image: phuongnd08/rails:web
    dockerfile: .
    run:
      cmd:
        - bundle
        - exec
        - thin
        - "-C"
        - "config/thin.yml"
      env: &env
        - RACK_ENV=production
        - RAILS_ENV=production
        - REDIS_URL=redis://redis:6379/
        - MEMCACHE_SERVERS=memcached
        - POSTGRESQL_HOST=db
      link: &link
        - redis:redis
        - db:db
        - memcached:memcached
      detach: false
  sidekiq:
    image: phuongnd08/rails:web
    dockerfile: .
    run:
      cmd:
        - bundle
        - exec
        - sidekiq
        - "-C"
        - "config/sidekiq.yml"
      env:
        <<: *env
      link:
        <<: *link
      detach: true
$ crane run -v -c crane/web-server.yml --recreate -t sidekiq
Command will be applied to: sidekiq

Killing container sidekiq ...
--> docker kill sidekiq
sidekiq
Removing container sidekiq ...
--> docker rm sidekiq
sidekiq
Running container sidekiq ...
--> docker run --interactive --tty --name sidekiq phuongnd08/giasu:web bundle exec sidekiq -C config/sidekiq.yml

(So no links or environment variables are being passed to the sidekiq container.)
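A likely root cause (my reading, not confirmed in the thread): the YAML merge key `<<:` is defined only for mappings, while the `&env` and `&link` anchors here are attached to sequences, so `<<: *env` cannot merge them. A plain alias reuses a sequence wholesale:

```yaml
# Sequences cannot be merged with "<<:"; reuse them with a plain alias.
sidekiq:
  image: phuongnd08/rails:web
  dockerfile: .
  run:
    env: *env    # reuses the whole sequence anchored at &env
    link: *link
    detach: true
```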

providing order breaks target functionality

For the current version from master, if I provide "order" I am not able to provide a target.

crane.yml:

containers:
    a:
        dockerfile: a
        image: a

    b:
        dockerfile: b
        image: b

    c:
        dockerfile: c
        image: c

order: ["a", "b", "c"]    

groups:
    default: ["a", "b"]
    g1: ["a"]
    g2: ["b", "c"]

result of status option for g1 group (same result for any other group):

crane status --target="g1"
NAME    IMAGE   ID      UP TO DATE      IP      PORTS   RUNNING
c       c       -       -               -       -       -
b       b       -       -               -       -       -
a       a       -       -               -       -       -

The expected result would be to show just one container, "a", and it works that way if I comment out "order".

Allow passing array to `env-file` parameter

Hey, thanks for crane. It works fine and it's very helpful.

Yet I have found one case when I cannot use all Docker features.
In the Docker reference there is information about run command flags related to environment variables:

All three flags, -e, --env and --env-file can be repeated

While the env parameter in crane is an array type, env-file is string only, so I can't pass multiple env files in a container definition.

I have only basic golang knowledge, but it looks like the /crane/container.go file needs small changes around the RunParameters.RawEnvFile variable to make it work on an array of strings.

Thanks again for all the work :)

crane run cannot execute a container, but docker run can

So I configured my custom rails docker:

containers:
  redis:
    image: redis:2.6
    run:
      volume:
        - /data/redis:/data
      detach: true
  memcached:
    image: sylvainlasnier/memcached
    run:
      detach: true
  db:
    image: postgres:9.3
    run:
      volume:
        - /data/pgsql/9.3/data:/var/lib/postgresql/data
      detach: true
  web:
    image: phuongnd08/rails:web
    dockerfile: .
    run:
      cmd: /bin/bash -c "bundle exec rails c"
      tty: true
      interactive: true
      env:
        - REDIS_URL=redis://redis:6379/
        - MEMCACHE_SERVERS=memcached
      publish:
        - "3000:3000"
      link:
        - redis:redis
        - db:db
        - memcached:memcached
      detach: false

Then execute it with:

$ crane run -v -c crane/web-server.yml --recreate
Command will be applied to: db, memcached, redis, web

Killing container db ...
--> docker kill db
db
Killing container memcached ...
--> docker kill memcached
memcached
Killing container redis ...
--> docker kill redis
redis
Removing container db ...
--> docker rm db
db
Removing container memcached ...
--> docker rm memcached
memcached
Removing container redis ...
--> docker rm redis
redis
Removing container web ...
--> docker rm web
web
Running container db ...
--> docker run --detach --volume /data/pgsql/9.3/data:/var/lib/postgresql/data --name db postgres:9.3
9f31cb731c5aabc8b2435546f6c7c2cfd9f62b90f683bb9de3216ba2f04e5ea0
Running container memcached ...
--> docker run --detach --name memcached sylvainlasnier/memcached
18189a05561796a1a08b3dbbef96080fd84533ef69c0c037c552317c4ec56760
Running container redis ...
--> docker run --detach --volume /data/redis:/data --name redis redis:2.6
f7fa200eb5136f263b568114450bd395a7dddddc3b68e9aa93328c506abae2ea
Running container web ...
--> docker run --env REDIS_URL=redis://redis:6379/ --env MEMCACHE_SERVERS=memcached --interactive --link redis:redis --link db:db --link memcached:memcached --publish 3000:3000 --tty --name web phuongnd08/rails:web /bin/bash -c "bundle exec rails c"
2014/09/02 12:34:18 Error response from daemon: Cannot start container 2c7e8ef548962dbc9b41bc7fcd5b2f87c0bdcf61fc6cfbb0d1cac1c33984b141: exec: "/bin/bash -c \"bundle exec rails c\"": stat /bin/bash -c "bundle exec rails c": no such file or directory

But if I take the [docker run] command produced by crane and run from the current shell, it ran successfully:

$ docker run --env REDIS_URL=redis://redis:6379/ --env MEMCACHE_SERVERS=memcached --interactive --link redis:redis --link db:db --link memcached:memcached --publish 3000:3000 --tty --name web phuongnd08/giasu:web /bin/bash -c "bundle exec rails c"

2014-09-02T16:34:56Z 1 TID-ork4l9dt8 INFO: Sidekiq client with redis options {:namespace=>"sidekiq", :url=>"redis://redis:6379/"}
Loading development environment (Rails 4.0.5)
[1] pry(main)>

Could it be something funny with the way crane executes commands?

Configurable stop/kill behavior on `crane run -r`, `crane lift -r`, `crane rm -f` & `crane stop`

Just a suggestion regarding the now-default kill behavior of run & lift on --recreate: could this be configurable per container, utilizing the crane config? After all, it's up to the config maintainer to decide whether a graceful stop is required (mounted volumes or other side effects outside the container, ongoing requests, etc) or a kill is enough.

To implement that, rm would have a --force instead of --kill (mapping docker semantics), and a new per-container option stop-grace-period (potentially set to 0) would control the default timeout passed to docker stop (used by all code paths potentially stopping a container).

lift [-r] would then become [docker stop -t stop-grace-period && docker rm &&] provision && run. If someone really wants to kill a container despite what the config says, crane kill is still there.

Crane cause docker to re-pull remote image every time provision being run

I'm using dockerfile/redis as one of my containers, and every couple of days when I run crane provision, I see a new layer of redis (around 80MB, which is a pain to download over my internet connection over and over) because crane invokes docker pull dockerfile/redis. This really makes provisioning a pain. I temporarily work around this by explicitly creating a Dockerfile which contains a single FROM line for each of the remote images.

Error when the built-in OS X sed is used

Error:

ERROR: Error when parsing Docker's version Docker version 1: strconv.ParseInt: parsing "Docker version 1": invalid syntax

I used homebrew's coreutils to fix this issue, dunno if something more is required but I thought I'd bring it to people's attention:

brew install coreutils
sudo mv /usr/bin/sed /usr/bin/sed.bak
sudo ln -s /usr/local/bin/gsed /usr/bin/sed
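Parsing the version inside Crane itself, rather than shelling out to sed, would sidestep BSD/GNU sed differences entirely. A hedged sketch (the regex and function name are mine, and Crane's actual parsing differs):

```go
package main

import (
	"fmt"
	"regexp"
)

// versionRe matches a "major.minor.patch" triple anywhere in the
// `docker --version` output, without relying on external tools.
var versionRe = regexp.MustCompile(`(\d+)\.(\d+)\.(\d+)`)

func dockerVersion(out string) string {
	return versionRe.FindString(out)
}

func main() {
	fmt.Println(dockerVersion("Docker version 1.3.0, build c78088f"))
	// 1.3.0
}
```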

TestExplicitlyTargeted sometimes fails

In most cases test TestExplicitlyTargeted from config_test.go works fine but in some cases (like one every 5th time I try) there is an error:

--- FAIL: TestExplicitlyTargeted (0.00 seconds)
config_test.go:330: Expected [a b c], got [b c a]

"crane" command does not do anything returning no error.

I have a vagrant machine that used crane 0.6.4 correctly. The problem was that when a container did not start, it was not so easy to recover:

vagrant@bbdev-environment:/vagrant (master u=)$ crane stop
Stopping container observation_db ... Error: Cannot stop container observation_db: no such process
2014/06/16 10:08:36 Error: failed to stop one or more containers
ERROR: exit status 1

vagrant@bbdev-environment:/vagrant (master u=)$ crane status
Name            Running ID                                  IP  Ports
observation     false   29b0a8446f8adc1f7728abfa300a731b828b7fd6652b20c7e22b2f895b7b298b    -
observation_db      true    cd9bf1f6074564ae5f074f46436c7c9f806c9f01a4c0294b33debd6dedc190ed    -
observation_fixture_db  true    c83700cbef7130a55db769de78a3453e419b586ba0369752a83cf7d7e6db1e60    -
observation_test_db true    b75b598ae92f50fe4a984cb049af574814c60bae44837b77341d56cbb54f3ad0    -
catalog_db      true    bcafa2329069911aa62e956bd3958f6d7216e99b2c01e88e0f9204dda21bcb21    -
serf            true    5246031b13ef1b48c6e1b5c133606e2dee8306164af969ae1a4e795bb36e0e7a    -

vagrant@bbdev-environment:/vagrant (master u=)$ crane kill observation_fixture_db
Killing container observation_db ... Error: Cannot kill container observation_db: no such process
2014/06/16 10:12:14 Error: failed to kill one or more containers
ERROR: exit status 1

To try to improve the situation I tried to install crane 0.8.0; however, now I do not get any message at all.

vagrant@bbdev-environment:/vagrant (master u=)$ crane run
vagrant@bbdev-environment:/vagrant (master u=)$ crane lift
vagrant@bbdev-environment:/vagrant (master u=)$ crane stop
vagrant@bbdev-environment:/vagrant (master u=)$ crane stop -v
vagrant@bbdev-environment:/vagrant (master u=)$

I am not sure how to recover other than destroying the Vagrant machine and recreating it. I have run into this situation a few times. Sometimes "-f" helps, sometimes it does not.

Config unmarshalling is broken

I hadn't actually built the main binary since b6963aa was introduced (the tests of my latest PR were added after a rebase), but I am afraid the lack of a concrete type for RawContainerMap breaks the config unmarshalling...

I had a quick attempt at fixing it today, but I couldn't do it nicely, i.e. while preserving the nice stubbing abilities I used in 74f5790. I will probably give it a second try one of these days, but I just wanted to give a heads-up since master is effectively broken. Adding a few tests for readConfig should be trivial and should come along with the fix.

Push containers to registries

We could extend Crane with a push command and the manifest with registries:

"registries": {
  "one": {},
  "two": {}
}

And then you could use it like this: crane push --registry one.
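
A sketch of what such a registries section might contain (the url field is hypothetical, just to illustrate the idea):

```json
"registries": {
  "one": { "url": "registry.example.com:5000" },
  "two": { "url": "hub.example.com" }
}
```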

Fix invalid results for tagged images in container.ImageExists() and improve performance

  • When a container's image contains a tag (mysql:5.5.37 for example), ImageExists() always returns false, as grep -wF mysql:5.5.37 fails to match the mysql<spaces>5.5.37 line prefix returned by docker images --no-trunc.
  • Checking whether the image of a declared container exists can be extremely slow (on the order of seconds) on a machine with many images in the engine graph, as it searches the list of all available images. That means an extra overhead of a few seconds for every single container when doing a lift. For projects orchestrating many containers, the provision step of lift takes two orders of magnitude longer (~100s) than the raw provision (~1s).

Using docker inspect container.Image() would solve both problems. We should also extend container.Image() to append :latest when no tag is provided, so that docker inspect does not end up inspecting a container of the same name (which cannot contain :) rather than an image.
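
The tag-appending part could look like the following sketch (normalizeImage is a hypothetical helper name; note it naively treats any colon as a tag separator and would need refinement for registry hosts with a port, e.g. host:5000/img):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeImage appends ":latest" when no tag is given, so that
// `docker inspect` resolves an image rather than a container that
// happens to have the same name (container names cannot contain ":").
func normalizeImage(image string) string {
	if strings.Contains(image, ":") {
		return image
	}
	return image + ":latest"
}

func main() {
	fmt.Println(normalizeImage("mysql"))        // mysql:latest
	fmt.Println(normalizeImage("mysql:5.5.37")) // mysql:5.5.37
}
```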

Feature: visualize dependency graph

Proposal: add a crane dot or crane graph command that outputs the dependency graph in dot format, visualizing all dependencies (links, net, volumes-from). All containers would be present, with targeted ones displayed in a different color. This would help in understanding dependencies and the effect of cascading flags.

Implementation-wise, it's not much more than a Go template, so it should be fairly concise.

-rm is deprecated, use --rm instead

When using the "rm" option with run, Docker says:

Warning: '-rm' is deprecated, it will be replaced by '--rm' soon. See usage.

So we should replace -rm with --rm.

Warn user when a command might corrupt dependent containers

When -a/--cascade-affected=none, commands such as lift -r, run -r, stop, and pause might affect containers that directly or indirectly depend on the targeted ones: side effects can be silent (stale data in the case of a volumes dependency) or verbose (crashes in the case of a linked/net dependency). In that case, it would be useful to evaluate the targeted containers as if --cascade-affected=all had been passed and, if the result does not match the explicitly targeted ones, show the missing containers in a warning, pointing the user to the flag as a way to safely run commands that alter the state of a container that others depend on.
