
docker-vault's Introduction

DEPRECATION NOTICE

Upcoming in Vault 1.14, we will stop publishing official Dockerhub images and publish only our Verified Publisher images. Users of Docker images should pull from hashicorp/vault instead of vault. Verified Publisher images can be found at https://hub.docker.com/r/hashicorp/vault.

About this Repo

This is the Git repo of the official Docker image for Vault. See the Docker Hub page for the full readme on how to use this Docker image and for information regarding contributing and issues.

The full readme is generated over in docker-library/docs, specifically in docker-library/docs/vault.

See a change merged here that doesn't show up on the Docker Hub yet? Check the "library/vault" manifest file in the docker-library/official-images repo, especially PRs with the "library/vault" label on that repo. For more information about the official images process, see the docker-library/official-images readme.

docker-vault's People

Contributors

bastiaanb, briankassouf, chrishoffman, claire-labry, colinhebert, divyapola5, ericfrederich, hashicorp-ci, hc-github-team-secure-vault-core, hghaf099, jasonodonnell, jefferai, jfreda, kubawi, ldilalla-hc, malnick, mdeggies, meirish, mjarmy, mladlow, mpalmi, ncabatoff, rculpepper, ryansch, samsalisbury, sarahethompson, slackpad, stevendpclark, tvoran, violethynes


docker-vault's Issues

Add gosu vault only when necessary

We are trying to run the Vault image as the vault user.

But it only seems to work when we enable the setuid bit on gosu for the vault user:

chmod +s /bin/gosu

But this also allows the vault user to execute commands as root, e.g. gosu root id.

Would it be possible to add a check to the docker-entrypoint.sh script (here) so that gosu vault is only prepended to the command when the current user isn't already the vault user?

Then we wouldn't have to make gosu setuid just to run Vault as the vault user.
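A guard along those lines could look like the following minimal sketch. The `maybe_gosu` helper is a hypothetical name for illustration, not the actual entrypoint code:

```shell
#!/bin/sh
# Sketch: only prepend "gosu vault" when the container is still running as
# root, so a container started with --user vault needs no setuid gosu.

maybe_gosu() {
    # $1 = current uid, remaining args = command to run; prints the
    # command line that would be exec'd.
    uid="$1"; shift
    if [ "$uid" = "0" ]; then
        echo "gosu vault $*"
    else
        echo "$*"
    fi
}

# As root (uid 0) the command gets the gosu prefix:
maybe_gosu 0 vault server     # prints: gosu vault vault server
# As any other user it runs unchanged:
maybe_gosu 100 vault server   # prints: vault server
```

In the real entrypoint the check would replace the `echo` with `set -- gosu vault "$@"` before the final `exec "$@"`.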

Empty response from server

Hello,
I am trying to use vault in docker but I am not able to authenticate via APIs with any users I create. I get an "empty response from server".

This is the sequence of commands I run

  1. docker run -p 8200:8200 --cap-add=IPC_LOCK -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' -e 'VAULT_DEV_LISTEN_ADDRESS=127.0.0.1:8200' vault
  2. docker exec -it DOCKER_VAULT_ID vault status -address=http://127.0.0.1:8200
  3. docker exec -it DOCKER_VAULT_ID vault auth -address=http://127.0.0.1:8200
  4. docker exec -it DOCKER_VAULT_ID vault auth-enable -address=http://127.0.0.1:8200 userpass
  5. docker exec -it DOCKER_VAULT_ID vault write -address=http://127.0.0.1:8200 auth/userpass/users/new_user password=new_password policies=admins

If I try to authenticate locally via the CLI for new_user it works, but with the REST APIs I get an empty response.

Please note that if I instead run Vault locally on my machine and repeat exactly the same steps above (skipping step 3), everything works.

Any suggestion?
Thanks!

issues with logging into container

When I try to log into the vault container, I get the following error.
I tried this command: docker exec -it vault bash

rpc error: code = 13 desc = invalid header field value "oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"bash\": executable file not found in $PATH"\n"
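For what it's worth, the official image is Alpine-based and does not ship bash; sh is available instead:

```shell
docker exec -it vault sh
```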

container does not run

I cannot run the container with the commands given in the documentation. Here is my output

➜  swarm-cluster git:(master) ✗ docker run -d -p 8200:8200 --name=dev-vault vault:0.6.0 -e 'VAULT_DEV_ROOT_TOKEN_ID=swarm-join'
e89eea1505ee2a56960a5bfdc4ef3267a38dc4929ef00115d299edcfbff6ae48
➜  swarm-cluster git:(master) ✗ docker logs e89eea1505ee2a56960a5bfdc4ef3267a38dc4929ef00115d299edcfbff6ae48
usage: vault [-version] [-help] <command> [args]

Common commands:
    delete           Delete operation on secrets in Vault
    path-help        Look up the help for a path
    read             Read data or secrets from Vault
    renew            Renew the lease of a secret
    revoke           Revoke a secret.
    server           Start a Vault server
    status           Outputs status of whether Vault is sealed and if HA mode is enabled
    unwrap           Unwrap a wrapped secret
    write            Write secrets or configuration into Vault

All other commands:
    audit-disable    Disable an audit backend
    audit-enable     Enable an audit backend
    audit-list       Lists enabled audit backends in Vault
    auth             Prints information about how to authenticate with Vault
    auth-disable     Disable an auth provider
    auth-enable      Enable a new auth provider
    capabilities     Fetch the capabilities of a token on a given path
    generate-root    Promotes a token to a root token
    init             Initialize a new Vault server
    key-status       Provides information about the active encryption key
    list             List data or secrets in Vault
    mount            Mount a logical backend
    mount-tune       Tune mount configuration parameters
    mounts           Lists mounted backends in Vault
    policies         List the policies on the server
    policy-delete    Delete a policy from the server
    policy-write     Write a policy to the server
    rekey            Rekeys Vault to generate new unseal keys
    remount          Remount a secret backend to a new path
    rotate           Rotates the backend encryption key used to persist data
    seal             Seals the vault server
    ssh              Initiate a SSH session
    step-down        Force the Vault node to give up active duty
    token-create     Create a new auth token
    token-lookup     Display information about the specified token
    token-renew      Renew an auth token if there is an associated lease
    token-revoke     Revoke one or more auth tokens
    unmount          Unmount a secret backend
    unseal           Unseals the vault server
    version          Prints the Vault version

I looked at the CMD and the docker-entrypoint.sh and I'm not sure how the vault server -dev command gets executed.
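The usage text appears because everything after the image name replaces the image's default command: in the command above, `-e 'VAULT_DEV_ROOT_TOKEN_ID=swarm-join'` was handed to vault as CLI arguments instead of to docker, so the default `server -dev` never ran. Moving the flag before the image name restores it (token value copied from the report):

```shell
docker run -d -p 8200:8200 --name=dev-vault \
    -e 'VAULT_DEV_ROOT_TOKEN_ID=swarm-join' vault:0.6.0
```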

Specifying listener in config file and -dev-listen-address leads to bind error

Using this docker-compose.yml file will reproduce it:

vault:
  image: vault
  environment:
    VAULT_LOCAL_CONFIG: '{"backend": {"inmem": {}}, "listener": { "tcp": { "address": "0.0.0.0:8200", "tls_disable": 1 } }, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}'
    VAULT_DEV_ROOT_TOKEN_ID: 'insecure-development-root-token'
    command: ['server', '-dev']

Running this gives the following output:

> docker-compose -f docker-compose/base.yml -f docker-compose/development.yml up vault
Recreating dockercompose_vault_1
Attaching to dockercompose_vault_1
vault_1         | Error initializing listener of type tcp: listen tcp 0.0.0.0:8200: bind: address already in use
dockercompose_vault_1 exited with code 1

output of uname -a:

>uname -a
Darwin MacBook-Pro.local 15.5.0 Darwin Kernel Version 15.5.0: Tue Apr 19 18:36:36 PDT 2016; root:xnu-3248.50.21~8/RELEASE_X86_64 x86_64

> docker-machine ssh default uname -a
Linux default 4.4.8-boot2docker #1 SMP Mon Apr 25 21:57:27 UTC 2016 x86_64 GNU/Linux

If I remove listener section from VAULT_LOCAL_CONFIG then I no longer get the error

vault:
  image: vault
  environment:
    VAULT_LOCAL_CONFIG: '{"backend": {"inmem": {}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}'
    VAULT_DEV_ROOT_TOKEN_ID: 'insecure-development-root-token'
    command: ['server', '-dev']
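Conversely, if the custom listener should be kept, the -dev flag has to go, since server -dev binds its own listener on the same port. A sketch (untested, assuming the same config as above; note VAULT_DEV_ROOT_TOKEN_ID only applies in dev mode):

```yaml
vault:
  image: vault
  environment:
    VAULT_LOCAL_CONFIG: '{"backend": {"inmem": {}}, "listener": { "tcp": { "address": "0.0.0.0:8200", "tls_disable": 1 } }, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}'
  command: ['server']
```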

Can no longer mount /vault/config

#13 no longer allows using bind volume mounts for /vault

When starting a container with /vault/config mounted the following error is given:

chown: /vault/config/vault.hcl: Operation not permitted
chown: /vault/config: Operation not permitted
chown: /vault/config: Operation not permitted

The docs do recommend using the environment variable VAULT_LOCAL_CONFIG, but that is not always the best solution for complex configurations.

Cannot use image directly for HA in docker

When running in docker, there is no way to anticipate the IP address of the container until the entrypoint is executed. When the redirect and cluster addresses are left unspecified, the discovery process fails and the wrong IP address is used.

The only solution I see is for each organization to build its own image based on this one that sets the IP address in the entrypoint. I would rather avoid doing that.

A proposed solution is to provide the name of the network adapter that is accessible to other nodes via an environment variable, such as VAULT_CLUSTER_INTERFACE and VAULT_REDIRECT_INTERFACE. This is how consul works, with its CONSUL_BIND_INTERFACE and CONSUL_CLIENT_INTERFACE environment variables.
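For illustration, resolving an interface name to its IPv4 address in the entrypoint could be as small as this sketch (`addr_for` and the `ip -o -4 addr show` output format are assumptions for the example; the variable name mirrors the proposal):

```shell
#!/bin/sh
# Sketch: turn an interface name (e.g. from a hypothetical
# VAULT_CLUSTER_INTERFACE variable) into its IPv4 address by parsing
# `ip -o -4 addr show` style output on stdin.

addr_for() {
    # $1 = interface name; field 2 is the device, field 4 is addr/prefix
    awk -v dev="$1" '$2 == dev { sub(/\/.*/, "", $4); print $4; exit }'
}

# Example with a captured line (inside the container you would pipe in
# the live output of `ip -o -4 addr show`):
echo '2: eth0    inet 172.17.0.5/16 brd 172.17.255.255 scope global eth0' \
    | addr_for eth0   # prints 172.17.0.5
```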

Confused on how to use in `server` mode

I was able to experiment with dev mode, but now I want a little persistence so I'm running the container in "server" mode, but I am unable to interact with the container. I added "disable_mlock": true to the local configuration, and after running the container it outputs:

Couldn't start vault with IPC_LOCK. Disabling IPC_LOCK, please use --privileged or --cap-add IPC_LOCK
==> Vault server configuration:

                     Cgo: disabled
               Log Level:
                   Mlock: supported: true, enabled: false
                 Storage: file
                 Version: Vault v0.9.1
             Version Sha: 87b6919dea55da61d7cd444b2442cabb8ede8ab1

==> Vault server started! Log data will stream in below:

It looks like it's running, but now I am not sure how I am supposed to interact with this server. When running in dev mode it prints the vault token, so I can use that to authenticate with the CLI. Running in server mode, however, it is unclear where this token is, how to generate it and have Vault use it, or even how to get started. I've read the Vault documentation, and I apologize for my newness, but it's unclear what the steps are to get this working.

"Working", as in being able to execute commands against Vault. Currently I can shell into the container (as I did when it was in dev mode), but executing vault commands fails:

/ # vault auth-enable userpass
Error: Post https://127.0.0.1:8200/v1/sys/auth/userpass: dial tcp 127.0.0.1:8200: getsockopt: connection refused
/ # export VAULT_ADDR=http://127.0.0.1:8200
/ # vault auth-enable userpass
Error: Post http://127.0.0.1:8200/v1/sys/auth/userpass: dial tcp 127.0.0.1:8200: getsockopt: connection refused
/ # export VAULT_ADDR=http://0.0.0.0:8200
/ # vault auth-enable userpass
Error: Post http://0.0.0.0:8200/v1/sys/auth/userpass: dial tcp 0.0.0.0:8200: getsockopt: connection refused

I'm assuming the "connection refused" error is because I haven't authenticated, but I am not sure how I'm supposed to do this without knowing the token.
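For anyone landing here: the "connection refused" errors above are a networking symptom (the CLI cannot reach a listener at that address at all), not an authentication failure. Separately, in server (non-dev) mode Vault starts uninitialized and sealed, and the root token only exists after initialization. Roughly, with the CLI of that era (addresses and key counts illustrative):

```shell
export VAULT_ADDR=http://127.0.0.1:8200
vault init              # prints the unseal keys and the initial root token
vault unseal <key>      # repeat with distinct keys until the threshold is met
vault auth <root-token> # authenticate the CLI with the printed token
```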

Starting Vault (Hashicorp vault) in docker container on AWS EC2

I have been trying to start HashiCorp Vault in a Docker container with the file storage backend. The Docker image was created, but I can't run it; I keep getting an error that the config file is not found. Any help is most appreciated.

docker run -p 8200:8200 --name vault -e 'VAULT_DEV_ROOT_TOKEN_ID=' -e 'VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:8200' vault server -config=file.hcl

Error loading configuration from file.hcl: stat file.hcl: no such file or directory

file.hcl :
backend "file" {
path = "sfiles"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_disable = 1
}
disable_mlock=true
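The error simply means file.hcl exists on the host but not inside the container; the config has to be mounted into the container (or passed via VAULT_LOCAL_CONFIG). A sketch with illustrative paths (the VAULT_DEV_* variables only apply to dev mode and can be dropped here):

```shell
docker run -p 8200:8200 --cap-add=IPC_LOCK \
    -v "$PWD/file.hcl:/vault/config/file.hcl" \
    vault server -config=/vault/config/file.hcl
```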

ARM support?

The current image doesn't build on ARM. Is there any plan to support it?

executable file not found in $PATH

I built the image from the Dockerfile and get the error below when running the command:
"docker run --cap-add=IPC_LOCK -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' -e 'VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:1234' vault"

Error:
/usr/local/bin/docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "exec: "docker-entrypoint.sh": executable file not found in $PATH": unknown

ECS TaskARN Role causes panic

Version: 0.8.0

When assigning an ECS TaskARN role to provide access to the s3 backend, the Vault container fails on startup with a Go runtime panic:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x66a126]

goroutine 1 [running]:
net/http.(*Client).deadline(0x0, 0xc42000c1e8, 0xc4204e8aa0, 0x1)
#011/goroot/src/net/http/client.go:186 +0x26
net/http.(*Client).Do(0x0, 0xc4204c2900, 0xc4204f8388, 0xc4204f8380, 0xc4204459c0)
#011/goroot/src/net/http/client.go:497 +0x89
github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/corehandlers.sendFollowRedirects(0xc4204c5c00, 0x1c18c18, 0xc4204c5c00, 0xc4204c2800)
#011/gopath/src/github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/corehandlers/handlers.go:134 +0x3b
github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/corehandlers.glob..func3(0xc4204c5c00)
#011/gopath/src/github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/corehandlers/handlers.go:126 +0x85
github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/request.(*HandlerList).Run(0xc4204c5d90, 0xc4204c5c00)
#011/gopath/src/github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go:195 +0x87
github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/request.(*Request).Send(0xc4204c5c00, 0x0, 0x0)
#011/gopath/src/github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/request/request.go:480 +0x191
github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/credentials/endpointcreds.(*Provider).getCredentials(0xc420445d40, 0xc4204e9240, 0x7fc4d9d6a000, 0x0)
#011/gopath/src/github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go:156 +0x12f
github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/credentials/endpointcreds.(*Provider).Retrieve(0xc420445d40, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x7fc4d9d1a9c0, ...)
#011/gopath/src/github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go:114 +0x5e
github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/credentials.(*ChainProvider).Retrieve(0xc4204d9d10, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x19b8ee0, ...)
#011/gopath/src/github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/credentials/chain_provider.go:77 +0xc9
github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/credentials.(*Credentials).Get(0xc4204a7d40, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
#011/gopath/src/github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/credentials/credentials.go:208 +0x13a
github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/signer/v4.Signer.signWithBody(0xc4204a7d40, 0x0, 0x27de660, 0xc42000c1b0, 0x10100, 0x1c1c078, 0x0, 0xc4204c2700, 0x27e7320, 0xc4204e9220, ...)
#011/gopath/src/github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:338 +0x259
github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/signer/v4.signSDKRequestWithCurrTime(0xc4204c5800, 0x1c1c078, 0x0, 0x0, 0x0)
#011/gopath/src/github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:472 +0x2f4
github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/signer/v4.SignSDKRequest(0xc4204c5800)
#011/gopath/src/github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:416 +0x52
github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/request.(*HandlerList).Run(0xc4204c5970, 0xc4204c5800)
#011/gopath/src/github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go:195 +0x87
github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/request.(*Request).Sign(0xc4204c5800, 0x1c18c98, 0xc4204c5800)
#011/gopath/src/github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/request/request.go:337 +0xb0
github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/request.(*Request).Send(0xc4204c5800, 0x0, 0x0)
#011/gopath/src/github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/aws/request/request.go:473 +0x13d
github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/service/s3.(*S3).ListObjects(0xc42000c1c0, 0xc420445e00, 0x0, 0x0, 0x0)
#011/gopath/src/github.com/hashicorp/vault/vendor/github.com/aws/aws-sdk-go/service/s3/api.go:3887 +0x4d
github.com/hashicorp/vault/physical/s3.NewS3Backend(0xc4204d9b00, 0x27f8020, 0xc420445900, 0x2, 0xc42006ac00, 0x1, 0x0)
#011/gopath/src/github.com/hashicorp/vault/physical/s3/s3.go:98 +0x501
github.com/hashicorp/vault/command.(*ServerCommand).Run(0xc420398240, 0xc42000e110, 0x3, 0x3, 0x0)
#011/gopath/src/github.com/hashicorp/vault/command/server.go:215 +0xcf6
github.com/hashicorp/vault/vendor/github.com/mitchellh/cli.(*CLI).Run(0xc420399200, 0xc4204d8e70, 0x27, 0x1c18598)
#011/gopath/src/github.com/hashicorp/vault/vendor/github.com/mitchellh/cli/cli.go:235 +0x2d1
github.com/hashicorp/vault/cli.RunCustom(0xc42000e100, 0x4, 0x4, 0xc4204d8e40, 0x0)
#011/gopath/src/github.com/hashicorp/vault/cli/main.go:44 +0x4ea
github.com/hashicorp/vault/cli.Run(0xc42000e100, 0x4, 0x4, 0xc4200001a0)
#011/gopath/src/github.com/hashicorp/vault/cli/main.go:11 +0x56
main.main()
#011/gopath/src/github.com/hashicorp/vault/main.go:10 +0x64

Removing the ECS TaskARN role fixes the issue, but it would be nice to use IAM permissions assigned to the ECS task definition instead of the EC2 instance.

Adding the ECS TaskARN role adds this environment variable "AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/1235340c-b2e9-4f9b-8431-64acbd6addef" which, I believe, is required by the AWS SDK, which it then uses to attempt to fetch credentials.

Config:

{
  "backend": { "s3": { "bucket": "my-vault-bucket", "region": "us-east-1" } },
  "ha_backend": { "consul": { "address": "127.0.0.1:8500" } },
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "cluster_address": "0.0.0.0:8201",
      "tls_cert_file": "/etc/vault/ssl/my-cert.pem",
      "tls_key_file": "/etc/vault/ssl/my-cert.key"
    }
  }
}

issues in writing the secret file

I created a Docker container for Vault, but when I try vault init it shows the error below:

    / # vault init
    Error initializing Vault: Error making API request.

    URL: PUT http://0.0.0.0:1234/v1/sys/init
    Code: 400. Errors:

    * Vault is already initialized

I cannot write the secret file; I keep getting the above error. Can anyone help me with this issue?

Problem with wrapper docker container with versions 0.8+

Hi,

We have a Docker wrapper container for running Vault in our Mesosphere cluster. It's been working well with the vault:0.7.x images, but when I change my Dockerfile FROM to 0.8.0 or 0.8.1, the container does not start and gives no feedback as to why. It just stops right away. docker logs shows nothing for the stopped container and there is no stdout/stderr output.

Dockerfile:

# Vault wrapper container to inject env variables, etc

FROM vault:0.8.1
MAINTAINER Motus DevOps

# Install curl and jq 
# (this is based on Alpine Linux - apk is the package manager)
RUN apk update
RUN apk add curl jq openssh

ADD vaultwrapper.sh /usr/local/bin/

ENTRYPOINT ["/usr/local/bin/vaultwrapper.sh"]

vaultwrapper.sh:

#!/bin/sh
# Vault wrapper to set hostname/port in our marathon.

# If you have a borked vault znode
if [ -z "$VAULT_ZK_PATH" ]; then
    export VAULT_ZK_PATH="vault/"
    exit 1
fi

# Get instance ID using AWS's api.
INSTANCEID=`curl http://169.254.169.254/latest/meta-data/instance-id`

# Log level defaults to info
if [[ -z "$LOG_LEVEL" ]]; then
    LOG_LEVEL=info
fi

# Find the ZK string by querying Marathon
# We are passing the Marathon Master URL in via ansible.
ZK_URL=`curl -s -X GET $MARATHON_MASTER/v2/info | \
	jq --raw-output '.marathon_config.master' | \
	sed -e 's$zk://$$g' | sed -e 's$/mesos$$g'`

# Create a config file
# Note, we need to mount the certs at /etc/tls/.
CFG_FILE=$(mktemp)
cat > ${CFG_FILE} << EOF
backend "zookeeper" {
  address = "$ZK_URL"
  redirect_addr = "$VAULT_ADVERTISE_ADDR"
  cluster_addr = "$HOST:$PORT1"
  path = "$VAULT_ZK_PATH"
}

listener "tcp" {
  address = "0.0.0.0:8200"
  cluster_address = "0.0.0.0:8201"
  tls_cert_file = "/etc/tls/motusclouds_com.crt"
  tls_key_file = "/etc/tls/privkey.pem"
}

disable_mlock = true
EOF

# Call the vault executable now. Pass the configfile we just created and any other args.
/bin/vault server --log-level=$LOG_LEVEL -config=$CFG_FILE "$@"

I checked out https://github.com/hashicorp/docker-vault and the Dockerfile/entrypoint script appear to be the same except for the version. Did something change in the application between 0.7.3 and 0.8.0 that would fail silently?

Thanks
Matt

Ability to enable Audit logs

Hi,

It'd be nice if there were some way to enable the audit log functionality and have it printed to stdout.

At the moment I can't figure out an easy way of using the log volume and monitoring the file, as well as rotating it.

vault Operation not permitted

When running the latest vault container or 0.6.4, I receive the error Operation not permitted when trying to run the vault command, as if noexec were set on the binary. Moving back to 0.6.3 fixes the issue.

Vault Enterprise Docker image with trial license

Request to add Vault Enterprise edition docker image to DockerHub's library including a default trial license to test out enterprise features and API.

This is what Elastic.co does with their Elasticsearch X-Pack platinum edition docker image. You can then enter the license before the in-built trial expires.

Docker image has multiple issues that prevent it from being used out-of-the-box

We're using docker run to start the image. Our Docker version is 17.03.01-ce.

We get the following error:

Couldn't start vault with IPC_LOCK. Disabling IPC_LOCK, please use --privileged or --cap-add IPC_LOCK

With all of the following docker run permutations:

  • docker run vault
  • docker run vault --cap-add IPC_LOCK
  • docker run vault --privileged
  • docker run vault server
  • docker run vault server --cap-add IPC_LOCK
  • docker run vault server --privileged

We then pulled the Vault Docker image locally. We set the SKIP_SETCAP env var to 1 and updated the CMD property at the end. We tried the following commands and received the following errors:

  • Running the Docker image with CMD set to ["server"]:

    Error initializing core: Failed to lock memory: cannot allocate memory

  • Running the Docker image with CMD set to ["server", "--cap-add", "IPC_LOCK"]:

    flag provided but not defined: -cap-add

  • Running the Docker image with CMD set to ["vault", "server", "--cap-add", "IPC_LOCK"]:

    flag provided but not defined: -cap-add

  • Running the Docker image with CMD set to ["server", "--privileged"]:

    flag provided but not defined: -privileged

  • Running the Docker image with CMD set to ["vault", "server", "--privileged"]:

    flag provided but not defined: -privileged

There are various closed GitHub issues regarding these problems and other tangential ones - #24, #17, #18 - but none of those solutions apply to this situation (plus, they apply to an earlier version of the image, not 0.7.0).

These issues have rendered this image useless for our team; we've begun rolling our own in the meantime.

Startup tasks?

Typically, more configurable Docker images make use of run-parts and an /entrypoint.d/ or /docker-entrypoint.d/ directory, allowing users of the image to run pre-flight tasks before launching the image's main process (vault in this case).

Directing users to prefix their entrypoint.d/ files with guards that bail early makes the distinction between "always runs" and "only runs on first boot".

Obvious candidates for pre-flight tasks might be:

  • create collection of known users
  • import data
  • cancel container startup or continue depending on state of external thing
  • notify something

In order to keep the base image clean of superfluous features, this approach lets image users customise per scenario without forcing us to create and upload yet another image to Docker Hub or a private registry, assuming circumstances even allow for such a tedious task.

example of how such a thing might work is as follows:

#!/bin/dumb-init /bin/sh

...

# Run any executable pre-flight hooks before launching the main process.
if [ -d /docker-entrypoint.d ]; then
    for f in /docker-entrypoint.d/*; do
        [ -x "$f" ] && "$f"
    done
fi

exec "$@"

Running Vault on Docker for Mac

I'm trying to run a dev instance of Vault on Docker for Mac. When I try running:
docker run -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' -e 'VAULT_DEV_LISTEN_ADDRESS=127.0.0.1:8200' vault
and get its status:
vault status
I get the following error.
Error checking seal status: Get http://127.0.0.1:8200/v1/sys/seal-status: dial tcp 127.0.0.1:8200: getsockopt: connection refused.

When I tried changing the listener address to 0.0.0.0, I got the same results.
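Two things have to line up for the host CLI to reach the container: the dev listener must bind a non-loopback address, and the port must be published. The command above does neither (it binds 127.0.0.1 inside the container and has no -p flag), which is why 0.0.0.0 alone didn't help. A sketch:

```shell
docker run -p 8200:8200 --cap-add=IPC_LOCK \
    -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' \
    -e 'VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:8200' vault
export VAULT_ADDR=http://127.0.0.1:8200
vault status
```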

Failing to mount plugin

Attempting to mount my custom plugin in our production environment is failing with the following error:

vault mount -path=gitlab -plugin-name=gitlab plugin
Mount error: Error making API request.

URL: POST https://borgias.foo.com/v1/sys/mounts/gitlab
Code: 400. Errors:

* plugin exited before we could connect

Testing it running the same Docker image on my local workstation, running the Vault server in dev mode works fine.

The only two major differences that I can think is that

  1. I have to use docker cp to copy the plugin binary into the container, because in production the filesystem is mounted with the noexec flag.

  2. The host environment:
    Linux borgias 4.4.79+ #1 SMP Wed Nov 8 17:00:14 PST 2017 x86_64 Intel(R) Xeon(R) CPU @ 2.60GHz GenuineIntel GNU/Linux

exec format error for custom plugins

Attempting to enable the vault-auth-plugin-example plugin (https://github.com/hashicorp/vault-auth-plugin-example) on Docker vault image 0.8.3 results in the following error:

Error: Error making API request.

URL: POST http://127.0.0.1:8200/v1/sys/auth/example
Code: 400. Errors:

* fork/exec /vault/file/vault-plugins/vault-auth-example: exec format error

My configuration is:

VAULT_LOCAL_CONFIG='{ "plugin_directory": "/vault/file/vault-plugins" }'

And the plugin is bind mounted from the host into the directory, other than that there are no special parameters provided to the Docker host.
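An "exec format error" from fork/exec almost always means the binary does not match the container's platform. Since the image is Linux (Alpine-based), one common fix is to build the plugin as a static Linux binary for the container's architecture; a sketch, with an illustrative output path:

```shell
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
    go build -o /vault/file/vault-plugins/vault-auth-example .
```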

--cap-add=IPC_LOCK unavailable in docker swarm

In the documentation for the vault container, it states..

The container will attempt to lock memory to prevent sensitive values from being swapped to disk and as a result must have --cap-add=IPC_LOCK provided to docker run.

I'm trying to run the vault container in a docker swarm. I'm running into the problem that the cap-add functionality is not available for docker swarm. See this issue.

While there is some talk on a potential solution that would address, among other things, the missing cap-add functionality in docker swarm, I was wondering if you could provide some guidance as to a method of running vault within a docker swarm. Thanks!

Vault in production is already initialized and sealed

I'm trying to set up Vault in production mode. I am passing one variable to the setup -- VAULT_LOCAL_CONFIG, which specifies a Postgres storage backend.

When I try to run vault operator init, Vault says it's already been initialized -- and sealed! How is this possible if I haven't created a seal key? How do I get the seal key that, presumably, has been autogenerated?

Error initializing core: Failed to lock memory: cannot allocate memory

I ran in to #24 trying to set up the Vault Docker image. When I try the suggested workaround ("-e SKIP_SETCAP=true") I get this error:

ubuntu@svs-vault:~/vault-docker/vault$ sudo docker run -e SKIP_SETCAP=true --name vaultc -v $PWD:/vault --expose 8200 --cap-add IPC_LOCK vault server
Error initializing core: Failed to lock memory: cannot allocate memory

This usually means that the mlock syscall is not available.
Vault uses mlock to prevent memory from being swapped to
disk. This requires root privileges as well as a machine
that supports mlock. Please enable mlock on your system or
disable Vault from using it. To disable Vault from using it,
set the `disable_mlock` configuration option in your configuration
file.

Same problem if I use --privileged too...

I'm running on Ubuntu Xenial.

uname -a:

Linux svs-vault 4.4.0-83-generic #106-Ubuntu SMP Mon Jun 26 17:54:43 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

Docker version (from official Docker repository):

Docker version 17.06.2-ce, build cec0b72

Vault docker image not starting

Heya guys!

Any ideas on what's wrong?

I pulled the latest version of vault from dockerhub, and get the following error when starting:

Failed to set capabilities on file `/bin/vault' (Not supported)
usage: setcap [-q] [-v] (-r|-|) [ ... (-r|-|) ]

Note must be a regular (non-symlink) file.

Here is my docker info

root  ~  docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 14
Server Version: 17.12.0-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 115
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 89623f28b87a6004d4b785663257362d1658a729
runc version: b2567b37d7b75eb4cf325b77297b140ea686ce8f
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.74-2-MANJARO
Operating System: Manjaro Linux
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.53GiB
Name: pc
ID: QDTM:RESB:ZLFA:BVS7:2RSH:IUOD:VOU6:NH2B:NIMU:P3BU:WM3E:KSYJ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username:
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
10.1.135.10:5000
127.0.0.0/8
Live Restore Enabled: false

Dependency on docker-base

Hi,

[This is a more generic question about HashiCorp's docker-base than a specific vault issue, but there's no issue tracker in the docker-base repo. Please let me know if there's a better forum for this.]

I've been toying with creating a docker image for the arm platform, and noted that according to Vault #29 that one of the problems with multi-platform official images is the dependency on docker-base.

That got me wondering as to the necessity for docker-base today. From my understanding the only things needed from docker-base are dumb-init and gosu. Both of these seem to be supported natively by docker at runtime today using --init and --user flags.

Yes, these require run-time options, but so does mlock support anyway...

Would it make sense to use those as the defaults instead of requiring docker-base?
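The flags mentioned above can be sketched as a drop-in for what docker-base provides (a hedged example only; the mount path is illustrative, and `--user vault` assumes the image's vault user):

```shell
# --init supplies Docker's built-in init process (replacing dumb-init);
# --user drops root at container start (replacing gosu)
docker run --init --user vault \
  --cap-add=IPC_LOCK \
  -v "$PWD/config:/vault/config" \
  vault server
```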

Example Vault Listen Address Issue

On the Vault Docker Hub page, the example commands under "Running Vault for Development" use this example listen address:

VAULT_DEV_LISTEN_ADDRESS=127.0.0.1:1234

However, the 127.0.0.1 IP will not work (tested on both Ubuntu 16.04 and macOS Sierra) and will result in refused connections to the Vault container. Instead, users should use something like:

VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:9999

It's unfortunate that what was obviously intended to be an example port config is in fact invalid and results in a number of obscure failures, none of which hints at the underlying issue. Please update the Docker Hub page to use this or a similarly valid example for the VAULT_DEV_LISTEN_ADDRESS parameter.
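For reference, a hedged example of a dev-mode invocation that binds to all interfaces (the port and container name are illustrative):

```shell
# Bind the dev listener to 0.0.0.0 so the published port is reachable
docker run -d --name=dev-vault --cap-add=IPC_LOCK \
  -p 8200:8200 \
  -e 'VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:8200' \
  vault
```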

Thank you!

Consul backend on macOS

I'm fairly new to Docker, so I'm confident I'm making some dumb mistakes, but I wanted to set up the Vault container alongside a Docker Consul cluster. My eventual goal is to run one instance of Vault (for now) and three instances of Consul with one client node.

Just to get things up and running, the vault config looks like:

# config/local.hcl
backend "consul" {
  address = "172.17.0.5:8500"
  scheme = "http"
  tls_skip_verify = 1
}

listener "tcp" {
  address = "127.0.0.1:8200"
  tls_disable = 1
}

Then my docker startup script looks like this:

docker run -d --name consul1 -h consul1 consul agent -server -bootstrap-expect 3
JOIN_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' consul1)

docker run -d --name consul2 -h consul2 consul agent -server -retry-join $JOIN_IP
docker run -d --name consul3 -h consul3 consul agent -server -retry-join $JOIN_IP
docker run -d --name consul-client -h consul-client \
   -p 8500:8500 consul agent -client=0.0.0.0 -retry-join $JOIN_IP -ui

docker run -d -h vault --name vault -p 8200:8200 \
  --cap-add IPC_LOCK --link consul-client --volume $PWD/config:/vault/config \
  vault server

When I try to initialize vault, I get the following error:

$ vault init --key-shares=1 --key-threshold=1
Error initializing Vault: Put http://127.0.0.1:8200/v1/sys/init: EOF

To test out if I messed something up with the networking, I did the following:

$ docker exec -it vault /bin/sh
/ # apk add --no-cache curl
/ # curl http://172.17.0.5:8500/v1/status/peers
["172.17.0.2:8300","172.17.0.3:8300","172.17.0.4:8300"]

I also made sure that my localhost is listening on port 8200.

$ lsof -i :8200
COMMAND    PID USER   FD   TYPE             DEVICE SIZE/OFF NODE NAME
com.docke 2237 ryan   20u  IPv4 0xc9990e2d78428f6f      0t0  TCP *:trivnet1 (LISTEN)
com.docke 2237 ryan   23u  IPv6 0xc9990e2d77729b9f      0t0  TCP localhost:trivnet1 (LISTEN)

Which means (I think) that I linked the containers properly. I'm really confused why things aren't working and any help would be appreciated.

Vault audit-enable on Windows Docker gives error

I installed the Vault Docker image on Windows.
When I run

docker exec <container name> vault audit-enable file file_path=/var/logs/vault_audit.log

It gives the following error

Error enabling audit backend: Error making API request.
URL: PUT http://127.0.0.1:8200/v1/sys/audit/file
Code: 400. Errors:
* missing client token

Even when I run vault audit-enable inside the container's shell, it still gives me the same error. Does this mean all CLI commands get translated into web requests to the API in the Vault Docker image? Any help will be appreciated.

Configuration Discrepancy

Comparing the documentation on the Vault site, the storage backend configuration for a server is inconsistent with this image. While that documentation hints at the key that works ("backend" as opposed to "storage"), the image does not seem to load and parse HCL files either. A few things I had to tweak to get it to work with JSON:

  • "storage" key is invalid. Had to use "backend" ("backend": {"consul": {}})
  • Does not parse boolean. Had to use 0 or 1 ("tls_skip_verify": 1)
  • Time does not recognize "d" as a unit. Had to convert to hours ("default_lease_ttl": "168h")
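Putting those tweaks together, a minimal local.json along these lines reflects what worked (a config sketch only; the Consul address is a placeholder):

```json
{
  "backend": {
    "consul": {
      "address": "consul:8500",
      "tls_skip_verify": 1
    }
  },
  "default_lease_ttl": "168h",
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }
}
```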

Add jq

I'm using Vault to generate PKI certs, and jq would be really helpful for splitting the Vault response into individual cert files.
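As a sketch of why jq helps here, splitting a PKI issue response into files looks roughly like this (the JSON stand-in only mimics the shape of a `vault write -format=json` response; real responses carry PEM data):

```shell
# Stand-in for a PKI issue response saved with -format=json
cat > response.json <<'EOF'
{"data": {"certificate": "dummy-cert", "private_key": "dummy-key"}}
EOF

# jq -r prints raw string values, one per output file
jq -r '.data.certificate' response.json > cert.pem
jq -r '.data.private_key' response.json > key.pem
cat cert.pem
```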

Failed to unseal vault

Hi,
Vault fails to unseal with this error:
failed to update entity in MemDB: alias "xxxxx-f6fd-6288-f715-yyyyyyy" in already tied to a different

==> Vault server configuration:

                 Cgo: disabled
          Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", tls: "enabled")
           Log Level:
               Mlock: supported: true, enabled: true
             Storage: mysql
             Version: Vault v0.9.1
         Version Sha: 87b6919dea55da61d7cd444b2442cabb8ede8ab1

==> Vault server started! Log data will stream in below:

2017/12/24 22:20:21.950047 [INFO ] core: vault is unsealed
2017/12/24 22:20:21.950515 [INFO ] core: post-unseal setup starting
2017/12/24 22:20:21.950922 [INFO ] core: loaded wrapping token key
2017/12/24 22:20:21.950940 [INFO ] core: successfully setup plugin catalog: plugin-directory=
2017/12/24 22:20:21.951935 [INFO ] core: successfully mounted backend: type=generic path=secret/
2017/12/24 22:20:21.952073 [INFO ] core: successfully mounted backend: type=system path=sys/
2017/12/24 22:20:21.952593 [INFO ] core: successfully mounted backend: type=identity path=identity/
2017/12/24 22:20:21.952616 [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2017/12/24 22:20:21.955337 [INFO ] expiration: restoring leases
2017/12/24 22:20:21.955431 [INFO ] rollback: starting rollback manager
2017/12/24 22:20:21.965934 [INFO ] core: pre-seal teardown starting
2017/12/24 22:20:21.965952 [INFO ] core: cluster listeners not running
2017/12/24 22:20:22.108049 [INFO ] rollback: stopping rollback manager
2017/12/24 22:20:22.108205 [INFO ] core: pre-seal teardown complete
2017/12/24 22:20:22.108273 [ERROR] core: post-unseal setup failed: error=failed to update entity in MemDB: alias "xxxxx-9937-8e3f-ab4e-xxxxx" in already tied to a different entity "yyyyyyy-0adc-a56a-7b4b-yyyyyyy"
2017/12/24 22:20:22.108313 [WARN ] core: vault is sealed

Unable to initialise Vault - no decent error message to put here

As you can see in my attempt to initialise the Vault tool below, I have issues but no real error message.

$ vault operator init -address=http://127.0.0.1:8200
Error initializing: Error making API request.

URL: PUT http://127.0.0.1:8200/v1/sys/init
Code: 400. Errors:

* failed to check for initialization: context deadline exceeded

This is a Docker Swarm deployment with the following:

version: '3.5'
services:
  vault:
    image: vault:0.9.5
    command: vault server -config /config.hcl
    configs:
      - source: vault_hcl
        target: /config.hcl
        mode: 0440
    environment:
      SKIP_SETCAP: "true"
    ports:
      - 8200:8200

configs:
  vault_hcl:
    file: ./config.hcl

./config.hcl

disable_mlock = true
default_lease_ttl = "168h"
max_lease_ttl = "720h"

listener "tcp" {
  address = "0.0.0.0:8200"
  tls_disable = true
}

storage "etcd" {
    address = "http://any_swarm_ip:2379"
    etcd_api = "v3"
}

Logs

vault_1  | ==> Vault server configuration:
vault_1  |
vault_1  |                      Cgo: disabled
vault_1  |               Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", tls: "disabled")
vault_1  |                Log Level: info
vault_1  |                    Mlock: supported: true, enabled: false
vault_1  |                  Storage: etcd (HA disabled)
vault_1  |                  Version: Vault v0.9.5
vault_1  |              Version Sha: 36edb4d42380d89a897e7f633046423240b710d9
vault_1  |
vault_1  | ==> Vault server started! Log data will stream in below:
vault_1  |
vault_1  | 2018/03/08 11:33:35.916606 [ERROR] core: barrier init check failed: error=failed to check for initialization: context deadline exceeded
vault_1  | 2018/03/08 11:35:00.395117 [ERROR] core: barrier init check failed: error=failed to check for initialization: context deadline exceeded
vault_1  | 2018/03/08 11:39:14.567394 [ERROR] core: barrier init check failed: error=failed to check for initialization: context deadline exceeded
vault_1  | 2018/03/08 11:39:26.870346 [ERROR] core: barrier init check failed: error=failed to check for initialization: context deadline exceeded
vault_1  | 2018/03/08 11:44:39.518830 [ERROR] core: barrier init check failed: error=failed to check for initialization: context deadline exceeded
vault_1  | 2018/03/08 11:45:11.857957 [ERROR] core: barrier init check failed: error=failed to check for initialization: context deadline exceeded

I've exec'd into the container, pulled down etcdctl, and confirmed that a member list works fine.

Any help would be appreciated :)

Container creates its own volumes for the filesystem backend and the logging backend

From the Docker host, I am mapping (with -v) /var/lib/vault to /vault on the container. Under this directory I have config, file and log directories. In the HCL under the config directory, I am configuring the filesystem backend with:

storage "file" {
  path = "/vault/file"
}

But the files Vault writes there in the container don't show up on the host.

bash-4.2# docker exec vault ls -l /vault/file
total 0
drwxr-xr-x    4 vault    vault          190 Oct 16 21:02 core
drwxr-xr-x    4 vault    vault           33 Oct 16 21:02 sys
bash-4.2# ls -l /var/lib/vault/file
total 0
bash-4.2#

When I look at docker inspect, I see that the container is overriding the mount with its own volume:

        "Mounts": [
            {
                "Source": "/var/lib/vault",
                "Destination": "/vault",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Name": "501e025f5b60bb2507cbbc2d68798e6c88f5c4e7fa8dfc17b1235cef092ba07c",
                "Source": "/var/lib/docker/volumes/501e025f5b60bb2507cbbc2d68798e6c88f5c4e7fa8dfc17b1235cef092ba07c/_data",
                "Destination": "/vault/file",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            },

How am I supposed to persist the Vault data if it's creating its own volume every time?
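One hedged workaround: the image's Dockerfile declares /vault/file as a VOLUME, so Docker mounts an anonymous volume over that path even when the parent directory is bind-mounted. Bind-mounting the host subdirectories directly at the declared paths avoids the shadowing (paths taken from the setup above):

```shell
# Mount each subdirectory explicitly so no anonymous volume shadows it
docker run -d --cap-add=IPC_LOCK \
  -v /var/lib/vault/config:/vault/config \
  -v /var/lib/vault/file:/vault/file \
  -v /var/lib/vault/logs:/vault/logs \
  vault server
```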

Running vault as a user

Comparing the Consul and Vault Dockerfiles, and their running containers, I noticed Vault doesn't have its own user to drop root privileges on start.

Is there a specific capability required by Vault besides IPC_LOCK?
Could we simply use setcap on the filesystem, or --cap-add from Docker, as a workaround to make the container more secure?

cheers!

setcap fails on aufs systems

The setcap command itself fails on some systems. I'm seeing this when trying to run a shell as the command (without changing the entrypoint) in 0.6.4:

Failed to set capabilities on file `/bin/vault' (Not supported)
usage: setcap [-q] [-v] (-r|-|<caps>) <filename> [ ... (-r|-|<capsN>) <filenameN> ]

 Note <filename> must be a regular (non-symlink) file.

I'm even seeing this when I pass --cap-add IPC_LOCK to the run command, running on kernel 3.16.0-4-amd64 (Debian); the OS filesystem is ext4 and the Docker storage driver is aufs. Spinning up the container with a shell as my entrypoint, I can also see:

/ # getcap /bin/vault
Failed to get capabilities of file `/bin/vault' (Not supported)

Since the entrypoint starts with set -e, I believe the script will immediately error out on the first setcap command even if PR #18 is included. The best solution may be to wrap that section of code in an if check, but I haven't had a chance to test this:

if getcap "$(readlink -f "$(which vault)")" >/dev/null 2>&1; then
    # capabilities are supported on this filesystem, so setcap is safe to run
    ...
fi

vault:latest fails, possibly due to bad setcap call in docker-entrypoint.sh

Steps to Reproduce

$ docker run -d --name=dev-vault vault
6b46031c562b6935d80f4e8336fdeb7d90e5795bfb1aee29243a24e369a6616f
$ docker logs dev-vault
error: exec failed: operation not permitted

Version Info

docker version: docker for mac 1.12.3
image: vault:latest

Additional Info

The Apcera platform, which also runs Docker containers, points the finger at setcap().

Failed to set capabilities on file `/bin/vault' (Operation not permitted)
usage: setcap [-q] [-v] (-r|-|<caps>) <filename> [ ... (-r|-|<capsN>) <filenameN> ]
Note <filename> must be a regular (non-symlink) file.

Fail to start 0.6.4 (failed to set capabilities)

Hi

Using the following docker-compose file

  vault:
    image: "vault:0.6.4"
    cap_add:
      - IPC_LOCK

I'm getting the following error on Linux Mint:

vault_1         | Failed to set capabilities on file `/bin/vault' (Not supported)
vault_1         | usage: setcap [-q] [-v] (-r|-|<caps>) <filename> [ ... (-r|-|<capsN>) <filenameN> ]
vault_1         | 
vault_1         |  Note <filename> must be a regular (non-symlink) file.

Docker version 1.12.5

I can add the IPC_LOCK capability to files outside Docker without issues, if that helps narrow it down.

Can you please advise on next steps to debug the issue?

Thanks!

Nico

Docker-vault on Openshift

I am trying to run docker-vault on OpenShift, but by default it does not allow me to run a container as root. I believe this is what causes the error:
/usr/local/bin/docker-entrypoint.sh: line 18: can't create /vault/config/local.json: Permission denied
Is it possible to change the file and folder permissions so it can run as an "ordinary" user?

As another issue (I can open a separate one if needed), I changed the OpenShift configuration to run the container as root, but then got the following error:
Error loading configuration from /vault/config: Error loading /vault/config/local.json: At -: root.disable_mlock: unknown type *ast.LiteralType

The command I used to create an application in both cases was:
oc new-app vault 'VAULT_LOCAL_CONFIG={"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h", "disable_mlock": "true"}'

Thanks in advance,

Openshift

I'm getting this message, when I'm trying to deploy vault image on openshift:

can't create /vault/config/local.json: Permission denied

My configmap is:

{
	"backend": {
		"file": {
			"path": "/vault/file"
		}
	},
	"default_lease_ttl": "168h",
	"max_lease_ttl": "720h",
	"disable_mlock": true,
	"listener": {
		"tcp": {
			"address": "0.0.0.0:8200",
			"tls_cert_file": "/var/run/secrets/kubernetes.io/certs/tls.crt",
			"tls_key_file": "/var/run/secrets/kubernetes.io/certs/tls.key"
		}
	}
}

The DeploymentConfig is:

apiVersion: v1
kind: Service
metadata:
  name: vault
  annotations:
    service.alpha.openshift.io/serving-cert-secret-name: vault-cert
  labels:
    app: vault
spec:
  ports:
  - name: vault
    port: 8200
  selector:
    app: vault
---
apiVersion: v1
kind: DeploymentConfig
metadata:
  labels:
    app: vault
  name: vault
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vault
    spec:
      containers:
      - image: vault:latest
        name: vault
        ports:
        - containerPort: 8200
          name: vaultport
          protocol: TCP
        args:
        - server
        - -log-level=debug    
        env:
        - name: SKIP_SETCAP
          value: 'true' 
        - name: VAULT_LOCAL_CONFIG
          valueFrom:
            configMapKeyRef:
              name: vault-config
              key: vault-config
        volumeMounts:      
        - name: vault-file-backend
          mountPath: /vault/file
          readOnly: false
        - name: vault-cert
          mountPath: /var/run/secrets/kubernetes.io/certs
        livenessProbe:
          httpGet:
            path: 'v1/sys/health?standbyok=true&standbycode=200&sealedcode=200&uninitcode=200'
            port: 8200
            scheme: HTTPS
        readinessProbe:
          httpGet:
            path: 'v1/sys/health?standbyok=true&standbycode=200&sealedcode=200&uninitcode=200'
            port: 8200
            scheme: HTTPS                                        
      volumes:
      - name: vault-file-backend
        persistentVolumeClaim:
          claimName: vault-file-backend
      - name: vault-cert
        secret:
          secretName: vault-cert          
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: vault-file-backend
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Any ideas?

Was working, now I get "Permission denied" errors, how do I get it to work again?

I spun up the container as mentioned on the docker hub page: docker run --cap-add=IPC_LOCK -d --name=dev-vault vault

It had been working for over a week. Today I went to use it and got permission denied errors every time I tried to do something. I have no idea why it's telling me this, how to diagnose the root issue, or how to get back to authenticating. This is only for my local testing, and I'd really rather not lose the configuration values I set up.

I still have access to the original root token but it doesn't seem to be valid, e.g. I run a command like vault auth <original root token> and it still responds with permission denied.

Vault docker image disabling IPC_LOCK

docker --version
Docker version 1.12.5, build 7392c3b
vault --version
Vault v0.6.4 ('f4adc7fa960ed8e828f94bc6785bcdbae8d1b263')

docker run --cap-add=IPC_LOCK -d --name=dev-vault vault

docker logs dev-vault
Couldn't start vault with IPC_LOCK. Disabling IPC_LOCK, please use --privileged or --cap-add IPC_LOCK
error: exec failed: text file busy
root@ip-10-52-140-72:~# docker info
Containers: 3
 Running: 2
 Paused: 0
 Stopped: 1
Images: 2
Server Version: 1.12.5
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 15
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null host bridge overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-53-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.952 GiB
Name: ip-10-52-140-72
ID: 4C2S:V7AN:A6VX:27EW:G4BI:5FQD:XZN4:U3UD:QOWS:MJJ3:G4C5:6OIY
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
 127.0.0.0/8

/vault/file is no longer readable by vault

With the 0.6.3 update that runs Vault as the vault user, files in a volume under /vault/file (and likely /vault/logs) are no longer readable or writable by the vault user. This results in errors when starting Vault that reference "failed to read seal configuration: error=open /vault/file/core/_seal-config: permission denied".

This can be reproduced by running:

$ mkdir config-test
$ cat >config-test/local.json <<EOF
{
  "disable_mlock": true,
  "default_lease_ttl": "168h",
  "max_lease_ttl": "2160h",
  "backend": {"file": {"path":"/vault/file"}},
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": "true"
    }
  }
}
EOF
$ docker run -d -v vault-test:/vault/file -v `pwd`/config-test:/vault/config:ro --name vault-test vault:0.6.2 server
$ docker exec -it vault-test vault init -address=http://127.0.0.1:8200
$ docker stop vault-test
$ docker rm vault-test
$ docker run -d -v vault-test:/vault/file -v `pwd`/config-test:/vault/config:ro --name vault-test vault:0.6.4 server
$ docker logs vault-test # see error from /vault/config being read only
$ docker rm vault-test
$ docker run -d -v vault-test:/vault/file -v `pwd`/config-test:/vault/config --name vault-test vault:0.6.4 server
$ docker logs vault-test # see operation not permitted error from missing IPC_LOCK capability
$ docker rm vault-test
$ docker run -d -v vault-test:/vault/file -v `pwd`/config-test:/vault/config --name vault-test --cap-add=IPC_LOCK vault:0.6.4 server
$ docker exec -it vault-test vault status -address=http://127.0.0.1:8200 # see error from /vault/file/core/_seal-config: permission denied

I'd like to see support for read-only config volumes return; it shouldn't matter who owns those files as long as the vault user can read them. More importantly, files in /vault/file and /vault/logs should have their ownership adjusted.

vault can no longer be started without escalated privileges

Since the addition of this snippet:

# Allow mlock to avoid swapping Vault memory to disk
setcap cap_ipc_lock=+ep $(readlink -f $(which vault))

Vault can't be started without obtaining extra privileges, which means the container can't be used for any operation that doesn't touch memory locking at all (using it as a client, for example) unless it is granted IPC_LOCK.

The good part is that the entrypoint script might offer a solution. If we can detect whether the vault executable is still runnable after the setcap, we should be able to revert the setcap (with a big warning saying that IPC_LOCK isn't available within the container).

If the automatic detection isn't a good enough solution, we could instead have an env variable which, if set, prevents the setcap from being executed in the first place.

My preference would be the first option, as it makes things easier for people to use (instead of failing and leaving them to figure out why the vault executable stopped working), but the second would work as well.
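A rough, untested sketch of how the entrypoint could combine both ideas (the `vault version` probe and the SKIP_SETCAP escape hatch are assumptions about how the detection and opt-out might be wired up):

```shell
# Sketch: grant mlock via setcap, then back it out if vault can no longer run
vault_bin="$(readlink -f "$(which vault)")"
if [ -z "$SKIP_SETCAP" ]; then
    setcap cap_ipc_lock=+ep "$vault_bin" || true
    # If the binary now fails to execute, the granted capability can't be
    # honored; remove it and warn instead of failing the container
    if ! "$vault_bin" version > /dev/null 2>&1; then
        setcap -r "$vault_bin" || true
        echo "WARNING: IPC_LOCK unavailable in this container; mlock disabled" >&2
    fi
fi
```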

Running dev server should automatically set VAULT_ADDR to http address

I would expect that if you run the Docker container with no configuration, you would be able to immediately use the vault CLI against the dev server inside the container, but I had to set VAULT_ADDR to the HTTP address first.

Or is this by design so it forces a developer to learn to change VAULT_ADDR?
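In the meantime, the workaround is a single environment variable on the host (the address assumes the default dev port published at 8200):

```shell
# Point the vault CLI at the containerized dev server over plain HTTP
export VAULT_ADDR='http://127.0.0.1:8200'
```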
