
concourse-up's Introduction

Deprecated - use Control Tower instead

Concourse-Up has been replaced with Control Tower. First-time users should deploy using control-tower and raise issues under that project.


Concourse-Up (Deprecated)


A tool for easily deploying Concourse in a single command.

TL;DR

AWS
$ AWS_ACCESS_KEY_ID=<access-key-id> \
  AWS_SECRET_ACCESS_KEY=<secret-access-key> \
  concourse-up deploy <your-project-name>
GCP
$ GOOGLE_APPLICATION_CREDENTIALS=<path/to/googlecreds.json> \
  concourse-up deploy --iaas gcp <your-project-name> 

Why Concourse-Up?

The goal of Concourse-Up is to be the world's easiest way to deploy and operate Concourse CI in production.

In just one command you can deploy a new Concourse environment for your team, on either AWS or GCP. Your Concourse-Up deployment will upgrade itself and self-heal, restoring the underlying VMs if needed. Using the same command-line tool you can do things like manage DNS, scale your environment, or manage firewall policy. CredHub is provided for secrets management and Grafana for viewing your Concourse metrics.

You can keep up to date on Concourse-Up announcements by reading the EngineerBetter Blog

Feature Summary

  • Deploys the latest version of Concourse CI on any region in AWS or GCP
  • Manual upgrade or automatic self-upgrade
  • Access your Concourse over HTTPS by default, with an auto-generated or self-provided certificate
  • Deploy on your own domain, if you have a zone in Route53 or Cloud DNS.
  • Scale your workers horizontally or vertically
  • Scale your Concourse database
  • Presents workers on a single public IP to simplify external security policy
  • Database encryption enabled by default
  • Includes Grafana metrics dashboard (see https://your-concourse-url:3000)
  • Includes CredHub for secret management (see: https://concourse-ci.org/creds.html)
  • Saves you money by using AWS spot or GCP preemptible instances where possible, restarting them when needed
  • Idempotent deployment and operations
  • Easy destroy and cleanup

Feature Table

Feature AWS GCP
Concourse IP whitelisting + +
Credhub + +
Custom domains + +
Custom tagging BOSH only BOSH only
Custom TLS certificates + +
Database vertical scaling + +
GitHub authentication + +
Grafana + +
Interruptible worker support + +
Letsencrypt integration + +
Namespace support + +
Region selection + +
Retrieving deployment information + +
Retrieving deployment information as shell exports + +
Retrieving deployment information in JSON + +
Retrieving director NATS cert expiration + +
Rotating director NATS cert + +
Self-Update support + +
Teardown deployment + +
Web server vertical scaling + +
Worker horizontal scaling + +
Worker type selection + N/A
Worker vertical scaling + +
Zone selection + +
Customised networking + +

Prerequisites

  • One of:
    • The environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set.
    • Credentials for the default profile in ~/.aws/credentials are present.
    • Credentials for a profile in ~/.aws/credentials are present.
    • The environment variable GOOGLE_APPLICATION_CREDENTIALS_CONTENTS is set to the path of a GCP credentials JSON file
  • Ensure your credentials are long-lived credentials and not temporary security credentials
  • Ensure you have the correct local dependencies for bootstrapping a BOSH VM

Install

Download the latest release and install it into your PATH

Usage

Global flags

  • --region value AWS or GCP region (default: "eu-west-1" on AWS and "europe-west1" on GCP) [$AWS_REGION]
  • --namespace value Any valid string that provides a meaningful namespace for the deployment - used as part of the configuration bucket name [$NAMESPACE].

    Note that if a namespace was provided on the initial deploy, it will be required for any subsequent concourse-up calls against the same deployment, as in the example below.
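
    For example (project and namespace names below are illustrative), a namespaced deployment and a later call against it might look like:

    $ concourse-up deploy --namespace staging ci
    $ concourse-up info --namespace staging ci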

Choosing an IAAS

The default IAAS for Concourse-Up is AWS. To choose a different IAAS, use the --iaas flag. For every IAAS provider apart from AWS, this flag is required for all commands.

Supported IAAS values: AWS, GCP

  • --iaas value (optional) IAAS, can be AWS or GCP (default: "AWS") [$IAAS]

Deploy

Deploy a new Concourse with:

concourse-up deploy <your-project-name>

eg:

$ concourse-up deploy ci

...
DEPLOY SUCCESSFUL. Log in with:
fly --target ci login --insecure --concourse-url https://10.0.0.0 --username  --password

Metrics available at https://10.0.0.0:3000 using the same username and password

Log into credhub with:
eval "$(concourse-up info ci --env)"

A new deploy from scratch takes approximately 20 minutes.

Flags

All flags are optional. Configuration settings provided via flags will persist in later deployments unless explicitly overridden.

  • --domain value Domain to use as endpoint for Concourse web interface (eg: ci.myproject.com) [$DOMAIN]

    $ concourse-up deploy --domain chimichanga.engineerbetter.com chimichanga

    In the example above concourse-up will search for a hosted zone that matches chimichanga.engineerbetter.com or engineerbetter.com and add a record to the longest match (chimichanga.engineerbetter.com in this example).

  • --tls-cert value TLS cert to use with Concourse endpoint [$TLS_CERT]

  • --tls-key value TLS private key to use with Concourse endpoint [$TLS_KEY]

    By default concourse-up will generate a self-signed cert using the given domain. If you'd like to provide your own certificate instead, pass the cert and private key as strings using the --tls-cert and --tls-key flags respectively. eg:

    $ concourse-up deploy \
      --domain chimichanga.engineerbetter.com \
      --tls-cert "$(cat chimichanga.engineerbetter.com.crt)" \
      --tls-key "$(cat chimichanga.engineerbetter.com.key)" \
      chimichanga
  • --workers value Number of Concourse worker instances to deploy (default: 1) [$WORKERS]

  • --worker-type Specify a worker type (m5 or m4) (default: "m4") [$WORKER_TYPE] (see comparison table below). Note: this is an AWS-specific option.

AWS does not offer m5 instances in all regions, and even for regions that do offer m5 instances, not all zones within that region may offer them. To complicate matters further, each AWS account is assigned AWS zones at random - for instance, eu-west-1a for one account may be the same as eu-west-1b in another account. If m5s are available in your chosen region but not the zone Concourse-Up has chosen, create a new deployment, this time specifying another --zone.

  • --worker-size value Size of Concourse workers. Can be medium, large, xlarge, 2xlarge, 4xlarge, 10xlarge, 12xlarge, 16xlarge or 24xlarge depending on the worker-type (see above) (default: "xlarge") [$WORKER_SIZE]

    --worker-size   AWS m4 instance type   AWS m5 instance type*   GCP instance type
    medium          t2.medium              t2.medium               n1-standard-1
    large           m4.large               m5.large                n1-standard-2
    xlarge          m4.xlarge              m5.xlarge               n1-standard-4
    2xlarge         m4.2xlarge             m5.2xlarge              n1-standard-8
    4xlarge         m4.4xlarge             m5.4xlarge              n1-standard-16
    10xlarge        m4.10xlarge            -                       n1-standard-32
    12xlarge        -                      m5.12xlarge             -
    16xlarge        m4.16xlarge            -                       n1-standard-64
    24xlarge        -                      m5.24xlarge             -

    * m5 instances not available in all regions and all zones. See --worker-type for more info.

  • --web-size value Size of Concourse web node. Can be small, medium, large, xlarge, 2xlarge (default: "small") [$WEB_SIZE]

    --web-size AWS Instance type GCP Instance type
    small t2.small n1-standard-1
    medium t2.medium n1-standard-2
    large t2.large n1-standard-4
    xlarge t2.xlarge n1-standard-8
    2xlarge t2.2xlarge n1-standard-16
  • --db-size value Size of Concourse Postgres instance. Can be small, medium, large, xlarge, 2xlarge, or 4xlarge (default: "small") [$DB_SIZE]

    Note that when changing the database size on an existing concourse-up deployment, the SQL instance will be scaled by Terraform, resulting in approximately 3 minutes of downtime.

    The following table shows the allowed database sizes and the corresponding AWS RDS & CloudSQL instance types

    --db-size AWS Instance type GCP Instance type
    small db.t2.small db-g1-small
    medium db.t2.medium db-custom-2-4096
    large db.m4.large db-custom-2-8192
    xlarge db.m4.xlarge db-custom-4-16384
    2xlarge db.m4.2xlarge db-custom-8-32768
    4xlarge db.m4.4xlarge db-custom-16-65536
  • --allow-ips value Comma separated list of IP addresses or CIDR ranges to allow access to (default: "0.0.0.0/0") [$ALLOW_IPS]

    Note: allow-ips governs what can access Concourse but not what can access the control plane (i.e. the BOSH director).

  • --github-auth-client-id value Client ID for a github OAuth application - Used for Github Auth [$GITHUB_AUTH_CLIENT_ID]

  • --github-auth-client-secret value Client Secret for a github OAuth application - Used for Github Auth [$GITHUB_AUTH_CLIENT_SECRET]

  • --add-tag key=value Add a tag to the VMs that form your concourse-up deployment. Can be used multiple times in a single deploy command.

  • --spot=value Use spot instances for workers. Can be true/false. Default is true.

    Concourse Up uses spot instances for workers as a cost saving measure. Users requiring lower risk may switch this feature off by setting --spot=false.

  • --preemptible=value Use preemptible instances for workers. Can be true/false. Default is true.

    Be aware that preemptible instances will go down at least once every 24 hours, so deployments with only one worker will experience downtime with this feature enabled. BOSH will resurrect failed workers automatically.

    spot and preemptible are interchangeable, so if either of them is set to false then interruptible instances will not be used, regardless of your IaaS. For example:

    # Results in an AWS deployment using non-spot workers
    concourse-up deploy --spot=true --preemptible=false <your-project-name>
    # Results in an AWS deployment using non-spot workers
    concourse-up deploy --preemptible=false <your-project-name>
    # Results in a GCP deployment using non-preemptible workers
    concourse-up deploy --iaas gcp --spot=false <your-project-name>
  • --zone Specify an availability zone [$ZONE] (cannot be changed after the initial deployment)

If any of the following five flags is set, all of the flags in this group that are required for your IaaS must also be set:

  • --vpc-network-range value Customise the VPC network CIDR to deploy into (required for AWS) [$VPC_NETWORK_RANGE]

  • --public-subnet-range value Customise public network CIDR (if IAAS is AWS must be within --vpc-network-range) (required) [$PUBLIC_SUBNET_RANGE]

  • --private-subnet-range value Customise private network CIDR (if IAAS is AWS must be within --vpc-network-range) (required) [$PRIVATE_SUBNET_RANGE]

  • --rds-subnet-range1 value Customise first rds network CIDR (must be within --vpc-network-range) (required for AWS) [$RDS_SUBNET_RANGE1]

  • --rds-subnet-range2 value Customise second rds network CIDR (must be within --vpc-network-range) (required for AWS) [$RDS_SUBNET_RANGE2]

    All the ranges above should be in IPv4/mask CIDR format. The sizes can vary, as long as vpc-network-range is big enough to contain all the others (when the IAAS is AWS). The smallest allowed CIDR for the public and private subnets is a /28; the smallest for the rds1 and rds2 subnets is a /29. See the example below.
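
    As a sketch (all CIDR ranges and sizes below are illustrative, and assume the AWS IAAS), a deploy into a custom VPC might look like:

    $ concourse-up deploy \
      --vpc-network-range 10.40.0.0/16 \
      --public-subnet-range 10.40.0.0/24 \
      --private-subnet-range 10.40.1.0/24 \
      --rds-subnet-range1 10.40.2.0/28 \
      --rds-subnet-range2 10.40.3.0/28 \
      --workers 2 \
      --worker-size large \
      <your-project-name>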

Info

To fetch information about your concourse-up deployment:

$ concourse-up info --json <your-project-name>

To load credentials into your environment from your concourse-up deployment:

$ eval "$(concourse-up info --env <your-project-name>)"

To check the expiry of the BOSH Director's NATS CA certificate:

$ concourse-up info --cert-expiry <your-project-name>

Warning: if your deployment is approaching a year old, it may stop working due to expired certificates. For more information, see issue #81.

Flags

All flags are optional

  • --json Output as json [$JSON]
  • --env Output environment variables
  • --cert-expiry Output the expiry of the BOSH director's NATS certificate

Destroy

To destroy your Concourse:

$ concourse-up destroy <your-project-name>

Maintain

Handles maintenance operations in concourse-up

Flags

All flags are optional

  • --renew-nats-cert Rotate the NATS certificate on the director

    Note that the NATS certificate is hardcoded to expire after 1 year. This command follows the instructions on bosh.io to rotate this certificate. This operation will cause downtime on your Concourse, as it performs multiple full recreates (see the example after the stage table below).

  • --stage value Specify the stage at which to start the NATS certificate renewal process. If not specified, the stage will be determined automatically. See the following table for details.

    Stage Description
    0 Adding new CA (create-env)
    1 Recreating VMs for the first time (recreate)
    2 Removing old CA (create-env)
    3 Recreating VMs for the second time (recreate)
    4 Cleaning up director-creds.yml
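
For example, a full rotation, or one resumed from a specific stage, can be run with:

$ concourse-up maintain --renew-nats-cert <your-project-name>
$ concourse-up maintain --renew-nats-cert --stage 2 <your-project-name>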

Self-update

When Concourse-up deploys Concourse, it adds a pipeline called concourse-up-self-update to the new Concourse. This pipeline continuously monitors our GitHub repo for new releases and updates Concourse in place whenever a new version of Concourse-up comes out.

This pipeline is paused by default, so just unpause it in the UI (or via fly, as shown below) to enable the feature.
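
Assuming a fly target named ci (as in the deploy example above), the pipeline can be unpaused from the command line with:

$ fly --target ci unpause-pipeline --pipeline concourse-up-self-update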

Upgrading manually

Patch releases of concourse-up are compiled, tested and released automatically whenever a new stemcell or component release appears on bosh.io.

To upgrade your Concourse, grab the latest release and run concourse-up deploy <your-project-name> again.

Metrics

Concourse-up automatically deploys InfluxDB, Riemann, and Grafana on the web node. You can access Grafana on port 3000 of your regular Concourse URL, using the same username and password as your Concourse admin user. We include a default dashboard that tracks:

  • Build times
  • CPU usage
  • Containers
  • Disk usage

Credential Management

Concourse-up deploys the credhub service alongside Concourse and configures Concourse to use it. More detail on how credhub integrates with Concourse can be found here. You can log into credhub by running $ eval "$(concourse-up info --env --region $region $deployment)".
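
As a sketch, assuming the credhub CLI is installed locally, a deployment named ci, and an illustrative secret path under the /concourse/<team> convention, a session might look like:

$ eval "$(concourse-up info --env ci)"
$ credhub find
$ credhub set --name /concourse/main/my-secret --type value --value 'shhh'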

Firewall

Concourse-up normally allows incoming traffic from any address to reach your web node. You can use the --allow-ips flag to add firewall rules that restrict this. For example, to deploy Concourse-up and only allow traffic from your local machine, you could use the command shown below. --allow-ips takes a comma-separated list of IP addresses or CIDR ranges.
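
Concretely (project name illustrative):

$ concourse-up deploy --allow-ips "$(dig +short myip.opendns.com @resolver1.opendns.com)" <your-project-name>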

Estimated Cost

By default, concourse-up deploys to the AWS eu-west-1 (Ireland) region or the GCP europe-west1 (Belgium) region, and uses spot instances for large and xlarge Concourse VMs. The estimated monthly cost is as follows:

AWS

Component Size Count Price (USD)
BOSH director t2.small 1 18.30
Web Server t2.small 1 18.30
Worker m4.xlarge (spot) 1 ~50.00
RDS instance db.t2.small 1 28.47
NAT Gateway - 1 35.15
gp2 storage 20GB (bosh, web) 2 4.40
gp2 storage 200GB (worker) 1 22.00
Total 176.62

GCP

Component Size Count Price (USD)
BOSH director n1-standard-1 1 26.73
Web Server n1-standard-1 1 26.73
Worker n1-standard-4 (preemptible) 1 32.12
DB instance db-g1-small 1 27.25
NAT Gateway n1-standard-1 1 26.73
disk storage 20GB (bosh, web) + 200GB (worker) - 40.80
Total 180.35

What it does

concourse-up first creates an S3 or GCS bucket to store its own configuration and saves a config.json file there.
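
If you want to inspect that configuration on AWS, something like the following works (optional; assumes the AWS CLI is configured, and the bucket name shown is illustrative - the real name includes your project name, region, and any namespace):

$ aws s3 ls | grep concourse-up
$ aws s3 cp s3://concourse-up-ci-eu-west-1-config/config.json -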

It then uses Terraform to deploy the following infrastructure:

  • AWS
    • Key pair
    • S3 bucket for the blobstore
    • IAM user that can access the blobstore
      • IAM access key
      • IAM user policy
    • IAM user that can deploy EC2 instances
      • IAM access key
      • IAM user policy
    • VPC
    • Internet gateway
    • Route for internet_access
    • NAT gateway
    • Route table for private
    • Subnet for public
    • Subnet for private
    • Route table association for private
    • Route53 record for Concourse
    • EIP for director, ATC, and NAT
    • Security groups for director, vms, RDS, and ATC
    • Route table for RDS
    • Route table associations for RDS
    • Subnets for RDS
    • DB subnet group
    • DB instance
  • GCP
    • A DNS A record pointing to the ATC IP
    • A Compute route for the nat instance
    • A Compute instance for the nat
    • A Compute network
    • Public and Private Compute subnetworks
    • Compute firewalls for director, nat, atc-one, atc-two, vms, atc-three, internal, and sql
    • A Service account for bosh
    • A Service account key for bosh
    • A Project iam member for bosh
    • Compute addresses for the ATC and Director
    • A Sql database instance
    • A Sql database
    • A Sql user

Once the terraform step is complete, concourse-up deploys a BOSH director on a t2.small/n1-standard-1 instance, and then uses that to deploy a Concourse with the following settings:

  • One t2.small/n1-standard-1 for the Concourse web server
  • One m4.xlarge spot/n1-standard-4 preemptible instance used as a Concourse worker
  • Access over HTTP and HTTPS using a user-provided certificate, or an auto-generated self-signed certificate if one isn't provided.

Using a dedicated AWS IAM account

If you'd like to run concourse-up with its own IAM account, create a user with the following permissions:

Using a dedicated GCP IAM member

An IAM primitive role of roles/owner for the target GCP project is required.

Project

CI Pipeline (deployed with Concourse Up!)

Development

Pre-requisites

To build and test you'll need:

  • Golang 1.11+
  • github.com/mattn/go-bindata installed

Building locally

concourse-up uses golang compile-time variables to set the release versions it uses. To build locally use the build_local.sh script, rather than running go build.

You will also need to clone concourse-up-ops to the same level as concourse-up to get the manifest and ops files necessary for building. Check the latest release of concourse-up for the appropriate tag of concourse-up-ops (see the example below).
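
A minimal local build, assuming both repositories are cloned side by side, might look like this (depending on your Go setup you may also need a GOPATH):

$ git clone https://github.com/EngineerBetter/concourse-up.git
$ git clone https://github.com/EngineerBetter/concourse-up-ops.git
$ cd concourse-up
$ ./build_local.sh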

Tests

Tests use the Ginkgo Go testing framework. The tests require you to have set up AWS authentication locally.

Install ginkgo and run the tests with:

$ go get github.com/onsi/ginkgo/ginkgo
$ ginkgo -r

Go linting, shell linting, and unit tests can be run together in the same docker image CI uses with ./run_tests_local.sh. This should be done before committing or raising a PR.

Bumping Manifest/Ops File versions

The pipeline listens for new patch or minor versions of manifest.yml and ops/versions.json coming from the concourse-up-ops repo. To pick up a new major version, first make sure it exists in that repo, then modify tag_filter: X.*.* in the concourse-up-ops resource, where X is the major version you want to pin to.

concourse-up's People

Contributors

crsimmons, danyoung, dark5un, engineerbetterci, evadinckel, irbekrm, jessicastenning, jpluscplusm, peterellisjones, saphmb, takeyourhatoff, will-gant



concourse-up's Issues

re-deploying a new SSL cert

Hi, first of all, congrats with concourse-up, it's brilliant software, infrastructure built as it should be.

We are in a bit of an awkward situation, I've created the cluster with a wrong certificate, which is expired now and I'd like to replace it with a new one.

Ideally, I'd use the AWS one, but as I read here #24 it's not straightforward, so I am fine creating a new one with let's encrypt or use a self-signed.

But how? Do I have to tear down the whole thing and start it again?

Will I have to setup again all the pipelines or there's a way to just tear down the web worker?

Thanks, and sorry if I missed some docs explaining this, please point me to those!

Support other instance types

Currently, concourse-up is hard coded to use m3 instances. m3s are not available in some of the newer regions (e.g. us-east-2 only has m4s). Need some ability to pass in the preferred types.

Make the internal configuration region configurable

The documentation states that the internal configuration is always stored in eu-west-1. This should be configurable as some organizations have regulatory requirements on where they can store data.

I was able to change the hard coded region to my preferred region for the moment.

Support using AWS certificates

It'd be really handy to have a version where we use an ELB with an AWS issued certificate. I may try and build that functionality out and submit a PR if that's something you'd be open to.

As far as I can tell I'd need to update the Bosh manifest to use http on the actual web server and grafana then just add an ELB and redo the terraform so that the ELB is what we expose to the web.

credhub cli timeout breaks security

We are experiencing a 30 second timeout for the credhub login.

This behaviour encourages all developers to set the password in their bashrc/zshrc files, meaning security is flawed as the password is suddenly exposed in files on every user's computer.
This behaviour is now being observed after upgrading to 0.8.4, which includes credhub 1.7.2.

Is it possible to bump the token timeout to a more reasonable length?

See cloudfoundry/credhub#32 for related info.

Uploading stemcell 'bosh-aws-xen-hvm-ubuntu-trusty-go_agent/3468'... Failed (00:00:00)

// errors out and exits without finishing

GENERATING BOSH DIRECTOR CERTIFICATE (34.234.219.169, 10.0.0.6)
Deployment manifest: '/tmp/concourse-up593898250/director.yml'
Deployment state: '/tmp/concourse-up593898250/director-state.json'

Started validating
Downloading release 'bosh'... Skipped [Found in local cache] (00:00:00)
Validating release 'bosh'... Finished (00:00:01)
Downloading release 'bosh-aws-cpi'... Skipped [Found in local cache] (00:00:00)
Validating release 'bosh-aws-cpi'... Finished (00:00:00)
Validating cpi release... Finished (00:00:00)
Validating deployment manifest... Finished (00:00:00)
Downloading stemcell... Skipped [Found in local cache] (00:00:00)
Validating stemcell... Finished (00:00:00)
Finished validating (00:00:01)

Started installing CPI
Compiling package 'ruby_aws_cpi/dc02a5fa6999e95281b7234d4098640b0b90f1e6'... Finished (00:01:35)
Compiling package 'bosh_aws_cpi/04ca340b3d64ea01aa84bd764cc574805785e97c'... Finished (00:00:02)
Installing packages... Finished (00:00:00)
Rendering job templates... Finished (00:00:00)
Installing job 'aws_cpi'... Finished (00:00:00)
Finished installing CPI (00:01:38)

Starting registry... Finished (00:00:00)
Uploading stemcell 'bosh-aws-xen-hvm-ubuntu-trusty-go_agent/3468'... Failed (00:00:00)
Stopping registry... Finished (00:00:00)
Cleaning up rendered CPI jobs... Finished (00:00:00)

creating stemcell (bosh-aws-xen-hvm-ubuntu-trusty-go_agent 3468):
CPI 'create_stemcell' method responded with error: CmdError{"type":"Unknown","message":"undefined method `set_api' for #\u003cClass:0x00562927945308\u003e","ok_to_retry":false}

Exit code 1
exit status 1

Automatically add the web instance IP to `allow-ips`

Hey, not sure if this is something that is very specific to the way we are working, so I though it best to ask.

We have a self-pausing pipeline in concourse. To do so it uses fly to login to concourse and check the status of a few things before pausing the pipeline or not. This was fine in testing until we added a firewall to the web instance using --allow-ips.

Would it be useful to automatically add the IP of the web instance when using this option? This means that I would only need to concourse-up once when creating a new instance rather than do it once to get everything up and running, then again to add the firewall settings.

Alternatively, are we doing it wrong? Is there another way to set the target on fly instead of using the external URL when it's running locally, or maybe not even use fly?

Thanks a lot!

NoMethodError

build does not complete

concourse-up deploy --region us-east-1 ci

gives errors -- part of output below

In file included from /usr/include/stdio.h:27:0,
from ./include/ruby/defines.h:26,
from ./include/ruby/ruby.h:29,
from dln.c:13:
/usr/include/features.h:330:4: warning: #warning _FORTIFY_SOURCE requires compiling with optimization (-O) [-Wcpp]

warning _FORTIFY_SOURCE requires compiling with optimization (-O)

^~~~~~~

In file included from /usr/include/stdio.h:27:0,
from ./include/ruby/defines.h:26,
from ./include/ruby/ruby.h:29,
from ./include/ruby.h:33,
from internal.h:15,
from localeinit.c:12:
/usr/include/features.h:330:4: warning: #warning _FORTIFY_SOURCE requires compiling with optimization (-O) [-Wcpp]

warning _FORTIFY_SOURCE requires compiling with optimization (-O)

^~~~~~~

In file included from /usr/include/stdio.h:27:0,
from ./include/ruby/defines.h:26,
from ./include/ruby/ruby.h:29,
from loadpath.c:13:
/usr/include/features.h:330:4: warning: #warning _FORTIFY_SOURCE requires compiling with optimization (-O) [-Wcpp]

warning _FORTIFY_SOURCE requires compiling with optimization (-O)

^~~~~~~

In file included from /usr/include/stdio.h:27:0,
from ./include/ruby/defines.h:26,
from ./include/ruby/ruby.h:29,
from prelude.c:6:
/usr/include/features.h:330:4: warning: #warning _FORTIFY_SOURCE requires compiling with optimization (-O) [-Wcpp]

warning _FORTIFY_SOURCE requires compiling with optimization (-O)

^~~~~~~
  • make install
    Using built-in specs.
    COLLECT_GCC=gcc
    COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-amazon-linux/6.4.1/lto-wrapper
    Target: x86_64-amazon-linux
    Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,fortran,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --disable-libgcj --with-default-libstdcxx-abi=gcc4-compatible --with-isl --enable-libmpx --enable-libsanitizer --enable-libcilkrts --enable-libatomic --enable-libquadmath --enable-libitm --enable-gnu-indirect-function --with-tune=generic --with-arch_32=x86-64 --build=x86_64-amazon-linux
    Thread model: posix
    gcc version 6.4.1 20170727 (Red Hat 6.4.1-1) (GCC)
  • popd
  • echo 'Installing rubygems'
  • tar zxvf ruby_aws_cpi/rubygems-2.6.2.tgz
  • pushd rubygems-2.6.2
  • /root/.bosh/installations/bd909676-712e-438e-5362-54f0f9f4a9d6/packages/ruby_aws_cpi/bin/ruby setup.rb
  • popd
  • /root/.bosh/installations/bd909676-712e-438e-5362-54f0f9f4a9d6/packages/ruby_aws_cpi/bin/gem install ruby_aws_cpi/bundler-1.15.0.gem --local --no-ri --no-rdoc
    ERROR: Loading command: install (LoadError)
    cannot load such file -- zlib
    ERROR: While executing gem ... (NoMethodError)
    undefined method `invoke_with_build_args' for nil:NilClass
    ':
    exit status 1

Exit code 1
exit status 1

Supporting Remote Workers

Hi,

I need to be able to deploy to a local resource, and instead of opening up an inbound connection to our network I'm looking at using a concourse worker to facilitate the deployment.

In order to get this to work, I had to add inbound port 2222 to the concourse-up-ant-ci-atc aws security group.

However, the self-update just blasted this change away. ;-)

What's the reason this port hasn't been exposed and is there a way I can have this change persisted across releases?

Cheers,
Adam

Concourse died - what's the best procedure to bring it back up?

Hi, more a question than an issue, perhaps some docs on the subject would be great.

our monitoring says that our concourse went down

Pingdom DOWN alert:
Concourse (xxx) is down since 16/12/2017 00:51:15.
Reason: Network is unreachable

I didn't bother investigating and I've just ran a concourse-up deploy

Is that a 'decent' approach?

Is it possible that a new release of concourse-up broke the web worker?

Thanks.

Accessing concourse-up's bosh

I'm still going through the configuration and set-up but is it possible to connect to concourse-up's bosh instance to pull logs from the jobs?

Creating concourse in wrong region

Hi,

I set AWS_DEFAULT_REGION=us-east-1, yet it created everything in "eu-west-1".

Shouldn't it create the bosh director and concourse in AWS_DEFAULT_REGION that I set?

I probably made some mistake somewhere. Can you please help me point in the right direction?

Thanks,
--ajay

Support other IaaSes

Given this is built on top of Terraform, one would imagine it is feasible to support other IaaSes, e.g.:

  • vSphere
  • OpenStack
  • GCP
  • Azure

Are you open to this kind of evolution? I would imagine it would have implications for the CLI flags you currently support, which appear to be mildly coupled with AWS. Thoughts?

fresh install logs multiple errors in DB logs

We had troubles with our install (the web instance got gradually slower). Due to lots of errors in the DB log we wiped the installation and set up a fresh new one. However, even before logging into our new Concourse instance, we observed lots of errors in the log.

concourse-up version 0.8.4.

Is this related to concourse, or the product itself?

2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: relation “users” already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: CREATE TABLE USERS ( id char(36) not null primary key, created TIMESTAMP default current_timestamp, lastModified TIMESTAMP default current_timestamp, version BIGINT default 0, username VARCHAR(255) not null, password VARCHAR(255) not null, email VARCHAR(255) not null, authority BIGINT default 0, givenName VARCHAR(255) not null, familyName VARCHAR(255) not null )
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: relation “unique_uk_1_1" already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: CREATE UNIQUE INDEX unique_uk_1_1 on users (LOWER(username))
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: column “active” of relation “users” already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: ALTER TABLE USERS ADD COLUMN active BOOLEAN default true
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: column “phonenumber” of relation “users” already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: ALTER TABLE USERS ADD COLUMN phoneNumber VARCHAR(255)
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: column “authorities” of relation “users” already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: ALTER TABLE USERS ADD COLUMN authorities VARCHAR(1024) default ‘uaa.user’
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: column “verified” of relation “users” already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: ALTER TABLE USERS ADD COLUMN VERIFIED BOOLEAN
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: relation “sec_audit” already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: CREATE TABLE SEC_AUDIT ( principal_id char(36) not null, event_type INTEGER not null, origin VARCHAR(255) not null, event_data VARCHAR(255), created TIMESTAMP default current_timestamp )
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: relation “audit_principal” already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: CREATE INDEX audit_principal ON SEC_AUDIT (principal_id)
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: relation “audit_created” already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: CREATE INDEX audit_created ON SEC_AUDIT (created)
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: relation “oauth_client_details” already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: CREATE TABLE OAUTH_CLIENT_DETAILS ( client_id VARCHAR(256) PRIMARY KEY, resource_ids VARCHAR(1024), client_secret VARCHAR(256), scope VARCHAR(256), authorized_grant_types VARCHAR(256), web_server_redirect_uri VARCHAR(1024), authorities VARCHAR(256), access_token_validity INTEGER )
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: column “refresh_token_validity” of relation “oauth_client_details” already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: ALTER TABLE OAUTH_CLIENT_DETAILS ADD COLUMN refresh_token_validity INTEGER default 0
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: column “additional_information” of relation “oauth_client_details” already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: ALTER TABLE OAUTH_CLIENT_DETAILS ADD COLUMN additional_information VARCHAR(4096)
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: relation “groups” already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: CREATE TABLE GROUPS ( id VARCHAR(36) not null primary key, displayName VARCHAR(255) not null, created TIMESTAMP default current_timestamp not null, lastModified TIMESTAMP default current_timestamp not null, version BIGINT default 0 not null, constraint unique_uk_2 unique(displayName) )
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: relation “group_membership” already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: CREATE TABLE GROUP_MEMBERSHIP ( group_id VARCHAR(36) not null, member_id VARCHAR(36) not null, member_type VARCHAR(8) not null default ‘USER’, authorities VARCHAR(255) not null default ‘READ’, added TIMESTAMP default current_timestamp not null, primary key (group_id, member_id) )
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: relation “oauth_code” already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: create table oauth_code ( code VARCHAR(256), authentication BYTEA )
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: relation “authz_approvals” already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: CREATE TABLE AUTHZ_APPROVALS ( userName VARCHAR(36) not null, clientId VARCHAR(36) not null, scope VARCHAR(255) not null, expiresAt TIMESTAMP default current_timestamp not null, status VARCHAR(50) default ‘APPROVED’ not null, lastModifiedAt TIMESTAMP default current_timestamp not null, primary key (userName, clientId, scope) )
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:ERROR: relation “external_group_mapping” already exists
2018-03-27 11:00:05 UTC:10.0.0.7(40282):adminhr7wdjl3mhlw43o7jwlw@uaa:[7085]:STATEMENT: CREATE TABLE external_group_mapping ( group_id VARCHAR(36) not null, external_group VARCHAR(255) not null, added TIMESTAMP default current_timestamp not null, primary key (group_id, external_group) )
2018-03-27 11:01:01 UTC:10.0.0.7(40546):adminhr7wdjl3mhlw43o7jwlw@credhub:[7344]:WARNING: there is already a transaction in progress
2018-03-27 11:01:10 UTC::@:[3739]:LOG: checkpoint starting: time
2018-03-27 11:01:35 UTC:10.0.0.7(41666):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[7411]:ERROR: relation “migration_version” does not exist at character 21
2018-03-27 11:01:35 UTC:10.0.0.7(41666):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[7411]:STATEMENT: SELECT version FROM migration_version
2018-03-27 11:01:36 UTC:10.0.0.7(41684):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[7415]:ERROR: relation “migration_version” does not exist at character 21
2018-03-27 11:01:36 UTC:10.0.0.7(41684):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[7415]:STATEMENT: SELECT version FROM migration_version
2018-03-27 11:02:03 UTC::@:[3739]:LOG: checkpoint complete: wrote 531 buffers (0.2%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=53.441 s, sync=0.004 s, total=53.452 s; sync files=305, longest=0.003 s, average=0.000 s; distance=18165 kB, estimate=18165 kB
2018-03-27 11:06:10 UTC::@:[3739]:LOG: checkpoint starting: time
2018-03-27 11:07:09 UTC::@:[3739]:LOG: checkpoint complete: wrote 586 buffers (0.2%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=58.977 s, sync=0.002 s, total=58.986 s; sync files=249, longest=0.002 s, average=0.000 s; distance=13641 kB, estimate=17713 kB
2018-03-27 11:11:10 UTC::@:[3739]:LOG: checkpoint starting: time
2018-03-27 11:11:19 UTC::@:[3739]:LOG: checkpoint complete: wrote 92 buffers (0.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=9.261 s, sync=0.004 s, total=9.271 s; sync files=71, longest=0.004 s, average=0.000 s; distance=16420 kB, estimate=17584 kB
2018-03-27 11:16:10 UTC::@:[3739]:LOG: checkpoint starting: time
2018-03-27 11:16:20 UTC::@:[3739]:LOG: checkpoint complete: wrote 100 buffers (0.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=10.067 s, sync=0.003 s, total=10.076 s; sync files=70, longest=0.003 s, average=0.000 s; distance=16340 kB, estimate=17459 kB
2018-03-27 11:21:10 UTC::@:[3739]:LOG: checkpoint starting: time
2018-03-27 11:21:23 UTC::@:[3739]:LOG: checkpoint complete: wrote 130 buffers (0.1%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=13.086 s, sync=0.003 s, total=13.094 s; sync files=87, longest=0.003 s, average=0.000 s; distance=16667 kB, estimate=17380 kB
2018-03-27 11:23:26 UTC:10.0.0.6(45890):adminhr7wdjl3mhlw43o7jwlw@bosh:[11328]:ERROR: database “concourse_atc” already exists
2018-03-27 11:23:26 UTC:10.0.0.6(45890):adminhr7wdjl3mhlw43o7jwlw@bosh:[11328]:STATEMENT: CREATE DATABASE concourse_atc
2018-03-27 11:23:26 UTC:10.0.0.6(45890):adminhr7wdjl3mhlw43o7jwlw@bosh:[11328]:ERROR: database “uaa” already exists
2018-03-27 11:23:26 UTC:10.0.0.6(45890):adminhr7wdjl3mhlw43o7jwlw@bosh:[11328]:STATEMENT: CREATE DATABASE uaa
2018-03-27 11:23:26 UTC:10.0.0.6(45890):adminhr7wdjl3mhlw43o7jwlw@bosh:[11328]:ERROR: database “credhub” already exists
2018-03-27 11:23:26 UTC:10.0.0.6(45890):adminhr7wdjl3mhlw43o7jwlw@bosh:[11328]:STATEMENT: CREATE DATABASE credhub
2018-03-27 11:23:53 UTC:10.0.0.7(40544):adminhr7wdjl3mhlw43o7jwlw@credhub:[7343]:LOG: could not receive data from client: Connection reset by peer
2018-03-27 11:23:53 UTC:10.0.0.7(40546):adminhr7wdjl3mhlw43o7jwlw@credhub:[7344]:LOG: could not receive data from client: Connection reset by peer
2018-03-27 11:23:53 UTC:10.0.0.7(40540):adminhr7wdjl3mhlw43o7jwlw@credhub:[7341]:LOG: could not receive data from client: Connection reset by peer
2018-03-27 11:23:53 UTC:10.0.0.7(40542):adminhr7wdjl3mhlw43o7jwlw@credhub:[7342]:LOG: could not receive data from client: Connection reset by peer
2018-03-27 11:23:53 UTC:10.0.0.7(40538):adminhr7wdjl3mhlw43o7jwlw@credhub:[7340]:LOG: could not receive data from client: Connection reset by peer
2018-03-27 11:23:53 UTC:10.0.0.7(40536):adminhr7wdjl3mhlw43o7jwlw@credhub:[7339]:LOG: could not receive data from client: Connection reset by peer
2018-03-27 11:23:53 UTC:10.0.0.7(40532):adminhr7wdjl3mhlw43o7jwlw@credhub:[7337]:LOG: could not receive data from client: Connection reset by peer
2018-03-27 11:23:53 UTC:10.0.0.7(40534):adminhr7wdjl3mhlw43o7jwlw@credhub:[7338]:LOG: could not receive data from client: Connection reset by peer
2018-03-27 11:23:53 UTC:10.0.0.7(40530):adminhr7wdjl3mhlw43o7jwlw@credhub:[7336]:LOG: could not receive data from client: Connection reset by peer
2018-03-27 11:23:53 UTC:10.0.0.7(40528):adminhr7wdjl3mhlw43o7jwlw@credhub:[7335]:LOG: could not receive data from client: Connection reset by peer
2018-03-27 11:24:36 UTC:10.0.0.7(52032):adminhr7wdjl3mhlw43o7jwlw@uaa:[11517]:ERROR: duplicate key value violates unique constraint “oauth_client_details_pkey”
2018-03-27 11:24:36 UTC:10.0.0.7(52032):adminhr7wdjl3mhlw43o7jwlw@uaa:[11517]:DETAIL: Key (client_id, identity_zone_id)=(admin, uaa) already exists.
2018-03-27 11:24:36 UTC:10.0.0.7(52032):adminhr7wdjl3mhlw43o7jwlw@uaa:[11517]:STATEMENT: insert into oauth_client_details (client_secret, resource_ids, scope, authorized_grant_types, web_server_redirect_uri, authorities, access_token_validity, refresh_token_validity, additional_information, autoapprove, lastmodified, required_user_groups, client_id, identity_zone_id, created_by) values ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15)
2018-03-27 11:24:36 UTC:10.0.0.7(52032):adminhr7wdjl3mhlw43o7jwlw@uaa:[11517]:ERROR: duplicate key value violates unique constraint “oauth_client_details_pkey”
2018-03-27 11:24:36 UTC:10.0.0.7(52032):adminhr7wdjl3mhlw43o7jwlw@uaa:[11517]:DETAIL: Key (client_id, identity_zone_id)=(credhub_cli, uaa) already exists.
2018-03-27 11:24:36 UTC:10.0.0.7(52032):adminhr7wdjl3mhlw43o7jwlw@uaa:[11517]:STATEMENT: insert into oauth_client_details (client_secret, resource_ids, scope, authorized_grant_types, web_server_redirect_uri, authorities, access_token_validity, refresh_token_validity, additional_information, autoapprove, lastmodified, required_user_groups, client_id, identity_zone_id, created_by) values ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15)
2018-03-27 11:24:36 UTC:10.0.0.7(52032):adminhr7wdjl3mhlw43o7jwlw@uaa:[11517]:ERROR: duplicate key value violates unique constraint “oauth_client_details_pkey”
2018-03-27 11:24:36 UTC:10.0.0.7(52032):adminhr7wdjl3mhlw43o7jwlw@uaa:[11517]:DETAIL: Key (client_id, identity_zone_id)=(atc_to_credhub, uaa) already exists.
2018-03-27 11:24:36 UTC:10.0.0.7(52032):adminhr7wdjl3mhlw43o7jwlw@uaa:[11517]:STATEMENT: insert into oauth_client_details (client_secret, resource_ids, scope, authorized_grant_types, web_server_redirect_uri, authorities, access_token_validity, refresh_token_validity, additional_information, autoapprove, lastmodified, required_user_groups, client_id, identity_zone_id, created_by) values ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15)
2018-03-27 11:24:37 UTC:10.0.0.7(52032):adminhr7wdjl3mhlw43o7jwlw@uaa:[11517]:ERROR: duplicate key value violates unique constraint “external_group_unique_key”
2018-03-27 11:24:37 UTC:10.0.0.7(52032):adminhr7wdjl3mhlw43o7jwlw@uaa:[11517]:DETAIL: Key (origin, external_group, group_id)=(ldap, cn=test_org,ou=people,o=springsource,o=org, f90855f9-bce6-4827-bc8a-b07dc65895d4) already exists.
2018-03-27 11:24:37 UTC:10.0.0.7(52032):adminhr7wdjl3mhlw43o7jwlw@uaa:[11517]:STATEMENT: insert into external_group_mapping ( group_id,external_group,added,origin,identity_zone_id ) values ($1,lower($2),$3,$4,$5)
2018-03-27 11:26:00 UTC:10.0.0.7(54468):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11821]:ERROR: relation “migration_version” does not exist at character 21
2018-03-27 11:26:00 UTC:10.0.0.7(54468):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11821]:STATEMENT: SELECT version FROM migration_version
2018-03-27 11:26:00 UTC:10.0.0.7(54478):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11825]:ERROR: relation “migration_version” does not exist at character 21
2018-03-27 11:26:00 UTC:10.0.0.7(54478):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11825]:STATEMENT: SELECT version FROM migration_version
2018-03-27 11:26:10 UTC::@:[3739]:LOG: checkpoint starting: time
2018-03-27 11:26:30 UTC:10.0.0.7(54482):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11827]:ERROR: update or delete on table “volumes” violates foreign key constraint “volumes_parent_id_fkey” on table “volumes”
2018-03-27 11:26:30 UTC:10.0.0.7(54482):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11827]:DETAIL: Key (id, state)=(10, created) is still referenced from table “volumes”.
2018-03-27 11:26:30 UTC:10.0.0.7(54482):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11827]:STATEMENT: UPDATE volumes SET state = $1 WHERE (id = $2 AND (state = $3 OR state = $4))
2018-03-27 11:26:30 UTC:10.0.0.7(54484):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11828]:ERROR: update or delete on table “volumes” violates foreign key constraint “volumes_parent_id_fkey” on table “volumes”
2018-03-27 11:26:30 UTC:10.0.0.7(54484):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11828]:DETAIL: Key (id, state)=(21, created) is still referenced from table “volumes”.
2018-03-27 11:26:30 UTC:10.0.0.7(54484):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11828]:STATEMENT: UPDATE volumes SET state = $1 WHERE (id = $2 AND (state = $3 OR state = $4))
2018-03-27 11:26:30 UTC:10.0.0.7(54482):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11827]:ERROR: update or delete on table “volumes” violates foreign key constraint “volumes_parent_id_fkey” on table “volumes”
2018-03-27 11:26:30 UTC:10.0.0.7(54482):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11827]:DETAIL: Key (id, state)=(1, created) is still referenced from table “volumes”.
2018-03-27 11:26:30 UTC:10.0.0.7(54482):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11827]:STATEMENT: UPDATE volumes SET state = $1 WHERE (id = $2 AND (state = $3 OR state = $4))
2018-03-27 11:26:43 UTC::@:[3739]:LOG: checkpoint complete: wrote 328 buffers (0.1%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=33.010 s, sync=0.003 s, total=33.022 s; sync files=220, longest=0.002 s, average=0.000 s; distance=16404 kB, estimate=17282 kB
2018-03-27 11:27:00 UTC:10.0.0.7(54484):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11828]:ERROR: update or delete on table “volumes” violates foreign key constraint “volumes_parent_id_fkey” on table “volumes”
2018-03-27 11:27:00 UTC:10.0.0.7(54484):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11828]:DETAIL: Key (id, state)=(10, created) is still referenced from table “volumes”.
2018-03-27 11:27:00 UTC:10.0.0.7(54484):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11828]:STATEMENT: UPDATE volumes SET state = $1 WHERE (id = $2 AND (state = $3 OR state = $4))
2018-03-27 11:27:00 UTC:10.0.0.7(57454):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[12003]:ERROR: update or delete on table “volumes” violates foreign key constraint “volumes_parent_id_fkey” on table “volumes”
2018-03-27 11:27:00 UTC:10.0.0.7(57454):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[12003]:DETAIL: Key (id, state)=(21, created) is still referenced from table “volumes”.
2018-03-27 11:27:00 UTC:10.0.0.7(57454):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[12003]:STATEMENT: UPDATE volumes SET state = $1 WHERE (id = $2 AND (state = $3 OR state = $4))
2018-03-27 11:27:00 UTC:10.0.0.7(54484):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11828]:ERROR: update or delete on table “volumes” violates foreign key constraint “volumes_parent_id_fkey” on table “volumes”
2018-03-27 11:27:00 UTC:10.0.0.7(54484):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11828]:DETAIL: Key (id, state)=(1, created) is still referenced from table “volumes”.
2018-03-27 11:27:00 UTC:10.0.0.7(54484):adminhr7wdjl3mhlw43o7jwlw@concourse_atc:[11828]:STATEMENT: UPDATE volumes SET state = $1 WHERE (id = $2 AND (state = $3 OR state = $4))

AWS_PROFILE support

I'm trying to setup a new concourse instance but I need to switch AWS role to get full access to the target env.

AWS_PROFILE="alternate_profile" AWS_ACCESS_KEY_ID="****" \
AWS_SECRET_ACCESS_KEY="******" \
concourse-up deploy --region us-east-1  concourse

Profile does not seem to take effect :

Forbidden: Forbidden
        status code: 403, request id: ******

Also specifying AWS_SDK_LOAD_CONFIG=1 did not help.

It would be nice if concourse-up used the default profile credentials or those from the specified AWS_PROFILE so that KEYs vars do no need to be specified.

Upgrade concourse-up to 3.13

concourse-up is falling behind the concourse releases.
Recently 3.13.0 was released, concourse-up is still on 3.9.2

We are seriously affected by performance issues due to the Docker overlay driver, which has been fixed/reverted back to btrfs in 3.10 and upwards.

Can you provide some insight into your roadmap for upgrading concourse-up?

Installation/Startup instructions

I think I must be missing something!

I am trying out concourse for our team, and am new to concourse, bosh and go. This project seemed like the easiest way to get up and running in AWS, but I can't work out how to get it started.

Do I need to run build_local.sh or go build? Do I need to set a GOPATH? Out of the box running concourse-up deploy xyz doesn't match up to any command in the repo from what I can see.

Also, if I am running this on Bash for Windows, am I going to have a Bad Time? :)

This is the error message I get with both build_local.sh and go build:

main.go:9:2: cannot find package "github.com/EngineerBetter/concourse-up/bosh" in any of:
        /usr/lib/go/src/pkg/github.com/EngineerBetter/concourse-up/bosh (from $GOROOT)
        ($GOPATH not set)

Thanks. Really keen to see how Concourse can help visualise our pipelines!

add IAM rule to workers

This isn't a problem with concourse-up, I was just hoping someone familiar with it may be able to help me quickly.
I need to be able to set a IAM role to my workers so they have access to certain resources easily. I have gone through bosh docs and found iam_instance_profile which I believe I need to set to the role I want. But I can not for the life of me figure out which section to add it. I have tried many and none work. If anyone can help me out I would appreciate it.

thanks!

Can not build locally

Current error on build:

./main.go:28:15: cannot use commands.Commands (type []"github.com/EngineerBetter/concourse-up/vendor/gopkg.in/urfave/cli.v1".Command) as type []"gopkg.in/urfave/cli.v1".Command in assignment
./main.go:29:12: cannot use commands.GlobalFlags (type []"github.com/EngineerBetter/concourse-up/vendor/gopkg.in/urfave/cli.v1".Flag) as type []"gopkg.in/urfave/cli.v1".Flag in assignment

I am having to build locally because I want to use my own VPC and subnets for this. I have edited the terraform main.tf to accommodate this.

Any help would be greatly appreciated.

BTW: For those new to Go (me), you should probably list the dependencies needed for the build, including all the go get github.com/* commands, as it took me a while to figure it out.

Thanks

Trouble connecting to existing deployment from new computer

Hello, about a month ago I deployed a concourse ci environment with concourse-up, but now that I am on a new computer, when I run the concourse-up info command, I get an error saying it can't find the bucket in s3. I have set up my access credentials for aws correctly, and can even see the bucket when I run "aws s3api list-buckets". Is there a way I can get concourse-up to recognize my deployment?

can not hijack a job

when i run
fly -t test hijack -j api-dev/build-api-image it returns a list of containers i can hijack but when i choose one it just sits there. eventually dying with the error.

websocket: unexpected reserved bits 0x40

everything else works, i just can't hijack a job.

Improve error message when S3 bucket name is taken

Was struggling for a while with running concourse-up deploy <projectname>. And got stuck with this error message.

Forbidden: Forbidden
	status code: 403, request id: 1234567890123456

After a lot of head-scratching we found out this was because the S3 bucket name was already taken. It looks like the bucket name is constructed like this.

concourse-up-<projectname>-<region>-blobstore
concourse-up-<projectname>-<region>-config

Probably a lot of people will try to use "ci" or something similar as project name and get the same error as I did. A better error message would've saved me some time on debugging.

Cheers,
Marius

credhub login broken

We recently upgraded our setup using concourse-up deploy. This resulted in the credhub CLI being unable to login to the credhub server any more (it fails with "The provided credentials are incorrect. Please validate your input and retry your request."). Assuming it was an upgrade glitch, we deployed an entirely new concourse-upd instance in a different AWS region; alas, even this virgin Concourse will not allow credhub logins.

Digging a little deeper, we can see the credhub-cli user in UAA; indeed, we can login using the credentials if we hit the UAA port using a browser. Changing the password with uaac does nothing; moreover, the user doesn't appear to be locked, or inactive. It seems to be part of the credhub groups.

Debugging the UAA login within the credhub code, all I can see is UAA returning 401 to the (seemingly sensible) request to login to the credhub_cli client.

Any advice would be most welcome.

RDS filled up

We use concourse-up to manage our Concourse. Our usage is, I believe, fairly mundane. A dozen or so pipelines, with a reasonable amount of credentials managed by the bundled credhub. We have three teams (including main), authenticated via Github Oauth.

Our setup failed this morning. Manifestations were credhub-cli rejecting logins with “bad credentials”, and git resource checks failing. The git resource was complaining about pgsql disk space usage; alas, I did not keep the exact error.

Checking RDS, the Postgres disk had indeed filled up - all 10 gigs. I resized it to restore service, and tunnelled in to find database usage of:

     name      |           owner           |   size    
---------------+---------------------------+-----------
 rdsadmin      | rdsadmin                  | No Access
 credhub       | adminby6djcbv1rdm3k63n7j7 | 8945 MB
 concourse_atc | adminby6djcbv1rdm3k63n7j7 | 181 MB
 bosh          | adminby6djcbv1rdm3k63n7j7 | 12 MB
 uaa           | adminby6djcbv1rdm3k63n7j7 | 8935 kB
 template1     | adminby6djcbv1rdm3k63n7j7 | 7343 kB
 template0     | rdsadmin                  | 7233 kB
 postgres      | adminby6djcbv1rdm3k63n7j7 | 7233 kB

The relation size in credhub was:

               relation                |  size   
---------------------------------------+---------
 public.request_audit_record           | 5085 MB
 public.event_audit_record             | 2465 MB
 public.event_audit_record_pkey        | 693 MB
 public.request_audit_record_pkey      | 691 MB
 public.auth_failure_audit_record      | 832 kB
 pg_toast.pg_toast_2618                | 376 kB
 pg_toast.pg_toast_2619                | 72 kB
 public.encrypted_value                | 72 kB
…truncated…

I’m not a credhub expert. Things I guess might be useful in diagnosing this:

  1. select count (distinct uaa_url) from request_audit_record gives 1; the record is https://an_ip:8443/oauth/token
  2. select count(*) from request_audit_record; gives 17735751
  3. A random selection of the rows in request_audit_record gives entries similar to:


18c85002-8fae-4d7e-9aa4-bad4610f9e43 | 127.0.0.1 | 1516213545469 | /api/v1/data | 127.0.0.1 | 1516210813 | 1516214413 | https://an_ip:8443/oauth/token | | | | credhub.write,credhub.read | client_credentials | atc_to_credhub | GET | 200 | path=<73 characters redacted> | uaa


4. select count(*) from event_audit_record gives 17740039
5. select operation, count(*) from event_audit_record group by operation; gives

     operation     |  count  
-------------------+---------
 credential_update |     101
 credential_delete |      23
 credential_find   | 8872644
 acl_update        |     255
 credential_access | 8872480
  6. Records in event_audit_record have the form:

b2b1aa8a-10e1-4777-b742-07df841918fb | 7d6d796e-d391-496f-90bd-253ed2cc55c0 | 1516111765973 | credential_update | <redacted 73 characters of credential path> | uaa-user:94d61c71-12e4-42ce-9d59-03292aa2c382 | t

Evidently, something about our setup is causing an unexpectedly large number of credhub uses (perhaps the constant git polling?). I will leave the tables intact for a few days in case they are useful for further diagnostics, but will have to truncate them sooner rather than later.

Let me know what I can do to help!

CC @jpluscplusm

Permission denied

I'm really not sure if this belongs here, with Concourse itself, or if I did something wrong, so sorry in advance.

We have concourse running on AWS with concourse-up, but pipelines are failing with this:

runc create: exit status 1: container_linux.go:264: starting container process caused "process_linux.go:339: container init caused \"rootfs_linux.go:56: mounting \\\"/var/vcap/data/baggageclaim/volumes/live/31a2cf2b-96d8-4a45-7f54-678bc14b60ee/volume\\\" to rootfs \\\"/var/vcap/data/garden/graph/aufs/mnt/8da3238024d4ca66aa84fba1c718807b4cd727eb8ea11728325b3de7bd53e8ff\\\" at \\\"/var/vcap/data/garden/graph/aufs/mnt/8da3238024d4ca66aa84fba1c718807b4cd727eb8ea11728325b3de7bd53e8ff/scratch\\\" caused \\\"mkdir /var/vcap/data/garden/graph/aufs/mnt/8da3238024d4ca66aa84fba1c718807b4cd727eb8ea11728325b3de7bd53e8ff/scratch: permission denied\\\"\""

The permission denied error here seems to indicate some kind of setup failure, but I don't really know where to look. Any ideas?

Support Vault for credential management

Hi,
Thanks for a great tool. We have just adopted it instead of managing everything ourselves as Docker images in AWS ECS (we do not want to invest in BOSH just for Concourse, so this project fits us perfectly!).

However, we would like to see Vault support for credential management, either as config parameters pointing to an existing Vault installation or, even better, an optional flag to install Vault using BOSH. There is already a BOSH release for this.

We would be happy to assist in testing and providing feedback.

Using an AWS managed certificate (ACM)

We want to use an AWS-managed SSL certificate (e.g. generated by ACM), and as far as I can tell there is no way for us to get our hands on that certificate's private key. Do you know if it's possible to run concourse-up with an ACM-requested certificate (not an imported one)? If so, how?

Supporting or documenting backups (also credhub)

Hi
We're using the concourse-up tool with success, nice work!

However, as concourse becomes more and more business critical, we would like to ask if there are any guidelines for backing up the system.
This is also important for Credhub, given keys and secrets are stored there.
We'll schedule a backup of the database, but is this enough?
I'm not too familiar with the inner workings of credhub, so I'm not sure how the keys/secrets are actually stored.

In the long term it would be nice if the tool automatically enabled backups to S3 of everything vital, so that the tool could provision a new concourse instance from the backups instead of from scratch.

Alternatively, supporting a hot or cold standby in another region would be nice.

So, is it possible for you to provide some insight into what needs backing up and how to restore it, plus any thoughts on long-term plans for supporting backup/restore in the tool?
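
For what it's worth, our interim plan for the database side is a scheduled RDS snapshot, roughly like this (the instance identifier is a placeholder, and whether a database snapshot alone is sufficient is exactly the question above):

    aws rds create-db-snapshot \
      --db-instance-identifier <concourse-up-rds-instance> \
      --db-snapshot-identifier concourse-backup-$(date +%F)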

Allow EBS encryption

It would be great if concourse-up supported EBS encryption (for persistent and potentially also ephemeral disks).

Ability to specify subnet and custom tags

A couple of questions/suggestions:

  1. We tag everything for cost allocation, so it'd be great to be able to specify these tags as part of the deploy process, e.g.:

    --add-tag "Key=Team,Value=CI" --add-tag "Key=Environment,Value=staging"

    Right now this is just a manual process (roughly the sketch after this list), which means tags are not automatically applied when instances change. Am I missing this somewhere?

  2. Specifying existing VPCs/subnets to deploy into (in addition to just a region) seems pretty crucial for most use cases.
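
For reference, the manual process mentioned in point 1 is roughly the following after every instance change (the instance ID and tags are placeholders, and the --add-tag flag above is only a proposal):

    aws ec2 create-tags \
      --resources i-0123456789abcdef0 \
      --tags Key=Team,Value=CI Key=Environment,Value=staging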

Detailed error messages

Related to #16: if the AWS error conditions were relayed to the console with more verbose context, it would help debugging. For example,

Forbidden: Forbidden
        status code: 403, request id: *****

might become

Could not create S3 bucket (blah blah)

Support for Multiple Worker Types

I see there's only one type of Concourse worker defined in the bosh deployment. Is there any way to easily add more worker types?

If I wasn't using concourse-up I guess I would just modify the BOSH deployment to have another worker entry, but if I do that manually, concourse-up will probably kill it when it auto-updates or when someone scales with the CLI, right?

The primary use case here is to have some GPU nodes with a custom worker name, so that they can be used for some but not all Concourse jobs. Another desirable option might be network- or disk-optimised workers.

Failing to launch the bosh agent

I'm having a problem launching 0.4.6; I'm a bit of a BOSH newbie, so I'm not quite sure where to begin.
Here's the end of the concourse-up deploy output:

...
Started installing CPI
  Compiling package 'ruby_aws_cpi/dc02a5fa6999e95281b7234d4098640b0b90f1e6'... Finished (00:01:42)
  Compiling package 'bosh_aws_cpi/04ca340b3d64ea01aa84bd764cc574805785e97c'... Finished (00:00:01)
  Installing packages... Finished (00:00:00)
  Rendering job templates... Finished (00:00:00)
  Installing job 'aws_cpi'... Finished (00:00:00)
Finished installing CPI (00:01:44)

Starting registry... Finished (00:00:00)
Uploading stemcell 'bosh-aws-xen-hvm-ubuntu-trusty-go_agent/3445.11'... Finished (00:00:04)

Started deploying
  Creating VM for instance 'bosh/0' from stemcell 'ami-c03ec3b8 light'... Finished (00:00:40)
  Waiting for the agent on VM 'i-0512d97c6b57d36aa' to be ready... Failed (00:10:17)
Failed deploying (00:10:58)

Stopping registry... Finished (00:00:00)
Cleaning up rendered CPI jobs... Finished (00:00:00)

Deploying:
  Creating instance 'bosh/0':
    Waiting until instance is ready:
      Post https://mbus:<redacted>@<redacted>:6868/agent: dial tcp <redacted>:6868: i/o timeout

A re-run produces the same error. A curl -k -vv to the public URL times out in the same way; an SSH attempt lands on a host (I get the SSH banner, so the security group rules seem correct) but I can't figure out how to log in (I tried the private_key and the director_key from the S3 bucket, with the usernames root and admin; all four combinations are rejected). I'm not sure how to continue debugging - could you help? Thanks :D
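
For clarity, the attempts above boiled down to roughly the following (URL, IP and key file names are placeholders for the values from my deployment):

    curl -k -vv https://<concourse-public-url>            # times out
    ssh -i private_key.pem root@<director-public-ip>      # SSH banner appears, then authentication is rejected
    ssh -i director_key.pem admin@<director-public-ip>    # likewise, for all four key/username combinations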

deploy fails on installing CPI

version 0.10.4

Started installing CPI
  Compiling package 'ruby-2.4-r4/0cdc60ed7fdb326e605479e9275346200af30a25'... Failed (00:01:14)
Failed installing CPI (00:01:14)

Installing CPI:
  Compiling job package dependencies for installation:
    Compiling job package dependencies:
      Compiling package:
        Running command: 'bash -x packaging', stdout: 'checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes

... 3k lines ...

	from setup.rb:46:in `<main>'
+ echo 'Cannot install rubygems'
+ exit 1
':
          exit status 1

Exit code 1
exit status 1

fail to provision workers

I am not able to generate certs with Let's Encrypt using 0.8.6. I have tried with an existing domain that also had a private zone, and then registered a new domain with no records other than the R53 NS and SOA - same error in both cases.

GENERATING BOSH DIRECTOR CERTIFICATE (34.215.198.136, 10.0.0.6)
2018/03/23 16:48:50 [INFO] acme: Registering account for [email protected]
2018/03/23 16:48:51 [INFO][cci.incontact.xyz] acme: Obtaining bundled SAN certificate
2018/03/23 16:48:51 [INFO][cci.incontact.xyz] AuthURL: https://acme-v01.api.letsencrypt.org/acme/authz/GPwqvnpIT8w2hqwikRsZYOHd1-DpixlH5krOpm9BW8M
2018/03/23 16:48:51 [INFO][cci.incontact.xyz] acme: Could not find solver for: http-01
2018/03/23 16:48:51 [INFO][cci.incontact.xyz] acme: Trying to solve DNS-01
map[cci.domain.xyz:Error presenting token: Failed to determine Route 53 hosted zone ID: Could not find the start of authority]
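
For anyone hitting the same thing, checks along these lines should show whether the zone lookup itself is the problem (domain as redacted above; these are plain dig/AWS CLI calls, not concourse-up commands):

    dig +short cci.domain.xyz SOA                 # does public DNS serve a start of authority for the name?
    aws route53 list-hosted-zones-by-name \
      --dns-name cci.domain.xyz                   # is the public hosted zone visible to these credentials?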

Allow distinct CIDR ranges access to Concourse vs Credhub

We treat our Credhub and Concourse as distinct security zones. The Concourse interface has generally wider access than we'd like our Credhub instance to have.

We've been accommodating this by altering the ATC security group after running concourse-up, restricting access on 8844/8443 to a tighter set of ranges. Of course, when we rerun concourse-up deploy, this gets reset (as the group is managed by the Terraform module).
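
The manual tweak we keep reapplying looks roughly like this (group ID and CIDRs are placeholders):

    # drop the wider rule concourse-up created, then re-add a tighter one
    aws ec2 revoke-security-group-ingress \
      --group-id sg-0123456789abcdef0 --protocol tcp --port 8844 --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 --protocol tcp --port 8844 --cidr 10.20.0.0/16
    # ...and likewise for 8443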

It would be super if we could specify distinct ranges for Concourse vs Credhub in concourse-up to avoid this.

README clarification regarding system prerequisites

tl;dr- If people know the high-level prereqs they can quickly understand the scope and context of the project. Also, offering a clear starting point tends to help people get started.

For example, in the Prerequisites section: "Recommended use is a, b, c. Popular use is d, c, or e. We don't recommend x, y, or z."

I also ran into some packages that didn't play nicely with Ruby 1.9, so I immediately began to question whether or not I had the right distro as a starting point. Fast-forward an hour and I've hit 2 different issues on 2 different distros, so I'm thinking "I dunno? Seemed cool but I couldn't get it working, or even identify my hurdles, in 1 hour so I'll go find another thing to check out".

Thanks! Love this idea and think you guys have done an awesome job on this project.

Using temporary credentials for initial deployment breaks self-updates

The self-updating pipeline has its AWS creds baked in from the initial CLI invocation of concourse-up. If those creds were issued by aws sts assume-role or equivalent, the concourse-up-self-update job fails, as the credentials expire a maximum of 60 minutes after creation.

Perhaps an additional self-update IAM user could be created during deployment. This could be made optional - I believe it's possible to detect whether the context you're operating in (from Terraform, etc.) is using temporary creds.
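
For example, the ARN returned by STS makes temporary role credentials easy to spot (account ID and names below are placeholders):

    aws sts get-caller-identity --query Arn --output text
    # arn:aws:iam::123456789012:user/ci-bot                 <- long-lived IAM user
    # arn:aws:sts::123456789012:assumed-role/<role>/<sess>  <- temporary assume-role credentials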

new install fails- bosh director certificate

Due to a bug in the 3.14.1 release of Concourse, we have taken backups of the credhub secrets and the pipeline configs.

We ran concourse-up destroy first, which failed; we fixed the cause and ran destroy again, this time successfully.

Now we are trying to install anew with the same name; however, concourse-up 0.9.2 fails with the following:

GENERATING BOSH DIRECTOR CERTIFICATE (34.240.151.184, 10.0.0.6)
2018/07/02 15:46:11 [INFO] acme: Registering account for [email protected]
acme: Error 400 - urn:acme:error:invalidEmail - Error creating new registration :: invalid contact domain. Contact emails @example.com are forbidden

How can we get past this problem?

Forbidden

I'm getting the following error when trying to deploy. The AWS CLI is configured with a valid IAM account with full access to S3, Route53, EC2, RDS and VPC.

# concourse-up deploy ci --region eu-west-2 --domain ci.aws.domain.com

Forbidden: Forbidden
        status code: 403, request id: A351BAF8B8B2DA4F

I have tested with no flags and get the same result.
