
bosh-lite's Introduction

This repository is deprecated. It is no longer maintained and is not recommended for continued use: the Vagrant-based BOSH Lite has been deprecated in favor of the VirtualBox-based BOSH Lite.

The original purpose of this project was to provide a pre-baked image so you could easily start BOSH with popular tools like Vagrant. Since then, we have made improvements to the provisioning process that avoid extra dependencies such as Vagrant, the original Ruby bosh CLI, and the original bosh-init.

Going forward, please follow the recommended guide for running BOSH locally using VirtualBox. This improved process uses the same provisioning steps you would use to deploy to any other IaaS, ensures you are using recent BOSH components and features, and makes it easier to change the configuration of BOSH for testing.

bosh-lite's People

Contributors

7hunderbird, adamstegman, ajackson, bgandon, calebamiles, chou, cppforlife, datianshi, drnic, fhanik, gerhard, jfoley, kaixiang, krishicks, leoross, mariash, maxbrunsfeld, michaelklishin, mikegehard, mkocher, mmb, phanle, shalako, squeedee, sykesm, thansmann, tjarratt, vito, xtreme-andrei-dinin, zaksoup


bosh-lite's Issues

cf target timeout

After a successful CF deploy, I have yet to successfully target cloud controller.

± |master ✗| → cf target http://api.10.244.0.34.xip.io
Setting target to http://api.10.244.0.34.xip.io... FAILED
Time of crash:
2014-01-13 15:27:20 -0800

Errno::ETIMEDOUT: Operation timed out - connect(2)

/Users/scoen/.rvm/rubies/ruby-1.9.3-p484/lib/ruby/1.9.1/net/http.rb:763:in `initialize'
/Users/scoen/.rvm/rubies/ruby-1.9.3-p484/lib/ruby/1.9.1/net/http.rb:763:in `open'
/Users/scoen/.rvm/rubies/ruby-1.9.3-p484/lib/ruby/1.9.1/net/http.rb:763:in `block in connect'
/Users/scoen/.rvm/rubies/ruby-1.9.3-p484/lib/ruby/1.9.1/timeout.rb:55:in `timeout'
/Users/scoen/.rvm/rubies/ruby-1.9.3-p484/lib/ruby/1.9.1/timeout.rb:100:in `timeout'
/Users/scoen/.rvm/rubies/ruby-1.9.3-p484/lib/ruby/1.9.1/net/http.rb:763:in `connect'
/Users/scoen/.rvm/rubies/ruby-1.9.3-p484/lib/ruby/1.9.1/net/http.rb:756:in `do_start'
/Users/scoen/.rvm/rubies/ruby-1.9.3-p484/lib/ruby/1.9.1/net/http.rb:745:in `start'
cfoundry-4.7.1/lib/cfoundry/rest_client.rb:148:in `request_uri'
cfoundry-4.7.1/lib/cfoundry/rest_client.rb:60:in `request'
cfoundry-4.7.1/lib/cfoundry/baseclient.rb:93:in `request_raw'
cfoundry-4.7.1/lib/cfoundry/baseclient.rb:88:in `request'
cfoundry-4.7.1/lib/cfoundry/baseclient.rb:66:in `get'
cfoundry-4.7.1/lib/cfoundry/baseclient.rb:62:in `info'
cf-5.4.5/lib/cf/cli/start/target.rb:24:in `block in target'
interact-0.5.2/lib/interact/progress.rb:98:in `with_progress'
cf-5.4.5/lib/cf/cli/start/target.rb:22:in `target'
mothership-0.5.1/lib/mothership/base.rb:66:in `run'
mothership-0.5.1/lib/mothership/command.rb:72:in `block in invoke'
mothership-0.5.1/lib/mothership/command.rb:86:in `instance_exec'
mothership-0.5.1/lib/mothership/command.rb:86:in `invoke'
mothership-0.5.1/lib/mothership/base.rb:55:in `execute'
cf-5.4.5/lib/cf/cli.rb:195:in `block (2 levels) in execute'
cf-5.4.5/lib/cf/cli.rb:202:in `save_token_if_it_changes'
cf-5.4.5/lib/cf/cli.rb:194:in `block in execute'
cf-5.4.5/lib/cf/cli.rb:123:in `wrap_errors'
cf-5.4.5/lib/cf/cli.rb:190:in `execute'
mothership-0.5.1/lib/mothership.rb:45:in `start'
cf-5.4.5/bin/cf:18:in `<top (required)>'
ruby-1.9.3-p484@global/bin/cf:23:in `load'
ruby-1.9.3-p484@global/bin/cf:23:in `<main>'
ruby-1.9.3-p484@bosh-lite/bin/ruby_executable_hooks:15:in `eval'
ruby-1.9.3-p484@bosh-lite/bin/ruby_executable_hooks:15:in `<main>'

I have downgraded the vagrant box to 110 as recommended.

± |master ✗| → cat Vagrantfile
VM_MEMORY = 6*1024
VM_CORES = 4
BOX_VERSION = 110

Vagrant.configure('2') do |config|

config.vm.hostname='bosh-lite'
config.vm.box = "boshlite-ubuntu1204-build#{BOX_VERSION}"
config.vm.network :private_network, ip: '192.168.50.4'

config.vm.provider :virtualbox do |v, override|
override.vm.box_url = "http://bosh-lite-build-artifacts.s3.amazonaws.com/bosh-lite/#{BOX_VERSION}/boshlite-virtualbox-ubuntu1204.box"
v.customize ["modifyvm", :id, "--memory", VM_MEMORY]
v.customize ["modifyvm", :id, "--cpus", VM_CORES]
end

config.vm.provider :vmware_fusion do |v, override|
override.vm.box_url = "http://bosh-lite-build-artifacts.s3.amazonaws.com/bosh-lite/#{BOX_VERSION}/boshlite-vmware-ubuntu1204.box"
v.vmx["numvcpus"] = VM_CORES
v.vmx["memsize"] = VM_MEMORY
end

end

Error while trying to run bundle: `evaluate': undefined method `ruby' for #<Bundler::Dsl:0x7fe541045280> (NoMethodError)

/home/df/boshlite/bosh-lite/Gemfile:3:in `evaluate': undefined method `ruby' for #<Bundler::Dsl:0x7fe541045280> (NoMethodError)
	from /usr/lib/ruby/vendor_ruby/bundler/definition.rb:17:in `build'
	from /usr/lib/ruby/vendor_ruby/bundler.rb:136:in `definition'
	from /usr/lib/ruby/vendor_ruby/bundler/cli.rb:222:in `install'
	from /usr/lib/ruby/vendor_ruby/bundler/vendor/thor/task.rb:22:in `send'
	from /usr/lib/ruby/vendor_ruby/bundler/vendor/thor/task.rb:22:in `run'
	from /usr/lib/ruby/vendor_ruby/bundler/vendor/thor/invocation.rb:118:in `invoke_task'
	from /usr/lib/ruby/vendor_ruby/bundler/vendor/thor.rb:246:in `dispatch'
	from /usr/lib/ruby/vendor_ruby/bundler/vendor/thor/base.rb:389:in `start'
	from /usr/bin/bundle:13

Ruby version: ruby 1.8.7 (2011-06-30 patchlevel 352) [x86_64-linux]

bundle: bundler-1.6.0.pre.1

provision_cf failure - not in gzip format (Zlib::GzipFile::Error)

I've been hitting the following error when trying to run the provision_cf script.

/home/samirah/.rbenv/versions/1.9.3-p448/lib/ruby/gems/1.9.1/gems/bosh_cli-1.2334.0/lib/cli/stemcell.rb:27:in `initialize': not in gzip format (Zlib::GzipFile::Error)

Any ideas?
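A quick local check can confirm whether the downloaded stemcell is actually gzipped; in practice this error often means a truncated download or an HTML error page was saved in place of the tarball. A minimal sketch (the file names below are fabricated for the demo):

```ruby
require 'zlib'
require 'tmpdir'

# The "not in gzip format" error usually means the file handed to
# `bosh upload stemcell` isn't a gzipped tarball at all (truncated
# download, or an HTML error page saved as a .tgz). Quick sanity check:
def gzip?(path)
  Zlib::GzipReader.open(path) { |gz| gz.read(1) }
  true
rescue Zlib::GzipFile::Error
  false
end

# Demo with fabricated files:
Dir.mktmpdir do |dir|
  good = File.join(dir, 'good.tgz')
  File.open(good, 'wb') { |f| gz = Zlib::GzipWriter.new(f); gz.write('hello'); gz.close }

  bad = File.join(dir, 'bad.tgz')
  File.write(bad, '<html>403 Forbidden</html>')

  puts gzip?(good)  # true
  puts gzip?(bad)   # false
end
```

If the check fails, re-download the stemcell rather than retrying the upload.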

scripts/transform.rb breaks manifest

It turns out that after running scripts/transform.rb, some of the multiline properties in the manifest are messed up, in particular the signing_key and verification_key. For example, the verification_key starts out looking like this:

      verification_key: |
        -----BEGIN PUBLIC KEY-----
        MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHFr+KICms+tuT1OXJwhCUmR2d
        KVy7psa8xzElSyzqx7oJyfJ1JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMX
        qHxf+ZH9BL1gk9Y6kCnbM5R60gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBug
        spULZVNRxq7veq/fzwIDAQAB
        -----END PUBLIC KEY-----

but after running the script it looks like this:

      verification_key: ! '-----BEGIN PUBLIC KEY-----

        MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHFr+KICms+tuT1OXJwhCUmR2d

        KVy7psa8xzElSyzqx7oJyfJ1JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMX

        qHxf+ZH9BL1gk9Y6kCnbM5R60gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBug

        spULZVNRxq7veq/fzwIDAQAB

        -----END PUBLIC KEY-----

'

This ends up corrupting the loggregator configuration, preventing its job from starting.
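For anyone debugging a transform script like this: the job consumes the property's string value, not its YAML scalar style, so a load/dump round trip must preserve that string byte-for-byte. A minimal sketch of that invariant, using a placeholder key rather than the real manifest:

```ruby
require 'yaml'

# A multiline property like verification_key, written as a YAML literal
# block. (The key body is a placeholder, not real key material.)
original = <<~YAML
  verification_key: |
    -----BEGIN PUBLIC KEY-----
    AAAA
    BBBB
    -----END PUBLIC KEY-----
YAML

doc = YAML.load(original)

# Whatever scalar style the emitter picks, the string *value* must
# survive a dump/load round trip byte-for-byte, or the job consuming
# the key fails, as loggregator does here.
reloaded = YAML.load(YAML.dump(doc))
raise 'round trip corrupted the key' unless reloaded == doc

puts reloaded['verification_key']
```

A transform script that parses with YAML.load and writes with YAML.dump keeps the value intact even if the emitted style changes; the corruption in the report suggests the blank lines were introduced by something other than a clean round trip.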

Latest fusion box (build 110) from s3 doesn't boot on fusion 6.0.2

Pulled the latest code, brought everything up to date, and ensured that I don't have any old boxes floating around:

[12:51:05] (1.9.3@bosh-lite) [~/workspace/bosh-lite]
[sykesm@sykesm-macbook] (master)$ git pull && git status && vagrant box list
Already up-to-date.
On branch master
Your branch is up-to-date with 'origin/master'.

nothing to commit, working directory clean
dummy     (openstack)
lucid64   (vmware_fusion)
precise64 (openstack)
precise64 (vmware_fusion)

Then I tried to bring up the box:

[12:51:09] (1.9.3@bosh-lite) [~/workspace/bosh-lite]
[sykesm@sykesm-macbook] (master)$ vagrant up
Bringing machine 'default' up with 'vmware_fusion' provider...
[default] Box 'boshlite-ubuntu1204-build110' was not found. Fetching box from specified URL for
the provider 'vmware_fusion'. Note that if the URL does not have
a box for this provider, you should interrupt Vagrant now and add
the box yourself. Otherwise Vagrant will attempt to download the
full box prior to discovering this error.
Downloading or copying the box...
Extracting box...
Successfully added box 'boshlite-ubuntu1204-build110' with provider 'vmware_desktop'!
[default] Cloning VMware VM: 'boshlite-ubuntu1204-build110'. This can take some time...
[default] Verifying vmnet devices are healthy...
[default] Preparing network adapters...
[default] Starting the VMware VM...
An error occurred while executing `vmrun`, a utility for controlling
VMware machines. The command and output are below:

Command: ["start", "/Users/sykesm/workspace/bosh-lite/.vagrant/machines/default/vmware_fusion/13388229-aebd-4052-84ea-c91949697b1f/boshlite-vmware-ubuntu1204.vmx", "nogui", {:notify=>[:stdout, :stderr]}]

Stdout: Error: Cannot open VM: /Users/sykesm/workspace/bosh-lite/.vagrant/machines/default/vmware_fusion/13388229-aebd-4052-84ea-c91949697b1f/boshlite-vmware-ubuntu1204.vmx, Cannot read the virtual machine configuration file

It looks like the packer build is generating a bad configuration file for VMware. I had this issue when using packer 0.4, but things worked with v0.3.11. You may want to pin the packer version, or open an issue with packer if you can figure out what's causing it.

bosh deployments - 500s

$ bosh deployments
HTTP 500: 

Inside vagrant:

$ tail -f /var/log/director/current
...
2014-01-29_21:28:53.57858 E, [2014-01-29T13:28:53.577548 #965] [0xfe1078] ERROR -- : NoMethodError - undefined method `name' for #<Bosh::Director::Models::Deployment:0x00000003f59940>:
2014-01-29_21:28:53.57860 /opt/rbenv/versions/1.9.3-p484/lib/ruby/gems/1.9.1/gems/bosh-director-1.5.0.pre.1657/lib/bosh/director/api/controllers/deployments_controller.rb:140:in `block (2 levels) in <class:DeploymentsController>'

Bundle spiff

Today the "non-spiff" approach was removed.

The benefit of the "non-spiff" approach was that it didn't require building spiff from source (setting up golang, etc.). Can we please bundle a set of working spiff builds with this repo?

Or can the spiff tool be shipped pre-built?

From a standing start, for users who've never set up golang development, using bosh-lite got harder today.

Even with add-route running, cannot access warden bosh containers

$ scripts/add-route
Adding the following route entry to your local route table to enable direct warden container access. Your sudo password may be required.
  - net 10.244.0.0/24 via 192.168.50.4
Password:
route: writing to routing socket: File exists
add net 10.244.0.0: gateway 192.168.50.4: File exists
$ cf target http://api.10.244.0.254.xip.io
Setting target to http://api.10.244.0.254.xip.io... FAILED
Time of crash:
  2013-10-08 20:18:35 -0700

Errno::EADDRNOTAVAIL: Can't assign requested address - connect(2)

bosh-lite not building today

[2013-10-30T17:48:21+00:00] INFO: cookbook_file[/opt/warden/config/warden-cpi-vm.yml] owner changed to 1000
[2013-10-30T17:48:21+00:00] INFO: execute[rbenv rehash] ran successfully

================================================================================
Error executing action `run` on resource 'execute[setup_warden]'
================================================================================


Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '1'
---- Begin output of /opt/rbenv/shims/bundle install && /opt/rbenv/shims/bundle exec rake setup:bin[/opt/warden/config/warden-cpi-vm.yml] ----
STDOUT: Fetching gem metadata from http://rubygems.org/.........
Fetching gem metadata from http://rubygems.org/..
Fetching https://github.com/cloudfoundry/common.git
Fetching https://github.com/cloudfoundry/membrane.git
Fetching https://github.com/cloudfoundry/steno.git
Fetching https://github.com/cloudfoundry/warden.git
Using rake (0.9.2.2) 
Installing beefcake (0.3.7) 
Installing diff-lcs (1.1.3) 
Installing eventmachine (1.0.0) 
Installing posix-spawn (0.3.6) 
Using em-posix-spawn (0.1.10) from https://github.com/cloudfoundry/common.git (at master) 
Installing hashie (1.2.0) 
Installing multi_json (1.3.6) 
Installing multi_xml (0.5.1) 
Installing rack (1.4.1) 
Installing rack-mount (0.8.3) 
Installing grape (0.2.1) 
Using membrane (0.0.3) from https://github.com/cloudfoundry/membrane.git (at master) 
Installing pidfile (0.3.0) 
Installing rspec-core (2.11.0) 
Installing rspec-expectations (2.11.1) 
Installing rspec-mocks (2.11.1) 
Installing rspec (2.11.0) 
Installing yajl-ruby (1.1.0) 
Using steno (0.0.15) from https://github.com/cloudfoundry/steno.git (at master) 
Using warden-protocol (0.1.3) from https://github.com/cloudfoundry/warden.git (at master) 
Using warden-client (0.1.0) from https://github.com/cloudfoundry/warden.git (at master) 
Using bundler (1.3.5) 
Your bundle is complete!
Use `bundle show [gemname]` to see where a bundled gem is installed.
STDERR: /opt/rbenv/versions/1.9.3-p448/lib/ruby/gems/1.9.1/gems/rake-0.9.2.2/bin/rake:30:in `require': no such file to load -- rake (LoadError)
    from /opt/rbenv/versions/1.9.3-p448/lib/ruby/gems/1.9.1/gems/rake-0.9.2.2/bin/rake:30
    from /opt/chef/embedded/bin/rake:23:in `load'
    from /opt/chef/embedded/bin/rake:23
---- End output of /opt/rbenv/shims/bundle install && /opt/rbenv/shims/bundle exec rake setup:bin[/opt/warden/config/warden-cpi-vm.yml] ----
Ran /opt/rbenv/shims/bundle install && /opt/rbenv/shims/bundle exec rake setup:bin[/opt/warden/config/warden-cpi-vm.yml] returned 1


Resource Declaration:
---------------------
# In /tmp/vagrant-chef-1/chef-solo-2/cookbooks/bosh-lite/recipes/warden.rb

 29: execute "setup_warden" do
 30:   cwd "/opt/warden/warden"
 31:   command "/opt/rbenv/shims/bundle install && /opt/rbenv/shims/bundle exec rake setup:bin[/opt/warden/config/warden-cpi-vm.yml]"
 32:   action :run
 33: end
 34: 



Compiled Resource:
------------------
# Declared in /tmp/vagrant-chef-1/chef-solo-2/cookbooks/bosh-lite/recipes/warden.rb:29:in `from_file'

execute("setup_warden") do
  action [:run]
  retries 0
  retry_delay 2
  command "/opt/rbenv/shims/bundle install && /opt/rbenv/shims/bundle exec rake setup:bin[/opt/warden/config/warden-cpi-vm.yml]"
  backup 5
  cwd "/opt/warden/warden"
  returns 0
  cookbook_name :"bosh-lite"
  recipe_name "warden"
end



[2013-10-30T17:48:49+00:00] INFO: Running queued delayed notifications before re-raising exception
[2013-10-30T17:48:49+00:00] ERROR: Running exception handlers
[2013-10-30T17:48:49+00:00] ERROR: Exception handlers complete
[2013-10-30T17:48:49+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2013-10-30T17:48:50+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
No error message

make_manifest_spiff => "error generating manifest: unresolved nodes"

@vito any thoughts on why I might be getting this error? I believe I'm using latest spiff. I'm also using cf-release master/HEAD.

$ ./scripts/make_manifest_spiff
Target already set to `Bosh Lite Director'
53e56169-85eb-428f-b2ef-04caf08525bb
2013/12/10 17:20:23 error generating manifest: unresolved nodes:
    dynaml.ConcatenationExpr{{[jobs nats_z1 networks cf1 static_ips]} {[jobs nats_z2 networks cf2 static_ips]}}
Incorrect YAML structure in `/Users/drnic/Projects/ruby/gems/cloudfoundry/bosh-lite/manifests/cf-manifest.yml': expected Hash at the root

Sorry, your current directory doesn't look like release directory

When running ./scripts/provision_cf, I get the following error...

Director
Name Bosh Lite Director
URL https://192.168.50.4:25555
Version 1.2559.0 (fe0b2436)
User admin
UUID c5f8d0da-f8ac-4918-a1a3-0a846fb97d09
CPI warden
dns disabled
compiled_package_cache enabled (provider: local)
snapshots disabled

Deployment
Manifest /home/programsam/tmp/bosh-lite/manifests/cf-manifest.yml

  • deploy_release
    ++ find /home/programsam/tmp/cf-release/releases -regex '.*cf-[0-9]{3}.yml'
    ++ sort
    ++ tail -n 1
  • MOST_RECENT_CF_RELEASE=
  • bosh upload release --skip-if-exists
    Sorry, your current directory doesn't look like release directory

It turns out this command

find /home/programsam/tmp/cf-release/releases -regex '.*cf-[0-9]{3}.yml'

is not finding any of the releases in the cf-release directory. The regex seems to be causing the problem. I'm running Fedora 19.
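This is likely a find(1) regex dialect issue: GNU find's default dialect does not treat `{3}` as an interval, so the pattern matches nothing. One way to sidestep the dialect differences entirely is to glob and filter in Ruby, whose regex engine behaves the same everywhere. A sketch, where `latest_cf_release` and the demo file names are made up for illustration:

```ruby
require 'tmpdir'

# Portable replacement for `find releases -regex '.*cf-[0-9]{3}.yml'`:
# glob first, then filter with a Ruby regex, which honors {3} intervals
# regardless of which find(1) dialect the host ships.
def latest_cf_release(releases_dir)
  Dir.glob(File.join(releases_dir, '**', '*.yml'))
     .grep(/cf-[0-9]{3}\.yml\z/)
     .max_by { |path| path[/cf-([0-9]{3})\.yml\z/, 1].to_i }
end

# Demo with fabricated release files:
Dir.mktmpdir do |dir|
  %w[cf-169.yml cf-170.yml cf-9.yml].each { |f| File.write(File.join(dir, f), '') }
  puts latest_cf_release(dir)  # path ending in cf-170.yml
end
```

On GNU find itself, adding `-regextype posix-extended` before `-regex` should also make the `{3}` interval work.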

provision_cf script doesn't complete

This is what it ends with:

From https://github.com/cloudfoundry/warden
   cf7f0a8..a4250d2  master     -> origin/master
You are not currently on a branch. Please specify which
branch you want to merge with. See git-pull(1) for details.

    git pull <remote> <branch>

and nothing is deployed.

New tasks are queuing but not being run

I'm using the latest bosh-lite with the pre-built VirtualBox box. I've run vagrant destroy; vagrant up, deleted tmp and .vagrant, and tried upgrading to Vagrant 1.4, but each time bosh-lite isn't doing the work I need it to do; it just queues tasks.

✗ bosh upload stemcell latest-bosh-stemcell-warden.tgz

Verifying stemcell...
File exists and readable                                     OK
Using cached manifest...
Stemcell properties                                          OK

Stemcell info
-------------
Name:    bosh-stemcell
Version: 993

Checking if stemcell already exists...
No

Uploading stemcell...

latest-bosh-s: 100% |oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo| 170.4MB  37.9MB/s Time: 00:00:04

Director task 1

Update stemcell
  extracting stemcell archive (00:00:01)                                                            
  verifying stemcell manifest (00:00:00)                                                            
  checking if this stemcell already exists (00:00:00)                                               
  uploading stemcell bosh-stemcell/993 to the cloud (00:00:07)                                      
  save stemcell bosh-stemcell/993 (stemcell-ed9d039c-e808-4a07-bbe1-5d5c86cfa61a) (00:00:00)        
Not done                5/5 00:00:08                                                                

Task 1 queued

Error executing action `install` on resource 'chef_gem[pg]'

[2013-10-02T22:10:59+00:00] WARN: Current  package[debootstrap]: /tmp/vagrant-chef-1/chef-solo-2/cookbooks/bosh-lite/recipes/bosh.rb:20:in `block in from_file'
[2013-10-02T22:11:02+00:00] INFO: execute[apt-get update] ran successfully

================================================================================
Error executing action `install` on resource 'chef_gem[pg]'
================================================================================


Gem::Installer::ExtensionBuildError
-----------------------------------
ERROR: Failed to build gem native extension.

        /opt/chef/embedded/bin/ruby extconf.rb
checking for pg_config... yes
Using config values from /usr/bin/pg_config
checking for libpq-fe.h... yes
checking for libpq/libpq-fs.h... yes
checking for pg_config_manual.h... yes
checking for PQconnectdb() in -lpq... no
checking for PQconnectdb() in -llibpq... no
checking for PQconnectdb() in -lms/libpq... no
Can't find the PostgreSQL client library (libpq)
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers.  Check the mkmf.log file for more
details.  You may need configuration options.
...

execution expired on bosh deploy, director unresponsive

Preparing configuration
binding configuration (00:00:01)
Done 1/1 00:00:01

Started updating job ha_proxy_z1: ha_proxy_z1/0
Done updating job ha_proxy_z1: ha_proxy_z1/0
Started updating job nats_z1: nats_z1/0
Done updating job nats_z1: nats_z1/0
Started updating job postgres_z1: postgres_z1/0
Done updating job postgres_z1: postgres_z1/0
Started updating job uaa_z1: uaa_z1/0
Perform request get, https://192.168.50.4:25555/tasks/3, {"Authorization"=>"Basic YWRtaW46YWRtaW4="}, nil
REST API call exception: execution expired

maybe I'll try again

± |master ✗| → bosh deploy
Getting deployment properties from director...
Perform request get, https://192.168.50.4:25555/deployments/cf-warden/properties, {"Content-Type"=>"application/json", "Authorization"=>"Basic YWRtaW46YWRtaW4="}, nil
REST API call exception: execution expired

vagrant reload succeeded, but now director is unresponsive.

± |master ✗| → bosh deploy
Getting deployment properties from director...
Compiling deployment manifest...
HTTP 500:

now what?

README and Vagrantfile say the IP should be on a private network, but the box came up with a public network IP

I wasn't able to bosh target the vm until @mmb pointed me at this command:

± |master ✗| → vagrant ssh-config
Host default
HostName 172.16.95.129
User vagrant
Port 22
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /Users/scoen/.vagrant.d/insecure_private_key
IdentitiesOnly yes
LogLevel FATAL

± |master ✗| → cat Vagrantfile
VM_MEMORY = 6*1024
VM_CORES = 4
BOX_VERSION = 110

Vagrant.configure('2') do |config|

config.vm.hostname='bosh-lite'
config.vm.box = "boshlite-ubuntu1204-build#{BOX_VERSION}"
config.vm.network :private_network, ip: '192.168.50.4'

config.vm.provider :virtualbox do |v, override|
override.vm.box_url = "http://bosh-lite-build-artifacts.s3.amazonaws.com/bosh-lite/#{BOX_VERSION}/boshlite-virtualbox-ubuntu1204.box"
v.customize ["modifyvm", :id, "--memory", VM_MEMORY]
v.customize ["modifyvm", :id, "--cpus", VM_CORES]
end

config.vm.provider :vmware_fusion do |v, override|
override.vm.box_url = "http://bosh-lite-build-artifacts.s3.amazonaws.com/bosh-lite/#{BOX_VERSION}/boshlite-vmware-ubuntu1204.box"
v.vmx["numvcpus"] = VM_CORES
v.vmx["memsize"] = VM_MEMORY
end

end

vagrant up fails at: Unable to resolve dependencies: director requires bosh_cpi

Currently the installation of the director seems to break the provisioning process with error:

ERROR:  While executing gem ... (Gem::DependencyError)
    Unable to resolve dependencies: director requires bosh_cpi (~> 1.5.0.pre.1090); bosh_vcloud_cpi requires bosh_cpi (>= 0.4.2); bosh_vsphere_cpi requires bosh_cpi (~> 1.5.0.pre.1100); bosh_aws_cpi requires bosh_cpi (~> 1.5.0.pre.1090); bosh_openstack_cpi requires bosh_cpi (~> 1.5.0.pre.1090)

make virtualbox/boshlite-withcf-ubuntu1204.box fails during cf compilation

$ make virtualbox/boshlite-withcf-ubuntu1204.box
...
    virtualbox: Preparing deployment
    virtualbox:   binding deployment (00:00:00)
    virtualbox:   binding releases (00:00:00)
    virtualbox:   binding existing deployment (00:00:00)
    virtualbox:   binding resource pools (00:00:00)
    virtualbox:   binding stemcells (00:00:00)
    virtualbox:   binding templates (00:00:00)
    virtualbox:   binding properties (00:00:00)
    virtualbox:   binding unallocated VMs (00:00:00)
    virtualbox:   binding instance networks (00:00:00)
    virtualbox: Done
    virtualbox:
    virtualbox: Preparing package compilation
    virtualbox:
    virtualbox: Compiling packages
    virtualbox:   buildpack_cache/3.1-dev: Timed out pinging to 1c58d7c7-8cfb-484a-ad27-8ec8eba43020 after 600 seconds (00:10:03)
    virtualbox:   dea_logging_agent/10.1-dev: Timed out pinging to 2dbd21c2-3c34-41db-a62f-10813fc1e253 after 600 seconds (00:10:03)
    virtualbox:   rootfs_lucid64/1.1-dev: Timed out pinging to 1c498a97-faf8-46ce-b878-1dcc41e0d5ba after 600 seconds (00:10:03)
    virtualbox: Error.  3/24 00:10:03
    virtualbox:
    virtualbox: Error 450002: Timed out pinging to 1c58d7c7-8cfb-484a-ad27-8ec8eba43020 after 600 seconds
    virtualbox:
    virtualbox: Task 3 error
    virtualbox:
    virtualbox: For a more detailed error report, run: bosh task 3 --debug
==> virtualbox: Unregistering and deleting virtual machine...
==> virtualbox: Deleting output directory...
Build 'virtualbox' errored: Script exited with non-zero exit status: 1

==> Some builds didn't complete successfully and had errors:
--> virtualbox: Script exited with non-zero exit status: 1

==> Builds finished but no artifacts were created.
make: *** [virtualbox/boshlite-withcf-ubuntu1204.box] Error 1

This has happened twice.

cf-release submodule update and release Upload errors

Hi,

I followed the steps in https://github.com/cloudfoundry/bosh-lite/README.md and hit two main errors in the deployment stage:

(1) when running "./update" in the cf-release repository, there is an error:

fatal: unable to connect to github.com:
github.com[0: 192.30.252.131]: errno=Connection refused

Unable to fetch in submodule path 'src/github.com/cloudfoundry/loggregator_consumer'
Failed to recurse into submodule path 'src/loggregator'

I tried several times; the problem is always related to "loggregator_consumer". I checked .gitmodules and .git/config, but "loggregator_consumer" does not appear in either.

(2) I went on by ignoring the first error. When running "bosh upload release releases/cf-170.yml", I got a second error:

cloud_controller_ng (48) FOUND REMOTE
Downloading 15346a29-1ca4-4d99-bfea-12e7abcb927d...
Blobstore error: Failed to fetch object, underlying error: #<HTTPClient::ConnectTimeoutError: execution expired>
/home/nwy/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/1.9.1/openssl/buffering.rb:53:in `sysread'
/home/nwy/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/1.9.1/openssl/buffering.rb:53:in `sysread'
/home/nwy/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/1.9.1/openssl/buffering.rb:53:in `fill_rbuff'
/home/nwy/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/1.9.1/openssl/buffering.rb:200:in `gets'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/httpclient-2.2.4/lib/httpclient/session.rb:337:in `gets'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/httpclient-2.2.4/lib/httpclient/session.rb:850:in `block in parse_header'
/home/nwy/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/1.9.1/timeout.rb:69:in `timeout'
/home/nwy/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/1.9.1/timeout.rb:100:in `timeout'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/httpclient-2.2.4/lib/httpclient/session.rb:847:in `parse_header'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/httpclient-2.2.4/lib/httpclient/session.rb:808:in `connect_ssl_proxy'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/httpclient-2.2.4/lib/httpclient/session.rb:738:in `block in connect'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/httpclient-2.2.4/lib/httpclient/session.rb:731:in `connect'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/httpclient-2.2.4/lib/httpclient/session.rb:594:in `query'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/httpclient-2.2.4/lib/httpclient/session.rb:161:in `query'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/httpclient-2.2.4/lib/httpclient.rb:1060:in `do_get_block'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/httpclient-2.2.4/lib/httpclient.rb:869:in `block in do_request'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/httpclient-2.2.4/lib/httpclient.rb:956:in `protect_keep_alive_disconnected'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/httpclient-2.2.4/lib/httpclient.rb:868:in `do_request'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/httpclient-2.2.4/lib/httpclient.rb:756:in `request'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/httpclient-2.2.4/lib/httpclient.rb:661:in `get'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/blobstore_client-1.2334.0/lib/blobstore_client/simple_blobstore_client.rb:38:in `get_file'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/blobstore_client-1.2334.0/lib/blobstore_client/s3_blobstore_client.rb:85:in `get_file'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/blobstore_client-1.2334.0/lib/blobstore_client/base.rb:50:in `get'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/blobstore_client-1.2334.0/lib/blobstore_client/sha1_verifiable_blobstore_client.rb:19:in `get'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/blobstore_client-1.2334.0/lib/blobstore_client/retryable_blobstore_client.rb:19:in `block in get'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/bosh_common-1.2334.0/lib/common/retryable.rb:21:in `block in retryer'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/bosh_common-1.2334.0/lib/common/retryable.rb:19:in `loop'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/bosh_common-1.2334.0/lib/common/retryable.rb:19:in `retryer'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/blobstore_client-1.2334.0/lib/blobstore_client/retryable_blobstore_client.rb:18:in `get'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/bosh_cli-1.2334.0/lib/cli/release_compiler.rb:150:in `find_in_indices'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/bosh_cli-1.2334.0/lib/cli/release_compiler.rb:109:in `find_package'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/bosh_cli-1.2334.0/lib/cli/release_compiler.rb:59:in `block in compile'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/bosh_cli-1.2334.0/lib/cli/release_compiler.rb:53:in `each'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/bosh_cli-1.2334.0/lib/cli/release_compiler.rb:53:in `compile'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/bosh_cli-1.2334.0/lib/cli/commands/release.rb:209:in `upload_manifest'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/bosh_cli-1.2334.0/lib/cli/commands/release.rb:116:in `upload'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/bosh_cli-1.2334.0/lib/cli/command_handler.rb:57:in `run'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/bosh_cli-1.2334.0/lib/cli/runner.rb:56:in `run'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/bosh_cli-1.2334.0/lib/cli/runner.rb:16:in `run'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/gems/bosh_cli-1.2334.0/bin/bosh:7:in `<top (required)>'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/bin/bosh:23:in `load'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/bin/bosh:23:in `<main>'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/bin/ruby_executable_hooks:15:in `eval'
/home/nwy/.rvm/gems/ruby-1.9.3-p547/bin/ruby_executable_hooks:15:in `<main>'

Has anyone met similar errors, or does anyone have a solution? Thanks.

Weiyuan

v144 + make_manifest: Error filling in template `loggregator.json.erb'

Preparing configuration
  binding configuration: Error filling in template `loggregator.json.erb' for `loggregator/0' (line 11: Can't find property `["loggregator.status.user"]') (00:00:00)
Error                   1/1 00:00:00                                                                

Error 80006: Error filling in template `loggregator.json.erb' for `loggregator/0' (line 11: Can't find property `["loggregator.status.user"]')
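For context, BOSH job templates are ERB rendered against manifest properties, and a property the template references but the manifest omits aborts rendering, which is what this error reports. The sketch below imitates that behavior with a simplified, made-up `p` lookup helper and template; the real BOSH template API differs:

```ruby
require 'erb'

# Simplified stand-in for BOSH's property lookup: `p` raises when a
# property referenced by the template is missing from the manifest.
class TemplateContext
  def initialize(properties)
    @properties = properties
  end

  def p(name)
    name.split('.').reduce(@properties) do |node, key|
      node.is_a?(Hash) && node.key?(key) ? node[key] :
        (raise KeyError, "Can't find property `#{name}'")
    end
  end

  def bound  # expose this object's binding for ERB rendering
    binding
  end
end

template = ERB.new('{"user": "<%= p("loggregator.status.user") %>"}')

ok = TemplateContext.new('loggregator' => { 'status' => { 'user' => 'admin' } })
puts template.result(ok.bound)  # renders fine

bad = TemplateContext.new('loggregator' => {})
begin
  template.result(bad.bound)
rescue KeyError => e
  puts e.message  # Can't find property `loggregator.status.user'
end
```

The usual fix for the reported error is adding the missing `loggregator.status` credentials to the deployment manifest (or regenerating it from a stub that defines them).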

Can't authenticate with admin/admin?

I'm following the README for the bosh-lite install, and I'm getting stuck after setting the API endpoint and trying to authenticate with admin/admin. Is there something obvious I've missed? I'm using VirtualBox and haven't run into a single error while setting things up to this point. I used the manual deploy when deploying Cloud Foundry; the only thing I haven't done is step 8, the tests, as they were being difficult to deal with on my computer, but I don't see how the tests could affect this. Is anybody familiar with this happening already?

enabling DNS

Any idea how to enable DNS?

meruvian@baremerv:~$ bosh status
Config
/home/meruvian/.bosh_config

Director
Name Bosh Lite Director
URL https://192.168.50.4:25555
Version 1.2559.0 (fe0b2436)
User admin
UUID c5f8d0da-f8ac-4918-a1a3-0a846fb97d09
CPI warden
dns disabled
compiled_package_cache enabled (provider: local)
snapshots disabled

Deployment
not set

Internal uaa job & vagrant host cannot resolve xip.io DNS

I don't know why I'm seeing this: I cannot ping api.10.244.0.254.xip.io from within the vagrant host (warden containers have the same issue), but I can ping the IP directly.

$ ping 10.244.0.254
PING 10.244.0.254 (10.244.0.254) 56(84) bytes of data.
64 bytes from 10.244.0.254: icmp_req=1 ttl=64 time=0.067 ms

$ ping uaa.10.244.0.254.xip.io
hangs

The resolv.conf is:

$ cat /etc/resolv.conf 
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 192.168.46.2
search localdomain

When I add nameserver 8.8.8.8, it still doesn't resolve the xip.io name.

But I can resolve the IP and access login.10.244.0.254.xip.io from my browser, etc.

Has anyone seen this or have ideas for a fix?
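
For context on what should be happening: xip.io is a wildcard DNS service that maps any name of the form <labels>.<IP>.xip.io back to <IP>. That mapping can be sketched in shell; the helper name xip_ip below is ours, purely for sanity-checking which IP a hostname should resolve to (GNU sed's -E flag assumed):

```shell
# Extract the IPv4 address embedded in a *.xip.io hostname (a sketch;
# assumes the standard <labels>.<IPv4>.xip.io layout).
xip_ip() {
  echo "$1" | sed -E 's/^(.*\.)?([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)\.xip\.io$/\2/'
}

xip_ip uaa.10.244.0.254.xip.io   # prints 10.244.0.254
```

If querying a public resolver (e.g. dig @8.8.8.8) returns this IP while the local resolver hangs, the problem is with the nameserver configured in /etc/resolv.conf rather than with xip.io itself.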

UAA returns error: getaddrinfo: Name or service not known

With the latest code I was able to deploy bosh-lite successfully, but when I try to log in using cf it fails with the error "Name or service not known". Any suggestions?

devops@malligai2:~/git/cf/cf-samples/hello$ 
devops@malligai2:~/git/cf/cf-samples/hello$ cf target
Target Information (where will apps be pushed):
  CF instance: http://api.10.244.0.34.xip.io (API version: 2)
  user: N/A
  target app space: N/A (org: N/A)
devops@malligai2:~/git/cf/cf-samples/hello$ cf login
target: http://api.10.244.0.34.xip.io

Time of crash:
  2013-11-05 23:54:30 -0800

CF::UAA::BadTarget: error: getaddrinfo: Name or service not known

cf-uaa-lib-2.0.0/lib/uaa/http.rb:161:in `rescue in net_http_request'
cf-uaa-lib-2.0.0/lib/uaa/http.rb:149:in `net_http_request'
cf-uaa-lib-2.0.0/lib/uaa/http.rb:137:in `request'
cf-uaa-lib-2.0.0/lib/uaa/http.rb:119:in `http_get'
cf-uaa-lib-2.0.0/lib/uaa/http.rb:89:in `json_get'
cf-uaa-lib-2.0.0/lib/uaa/info.rb:70:in `server'

cannot access director (Connection refused)

Hi~~
I'm a beginner with CF. I followed the steps to set up a local development environment; my OS is Ubuntu Kylin 14.04, and I use VirtualBox to create a VM. But when I run the "bosh target 10.0.2.15 lite" command, it returns "cannot access director (Connection refused - connect(2) (https://10.0.2.15:25555))". The VM's firewall is disabled. Did I miss anything?

error generating manifest: unresolved nodes

After updating cf-release and checking out v154, running the make_manifest_spiff script fails.

± |master ✗| → ./scripts/make_manifest_spiff
d235f4e9-fc81-4924-b331-3712f16611ec
2014/01/10 15:25:53 error generating manifest: unresolved nodes:
dynaml.ReferenceExpr{[properties databases address]}
dynaml.ReferenceExpr{[jobs postgres_z1 networks cf1 static_ips [0]]}
dynaml.ConcatenationExpr{{[jobs ha_proxy_z1 networks cf1 static_ips [0]]} {.xip.io}}
dynaml.ReferenceExpr{[jobs nats_z1 networks cf1 static_ips [0]]}
dynaml.ReferenceExpr{[properties databases address]}
dynaml.CallExpr{static_ips [{1}]}
dynaml.CallExpr{static_ips [{1}]}
dynaml.CallExpr{static_ips [{2}]}
dynaml.CallExpr{static_ips [{2}]}
dynaml.CallExpr{static_ips [{3} {4}]}
dynaml.CallExpr{static_ips [{3} {4}]}
dynaml.CallExpr{static_ips [{9}]}
dynaml.CallExpr{static_ips [{5}]}
dynaml.CallExpr{static_ips [{5}]}
dynaml.CallExpr{static_ips [{6}]}
dynaml.CallExpr{static_ips [{7}]}
dynaml.CallExpr{static_ips [{8}]}
Incorrect YAML structure in `/Users/scoen/workspace/bosh-lite/manifests/cf-manifest.yml': expected Hash at the root
Config
/Users/scoen/.bosh_config

Director
Name Bosh Lite Director
URL https://172.16.95.129:25555
Version 1.5.0.pre.1478 (95a039aa)
User admin
UUID d235f4e9-fc81-4924-b331-3712f16611ec
CPI warden
dns disabled
compiled_package_cache enabled (provider: local)
snapshots disabled

Deployment
not set

Latest version for stemcell `bosh-warden-boshlite-ubuntu' is unknown

Hi,
I am trying to install v170 into VirtualBox running on a Mac and I get this error message when issuing 'bosh deploy' (step 7 of the manual deploy):

Latest version for stemcell `bosh-warden-boshlite-ubuntu' is unknown

When issuing 'bosh public stemcells' I get:

+-------------------------------------------------------------+
| Name                                                        |
+-------------------------------------------------------------+
| bosh-stemcell-2427-aws-xen-ubuntu.tgz                       |
| bosh-stemcell-1471_2-aws-xen-ubuntu.tgz                     |
| bosh-stemcell-2611-aws-xen-centos.tgz                       |
| bosh-stemcell-2611-aws-xen-centos-go_agent.tgz              |
| bosh-stemcell-2427-aws-xen-ubuntu-go_agent.tgz              |
| bosh-stemcell-2611-aws-xen-ubuntu-lucid-go_agent.tgz        |
| bosh-stemcell-2611-aws-xen-ubuntu-lucid.tgz                 |
| bosh-stemcell-2611-aws-xen-ubuntu-trusty-go_agent.tgz       |
| light-bosh-stemcell-2427-aws-xen-ubuntu.tgz                 |
| light-bosh-stemcell-1471_2-aws-xen-ubuntu.tgz               |
| light-bosh-stemcell-2611-aws-xen-centos.tgz                 |
| light-bosh-stemcell-2611-aws-xen-centos-go_agent.tgz        |
| light-bosh-stemcell-2427-aws-xen-ubuntu-go_agent.tgz        |
| light-bosh-stemcell-2611-aws-xen-ubuntu-lucid-go_agent.tgz  |
| light-bosh-stemcell-2611-aws-xen-ubuntu-lucid.tgz           |
| light-bosh-stemcell-2611-aws-xen-ubuntu-trusty-go_agent.tgz |
| bosh-stemcell-2427-openstack-kvm-ubuntu.tgz                 |
| bosh-stemcell-1471_2-openstack-kvm-ubuntu.tgz               |
| bosh-stemcell-2611-openstack-kvm-centos.tgz                 |
| bosh-stemcell-2611-openstack-kvm-ubuntu-lucid.tgz           |
| bosh-stemcell-2539-openstack-kvm-centos-go_agent.tgz        |
| bosh-stemcell-2605-openstack-kvm-ubuntu-trusty-go_agent.tgz |
| bosh-stemcell-2427-vcloud-esxi-ubuntu.tgz                   |
| bosh-stemcell-2611-vcloud-esxi-ubuntu-lucid.tgz             |
| bosh-stemcell-1099-vsphere-esxi-ubuntu.tgz                  |
| bosh-stemcell-1099-vsphere-esxi-centos.tgz                  |
+-------------------------------------------------------------+
To download use `bosh download public stemcell <stemcell_name>'. For full url use --full. 

Which one of those is the correct one to download in order to satisfy 'bosh deploy'?

Thanks.

Invalid DNS resolver in /etc/resolv.conf

When testing with bosh-lite, I find that my first "vagrant up" typically fails early on because the DNS server IP address is something odd, like 10.0.2.23 (I don't have the exact value at the moment, but it's something like this). This results in failures to retrieve the requisite files. To fix it, I ssh into the vagrant VM, alter /etc/resolv.conf to use 8.8.8.8 as the DNS server, and then run "vagrant provision".
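
The manual fix described above can be scripted. This is a sketch: inside the VM the file is /etc/resolv.conf and editing it needs root, the function name fix_resolver is ours, and GNU sed's in-place flag is assumed.

```shell
# Point every nameserver entry in the given file at Google's public DNS.
fix_resolver() {
  sed -i 's/^nameserver .*/nameserver 8.8.8.8/' "$1"
}

# inside the VM, as root:  fix_resolver /etc/resolv.conf
```

After fixing the resolver inside the VM, re-run "vagrant provision" from the host as described above. Note that resolvconf may overwrite the file on reboot, so the edit is a temporary workaround.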

Is that VT-d blocking my installation CF v2 with bosh-lite?

Hi. I've run into a problem where my installation gets stuck on "vagrant up".
Here is my installation environment:
the whole installation is on a VM created by VMware vSphere, and the physical machine that vSphere runs on already has CPU VT enabled.

Then my installation steps were:
1. install vagrant: OK
2. install ruby with rvm, rubygems and bundler: OK
3. run bundle in the bosh-lite-master repository: OK
4. install virtualbox: OK
5. vagrant up: Failed

The error shown is:
The guest machine entered an invalid state while waiting for it
to boot. Valid states are 'starting, running'. The machine is in the
'poweroff' state. Please verify everything is configured
properly and try again.

When I run sudo VAGRANT_LOG=debug vagrant up, the error in the stack is:

ERROR vagrant: /opt/vagrant/embedded/gems/gems/vagrant-1.3.5/lib/vagrant/action/builtin/wait_for_communicator.rb:60:in `call'
……………………………………
/opt/vagrant/embedded/gems/gems/vagrant-1.3.5/lib/vagrant/batch_action.rb:63:in `block (2 levels) in run'

When I looked for help online, most solutions were about enabling VT-d. My physical server does provide VT via a BIOS setting. But I've heard that vSphere also has a switch that decides whether or not to expose VT to the VMs it creates.

Is my problem VT-related?
If not, can anyone give me some guidance? Any tip helps.
Thanks.

Note that when I use bosh-lite to install CF v2 on VirtualBox directly on my physical machine, it works.
@Kaixiang

java.net.UnknownHostException: uaa.10.244.0.34.xip.io - after re-opening my laptop

From v110 through v147 I've noticed the following issue: when I close and reopen my MacBook, I often (always?) cannot access Cloud Foundry; trying to log in again gives the following error:

$ gcf login -u admin -p admin
API endpoint: https://api.10.244.0.34.xip.io
Authenticating...
Authentication Server error: I/O error: uaa.10.244.0.34.xip.io; nested exception is java.net.UnknownHostException: uaa.10.244.0.34.xip.io

But bosh cck doesn't recognize any issues:

$ bosh cck
Performing cloud check...

Director task 120

Scanning 13 VMs
  checking VM states (00:00:00)                                                                     
  13 OK, 0 unresponsive, 0 missing, 0 unbound, 0 out of sync (00:00:00)                             
Done                    2/2 00:00:00                                                                

Scanning 5 persistent disks
  looking for inactive disks (00:00:00)                                                             
  5 OK, 0 inactive, 0 mount-info mismatch (00:00:00)                                                
Done                    2/2 00:00:00                                                                

Task 120 done
Started     2014-01-16 17:16:22 UTC
Finished    2014-01-16 17:16:22 UTC
Duration    00:00:00

Scan is complete, checking if any problems found...

sv thinks director didn't start

[2013-10-02T22:27:53+00:00] INFO: runit_service[director] enabled

================================================================================
Error executing action `enable` on resource 'runit_service[director]'
================================================================================


Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '1'
---- Begin output of /usr/bin/sv restart /etc/service/director ----
STDOUT: timeout: run: /etc/service/director: (pid 14074) 8s, got TERM
STDERR: 
---- End output of /usr/bin/sv restart /etc/service/director ----
Ran /usr/bin/sv restart /etc/service/director returned 1


Cookbook Trace:
---------------
/tmp/vagrant-chef-1/chef-solo-1/cookbooks/runit/libraries/provider_runit_service.rb:179:in `restart_service'
/tmp/vagrant-chef-1/chef-solo-1/cookbooks/runit/libraries/provider_runit_service.rb:95:in `action_enable'


Resource Declaration:
---------------------
# In /tmp/vagrant-chef-1/chef-solo-2/cookbooks/bosh-lite/recipes/bosh.rb

107:   runit_service service_name do
108:     default_logger true
109:     options({:user => 'root'})
110:   end
111: end



Compiled Resource:
------------------
# Declared in /tmp/vagrant-chef-1/chef-solo-2/cookbooks/bosh-lite/recipes/bosh.rb:107:in `block in from_file'

runit_service("director") do
  provider Chef::Provider::Service::Runit
  action :enable
  supports {:restart=>true, :reload=>true, :status=>true}
  retries 0
  retry_delay 2
  service_name "director"
  enabled true
  pattern "director"
  status_command "/usr/bin/sv status /etc/service"
  startup_type :automatic
  sv_bin "/usr/bin/sv"
  sv_dir "/etc/sv"
  service_dir "/etc/service"
  options {:user=>"root"}
  log true
  default_logger true
  restart_on_update true
  run_template_name "director"
  log_template_name "director"
  finish_script_template_name "director"
  sv_templates true
  service_mirror # Declared in 

service("director") do
  provider Chef::Provider::Service::Simple
  action [:nothing]
  supports {:restart=>true, :reload=>true, :status=>true}
  retries 0
  retry_delay 2
  service_name "director"
  pattern "director"
  start_command "/usr/bin/sv start /etc/service/director"
  stop_command "/usr/bin/sv stop /etc/service/director"
  status_command "/usr/bin/sv status /etc/service/director"
  restart_command "/usr/bin/sv restart /etc/service/director"
  startup_type :automatic
end

  cookbook_name :"bosh-lite"
  recipe_name "bosh"
end

No loop device support in the latest-bosh-stemcell-warden.tgz

I tried to deploy the MySQL service in a bosh-lite-created CF. MySQL failed to start with this error message:

Tue Oct 22 07:46:05 UTC 2013 ERROR: fail to get one free loop device

This message comes from /var/vcap/jobs/mysql_node/bin/create_mysql_tmp_dir, from this line:

loop_dev=`losetup -f`

I executed "losetup -f" in the MySQL VM (it's actually a Warden instance) and got the following result:
root@31a6a515-e2cf-43d9-8d76-88ea4993f4b4:/var/vcap/jobs/mysql_node/bin# losetup -f
losetup: Could not find any loop device. Maybe this kernel does not know
about the loop device? (If so, recompile or `modprobe loop'.)

Does that mean the stemcell needs to be recompiled?
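
A quick way to confirm the diagnosis inside the container is to count the loop devices the kernel exposes. The snippet below is a diagnostic sketch (device naming /dev/loopN assumed), not part of the stemcell:

```shell
# Count visible loop devices; 0 matches the losetup failure above.
# On a host whose kernel has loop support compiled as a module,
# `sudo modprobe loop` typically creates them.
count_loop_devices() {
  ls /dev/loop[0-9]* 2>/dev/null | wc -l
}

count_loop_devices
```

Inside a Warden container, though, the devices also have to be allowed into the container by the host, so a zero count may reflect container configuration rather than the kernel.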

./scripts/provision_cf fails with Error 100: command exited with failure

Hi,
I've attempted to provision cf multiple times now and I always end up with the same failure:

  Started creating bound missing vms > router_z1/1
   Failed creating bound missing vms > medium_z1/2: command exited with failure (00:00:43)
   Failed creating bound missing vms > small_z1/0: command exited with failure (00:00:47)
   Failed creating bound missing vms > runner_z1/0: command exited with failure (00:00:52)
   Failed creating bound missing vms > medium_z1/6: command exited with failure (00:00:55)
   Failed creating bound missing vms > router_z1/1: command exited with failure (00:00:51)
     Done creating bound missing vms > medium_z1/3 (00:01:08)
     Done creating bound missing vms > medium_z1/0 (00:01:10)
     Done creating bound missing vms > medium_z1/4 (00:01:09)
     Done creating bound missing vms > medium_z1/1 (00:01:09)
     Done creating bound missing vms > large_z1/0 (00:01:12)
     Done creating bound missing vms > medium_z1/5 (00:01:14)
     Done creating bound missing vms > router_z1/0 (00:01:10)
   Failed creating bound missing vms (00:01:16)

Error 100: command exited with failure

Task 3 error
bosh_cli version: 1.2334.0
bosh-lite version (commit SHA): 1d470e6dea4c5cd70d4eda33c28e3eb72de60a54
cf-release version (release or commit SHA): 173
stemcell version: 60

spiff download pulls wrong binary

While working through the README I found an error: the curl command shown to download spiff pulls the wrong binary on OS X (I think it pulls the Linux binary rather than the desired OS X one). The workaround was to go to the git repo and pull the binary down manually from the releases page.
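
A sketch of selecting the release asset for the local platform instead of hard-coding one. The spiff_<os>_amd64.zip naming below is illustrative, not a verified URL pattern; check the actual asset names on the spiff releases page:

```shell
# Build a release asset name from an OS string such as `uname -s` output
# (asset-name pattern assumed, not taken from the spiff project).
spiff_asset() {
  printf 'spiff_%s_amd64.zip\n' "$(echo "$1" | tr '[:upper:]' '[:lower:]')"
}

spiff_asset "$(uname -s)"   # spiff_darwin_amd64.zip on OS X, spiff_linux_amd64.zip on Linux
```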

Unable to log-in after deploying on VirtualBox based environment

After successfully following all the steps in the README, I cannot log in, apparently because of a networking issue. This issue is driving me crazy, so any help would be highly appreciated.

When I try to log in, I get this:

cf login

API endpoint: https://api.10.244.0.34.xip.io

REQUEST: [2014-04-25T12:30:47+02:00]
GET /v2/info HTTP/1.1
Host: api.10.244.0.34.xip.io
Accept: application/json
Content-Type: application/json
User-Agent: go-cli 6.0.2-0bba99f / linux

FAILED
Invalid API endpoint.
Error performing request: Get https://api.10.244.0.34.xip.io/v2/info: dial tcp: lookup api.10.244.0.34.xip.io: no such host
FAILED
Invalid API endpoint.
Error performing request: Get https://api.10.244.0.34.xip.io/v2/info: dial tcp: lookup api.10.244.0.34.xip.io: no such host

Environment:

Linux Ubuntu 14.04 LTS
Vagrant 1.5.2
cf -v => 6.0.2-0bba99f
BOSH 1.2448.0 (in Vagrantfile BOX_VERSION = 186)
bosh releases
+------+----------+-------------+
| Name | Versions | Commit Hash |
+------+----------+-------------+
| cf | 169* | ae04d8ba+ |
+------+----------+-------------+

bosh deployments
+-----------+------------+--------------------------------+
| Name | Release(s) | Stemcell(s) |
+-----------+------------+--------------------------------+
| cf-warden | cf/169 | bosh-warden-boshlite-ubuntu/53 |
+-----------+------------+--------------------------------+

All vms are up and running:
+------------------------------------+---------+---------------+--------------+
| Job/index | State | Resource Pool | IPs |
+------------------------------------+---------+---------------+--------------+
| api_z1/0 | running | large_z1 | 10.244.0.138 |
| etcd_leader_z1/0 | running | medium_z1 | 10.244.0.38 |
| ha_proxy_z1/0 | running | router_z1 | 10.244.0.34 |
| hm9000_z1/0 | running | medium_z1 | 10.244.0.142 |
| loggregator_trafficcontroller_z1/0 | running | small_z1 | 10.244.0.10 |
| loggregator_z1/0 | running | medium_z1 | 10.244.0.14 |
| login_z1/0 | running | medium_z1 | 10.244.0.134 |
| nats_z1/0 | running | medium_z1 | 10.244.0.6 |
| postgres_z1/0 | running | medium_z1 | 10.244.0.30 |
| router_z1/0 | running | router_z1 | 10.244.0.22 |
| runner_z1/0 | running | runner_z1 | 10.244.0.26 |
| uaa_z1/0 | running | medium_z1 | 10.244.0.130 |
+------------------------------------+---------+---------------+--------------+

bosh status

Config
/home/jvazquez/.bosh_config

Director
Name Bosh Lite Director
URL https://192.168.50.4:25555
Version 1.2200.0 (f71e2276)
User admin
UUID 1283c62e-8e7b-43c2-8f97-f42bf8aba812
CPI warden
dns disabled
compiled_package_cache enabled (provider: local)
snapshots disabled

Deployment
Manifest /data/sources/cloudfoundry/bosh-lite/manifests/cf-manifest.yml

whenever I bring up vagrant, jobs are unresponsive, requiring me to run bosh cck

Every morning I do a 'vagrant up' and find that all the jobs are unresponsive. Then I have to do a bosh cck to recreate all the jobs.

± |master ✗| → bosh vms
Deployment `cf-warden'

Director task 52

Task 52 done

+-----------------+--------------------+---------------+-----+
| Job/index | State | Resource Pool | IPs |
+-----------------+--------------------+---------------+-----+
| unknown/unknown | unresponsive agent | | |
| unknown/unknown | unresponsive agent | | |
| unknown/unknown | unresponsive agent | | |
| unknown/unknown | unresponsive agent | | |
| unknown/unknown | unresponsive agent | | |
| unknown/unknown | unresponsive agent | | |
| unknown/unknown | unresponsive agent | | |
| unknown/unknown | unresponsive agent | | |
| unknown/unknown | unresponsive agent | | |
| unknown/unknown | unresponsive agent | | |
| unknown/unknown | unresponsive agent | | |
| unknown/unknown | unresponsive agent | | |
| unknown/unknown | unresponsive agent | | |
| unknown/unknown | unresponsive agent | | |
| unknown/unknown | unresponsive agent | | |
+-----------------+--------------------+---------------+-----+

VMs total: 15
Deployment `cf-warden-mysql'

Director task 53

Task 53 done

+-----------------+--------------------+---------------+-----+
| Job/index | State | Resource Pool | IPs |
+-----------------+--------------------+---------------+-----+
| unknown/unknown | unresponsive agent | | |
| unknown/unknown | unresponsive agent | | |
+-----------------+--------------------+---------------+-----+

VMs total: 2

Error 100: File exists - /vagrant

I followed the steps here and then used ./scripts/provision_cf

... all success ...

Deployment set to `/Users/grk/workspace/bosh-lite/manifests/cf-manifest.yml'
Config
/Users/grk/.bosh_config

Director
Name Bosh Lite Director
URL https://192.168.50.4:25555
Version 1.2559.0 (fe0b2436)
User admin
UUID c5f8d0da-f8ac-4918-a1a3-0a846fb97d09
CPI warden
dns disabled
compiled_package_cache enabled (provider: local)
snapshots disabled

Deployment
Manifest /Users/grk/workspace/bosh-lite/manifests/cf-manifest.yml

  • deploy_release
    ++ find /Users/grk/workspace/cf-release/releases -regex '.*cf-[0-9]{3}.yml'
    ++ sort
    ++ tail -n 1
  • MOST_RECENT_CF_RELEASE=/Users/grk/workspace/cf-release/releases/cf-171.yml
  • bosh upload release --skip-if-exists /Users/grk/workspace/cf-release/releases/cf-171.yml

$ bosh -n deploy
Getting deployment properties from director...
Compiling deployment manifest...

Director task 4
Started preparing deployment
Started preparing deployment > Binding deployment. Done (00:00:00)
Started preparing deployment > Binding releases. Done (00:00:00)
Started preparing deployment > Binding existing deployment. Done (00:00:00)
Started preparing deployment > Binding resource pools. Done (00:00:00)
Started preparing deployment > Binding stemcells. Done (00:00:00)
Started preparing deployment > Binding templates. Done (00:00:00)
Started preparing deployment > Binding properties. Done (00:00:00)
Started preparing deployment > Binding unallocated VMs. Done (00:00:00)
Started preparing deployment > Binding instance networks. Done (00:00:00)
Done preparing deployment (00:00:00)

Started preparing package compilation > Finding packages to compile. Failed: File exists - /vagrant (00:00:00)

Correct way to restart environment

What is the correct way to restart my bosh-lite environment?

If I do vagrant halt, followed by vagrant up, all my VMs are listed as:

+-----------------+--------------------+---------------+-----+
| Job/index       | State              | Resource Pool | IPs |
+-----------------+--------------------+---------------+-----+
| unknown/unknown | unresponsive agent |               |     |
| unknown/unknown | unresponsive agent |               |     |
| unknown/unknown | unresponsive agent |               |     |
...

If I look at the release details I see:

+--------------+------------+-------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Name         | Versions   | Commit Hash | Jobs                                                                                                                                                                                                                                          |
+--------------+------------+-------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| jens-test    | 144.1-dev  | 9cd263c3+   |                                                                                                                                                                                                                                               |
|              | 144.2-dev* | dc022561    | collector, dashboard, debian_nfs_server, health_manager_next, login, narc, nats, postgres, saml_login, syslog_aggregator, uaa, cloud_controller_ng, dea_logging_agent, dea_next, gorouter, hm9000, loggregator, loggregator_trafficcontroller |
+--------------+------------+-------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

I've tried starting jobs with bosh start gorouter for example but just get back:

Job `gorouter' doesn't exist

What is the correct way to bring up my environment?

source necessary rbenv boilerplate in aws image

Currently, one cannot log in to a running bosh-lite AWS image and run the bosh CLI. Adding the following to the .bashrc would fix this:

 export RBENV_SHELL="bash"
 export RBENV_ROOT="/opt/rbenv"
 export PATH=/opt/rbenv/shims:/opt/rbenv/bin:/opt/rbenv/bin/rbenv:$PATH
 eval "$(rbenv init -)"
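
A sketch of applying those lines idempotently, so re-running provisioning doesn't duplicate them in .bashrc (the helper name add_once is ours, not part of the image):

```shell
# Append a line to a file only if that exact line is not already present.
add_once() {
  grep -qxF "$1" "$2" 2>/dev/null || echo "$1" >> "$2"
}

# e.g., inside the AWS image:
#   add_once 'export RBENV_ROOT="/opt/rbenv"' ~/.bashrc
#   add_once 'eval "$(rbenv init -)"' ~/.bashrc
```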

cf apps within warden cannot access public internet

When I deploy an app into CF within bosh-lite, it must bundle all of its assets (and not use a public buildpack), because it cannot access the public internet:

Following https://github.com/cloudfoundry-community/container-info-buildpack/blob/master/README.md

Preparing to start container-info-test... OK
-----> Downloaded app package (4.0K)
fatal: Unable to look up github.com (port 9418) (Temporary failure in name resolution)
Initialized empty Git repository in /tmp/buildpacks/container-info-buildpack.git/.git/
/var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:115:in `clone_buildpack': Failed to git clone buildpack (RuntimeError)
    from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:91:in `build_pack'
    from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:72:in `block in compile_with_timeout'
    from /usr/lib/ruby/1.9.1/timeout.rb:68:in `timeout'
    from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:71:in `compile_with_timeout'
    from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:53:in `block in stage_application'
    from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:49:in `chdir'
    from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:49:in `stage_application'
    from /var/vcap/packages/dea_next/buildpacks/bin/run:10:in `<main>'
Checking status of app 'container-info-test'...Application failed to stage
