bosh-azure-cpi-release's People

Contributors

abelhu, andyliuliming, anshrupani, aramprice, bgandon, bingosummer, cf-rabbit-bot, cppforlife, cunnie, danielfor, dependabot[bot], dsboulder, gossion, h4xnoodle, happytobi, jpalermo, justin-w, klakin-pivotal, lnguyen, mrosecrance, mssedusch, mvach, nterry, ragaskar, ramonskie, ritazh, rkoster, vicwicker, ystros, zhongyi-zhang

bosh-azure-cpi-release's Issues

Azure CPI hangs talking to Azure for > 2 days

The Azure CPI hangs while talking to Azure. In this instance, two threads were stuck waiting for a response; the wait somehow never triggered a timeout, and we're not sure why.

# ps -auxfw
vcap     46743  0.0  1.9 148508 67016 ?        S<l  Feb17   0:26 resque-1.25.2: Forked 3530 at 1455954009
vcap      3530  0.0  2.0 622160 72404 ?        S<l  Feb20   1:04  \_ resque-1.25.2: Processing normal since 1455954009 [Bosh::Director::Jobs::CloudCheck::ScanAndFix]
vcap      3560  0.0  0.0  17968  2776 ?        S<   Feb20   0:00      \_ /bin/bash /var/vcap/jobs/cpi/bin/cpi
vcap      3565  0.0  1.0 121396 38480 ?        S<l  Feb20   0:01          \_ ruby /var/vcap/packages/bosh_azure_cpi/bin/azure_cpi /var/vcap/jobs/cpi/config/cpi.json
vcap     46748  0.0  1.8 148932 65444 ?        S<l  Feb17   0:25 resque-1.25.2: Forked 3323 at 1455953292
vcap      3323  0.0  1.9 622076 70436 ?        S<l  Feb20   1:04  \_ resque-1.25.2: Processing normal since 1455953292 [Bosh::Director::Jobs::CloudCheck::ScanAndFix]
vcap      3347  0.0  0.0  17968  2804 ?        S<   Feb20   0:00      \_ /bin/bash /var/vcap/jobs/cpi/bin/cpi
vcap      3352  0.0  1.0 121388 38636 ?        S<l  Feb20   0:01          \_ ruby /var/vcap/packages/bosh_azure_cpi/bin/azure_cpi /var/vcap/jobs/cpi/config/cpi.json


root@c89bf596-a6ba-4cd7-a72a-141b993ad445:~# strace -vfFp 3352
Process 3352 attached with 2 threads
[pid  3354] restart_syscall(<... resuming interrupted call ...> <unfinished ...>
[pid  3352] ppoll([{fd=7, events=POLLIN}], 1, NULL, NULL, 8^CProcess 3352 detached
 <detached ...>
Process 3354 detached

root@c89bf596-a6ba-4cd7-a72a-141b993ad445:~# lsof -p 3352
COMMAND  PID USER   FD   TYPE    DEVICE SIZE/OFF      NODE NAME
ruby    3352 vcap  cwd    DIR      8,18     4096   1575493 /var/vcap/data/packages/director/50bccec23f808dbeb00211e81b35f92455d0d11e.1-fcd8ee20028896bb1a9bab5010bc22bd2366a1ab/gem_home/ruby/2.1.0
...
ruby    3352 vcap    7u  IPv4 268242816      0t0       TCP 10.10.0.7:45423->157.55.80.182:https (ESTABLISHED)
root@c89bf596-a6ba-4cd7-a72a-141b993ad445:~#


root@c89bf596-a6ba-4cd7-a72a-141b993ad445:~# gdb /var/vcap/packages/ruby_azure_cpi/bin/ruby 3352
GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.2) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /var/vcap/packages/ruby_azure_cpi/bin/ruby...(no debugging symbols found)...done.
Attaching to program: /var/vcap/data/packages/ruby_azure_cpi/3db71123fb72f5ec81955710b2e89e2cbbd8aca0.1-c75c4d3821bf906b729bf1f5930ae9841cecd87a/bin/ruby, process 3352
Reading symbols from /lib/x86_64-linux-gnu/libpthread.so.0...(no debugging symbols found)...done.
[New LWP 3354]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Loaded symbols for /lib/x86_64-linux-gnu/libpthread.so.0
Reading symbols from /lib/x86_64-linux-gnu/libdl.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libdl.so.2
Reading symbols from /lib/x86_64-linux-gnu/libcrypt.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libcrypt.so.1
Reading symbols from /lib/x86_64-linux-gnu/libm.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libm.so.6
Reading symbols from /lib/x86_64-linux-gnu/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libc.so.6
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/enc/encdb.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/enc/encdb.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/enc/trans/transdb.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/enc/trans/transdb.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/thread.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/thread.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/pathname.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/pathname.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/io/console.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/io/console.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/etc.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/etc.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/digest/sha1.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/digest/sha1.so
Reading symbols from /lib/x86_64-linux-gnu/libcrypto.so.1.0.0...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libcrypto.so.1.0.0
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/digest.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/digest.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/socket.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/socket.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/zlib.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/zlib.so
Reading symbols from /lib/x86_64-linux-gnu/libz.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libz.so.1
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/stringio.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/stringio.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/date_core.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/date_core.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/fcntl.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/fcntl.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/openssl.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/openssl.so
Reading symbols from /lib/x86_64-linux-gnu/libssl.so.1.0.0...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libssl.so.1.0.0
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/strscan.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/strscan.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/psych.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/psych.so
Reading symbols from /var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/extensions/x86_64-linux/2.1.0-static/json-1.8.3/json/ext/parser.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/extensions/x86_64-linux/2.1.0-static/json-1.8.3/json/ext/parser.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/enc/utf_16be.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/enc/utf_16be.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/enc/utf_16le.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/enc/utf_16le.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/enc/utf_32be.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/enc/utf_32be.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/enc/utf_32le.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/enc/utf_32le.so
Reading symbols from /var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/extensions/x86_64-linux/2.1.0-static/json-1.8.3/json/ext/generator.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/extensions/x86_64-linux/2.1.0-static/json-1.8.3/json/ext/generator.so
Reading symbols from /var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/extensions/x86_64-linux/2.1.0-static/yajl-ruby-1.2.1/yajl/yajl.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/extensions/x86_64-linux/2.1.0-static/yajl-ruby-1.2.1/yajl/yajl.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/digest/md5.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/digest/md5.so
Reading symbols from /var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/extensions/x86_64-linux/2.1.0-static/nokogiri-1.6.6.2/nokogiri/nokogiri.so...done.
Loaded symbols for /var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/extensions/x86_64-linux/2.1.0-static/nokogiri-1.6.6.2/nokogiri/nokogiri.so
Reading symbols from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/racc/cparse.so...(no debugging symbols found)...done.
Loaded symbols for /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/racc/cparse.so
Reading symbols from /usr/lib/x86_64-linux-gnu/gconv/CP932.so...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/gconv/CP932.so
Reading symbols from /lib/x86_64-linux-gnu/libnss_files.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libnss_files.so.2
Reading symbols from /lib/x86_64-linux-gnu/libnss_dns.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libnss_dns.so.2
Reading symbols from /lib/x86_64-linux-gnu/libresolv.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libresolv.so.2
0x00007fefc7b341ef in ppoll () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) t a a bt

Thread 2 (Thread 0x7fefc898c700 (LWP 3354)):
#0  0x00007fefc7b3412d in poll () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x000055dff179e72e in timer_thread_sleep ()
#2  0x000055dff179e7d9 in thread_timer ()
#3  0x00007fefc8557182 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#4  0x00007fefc7b4147d in clone () from /lib/x86_64-linux-gnu/libc.so.6

Thread 1 (Thread 0x7fefc8983740 (LWP 3352)):
#0  0x00007fefc7b341ef in ppoll () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x000055dff17a3f53 in rb_wait_for_single_fd ()
#2  0x000055dff17a3bbc in rb_thread_wait_fd_rw ()
#3  0x000055dff17a3beb in rb_thread_wait_fd ()
#4  0x000055dff16705e4 in rb_io_wait_readable ()
#5  0x00007fefc57752fc in ossl_start_ssl () from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/openssl.so
#6  0x00007fefc57753dd in ossl_ssl_connect () from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/x86_64-linux/openssl.so
#7  0x000055dff177d24b in call_cfunc_0 ()
#8  0x000055dff177ddcf in vm_call_cfunc_with_frame ()
#9  0x000055dff177dedc in vm_call_cfunc ()
#10 0x000055dff177e87c in vm_call_method ()
#11 0x000055dff177f10b in vm_call_general ()
#12 0x000055dff17829ae in vm_exec_core ()
#13 0x000055dff1792a67 in vm_exec ()
#14 0x000055dff178b1ed in vm_call0_body ()
#15 0x000055dff178ad60 in vm_call0 ()
#16 0x000055dff178b85b in rb_call0 ()
#17 0x000055dff178c26e in rb_call ()
#18 0x000055dff178cb6a in rb_funcallv ()
#19 0x000055dff164b614 in rb_obj_call_init ()
#20 0x000055dff1699e88 in rb_class_new_instance ()
#21 0x000055dff177d220 in call_cfunc_m1 ()
#22 0x000055dff177ddcf in vm_call_cfunc_with_frame ()
#23 0x000055dff177dedc in vm_call_cfunc ()
#24 0x000055dff177e87c in vm_call_method ()
#25 0x000055dff177f10b in vm_call_general ()
---Type <return> to continue, or q <return> to quit---
#26 0x000055dff17829ae in vm_exec_core ()
#27 0x000055dff1792a67 in vm_exec ()
#28 0x000055dff178b1ed in vm_call0_body ()
#29 0x000055dff178ad60 in vm_call0 ()
#30 0x000055dff178b85b in rb_call0 ()
#31 0x000055dff178c26e in rb_call ()
#32 0x000055dff178cb6a in rb_funcallv ()
#33 0x000055dff164b614 in rb_obj_call_init ()
#34 0x000055dff1699e88 in rb_class_new_instance ()
#35 0x000055dff177d220 in call_cfunc_m1 ()
#36 0x000055dff177ddcf in vm_call_cfunc_with_frame ()
#37 0x000055dff177dedc in vm_call_cfunc ()
#38 0x000055dff177e87c in vm_call_method ()
#39 0x000055dff177f10b in vm_call_general ()
#40 0x000055dff17829ae in vm_exec_core ()
#41 0x000055dff1792a67 in vm_exec ()
#42 0x000055dff17911d0 in invoke_block_from_c ()
#43 0x000055dff1791583 in vm_invoke_proc ()
#44 0x000055dff179163b in rb_vm_invoke_proc ()
#45 0x000055dff164ce65 in proc_call ()
#46 0x000055dff177d220 in call_cfunc_m1 ()
#47 0x000055dff177ddcf in vm_call_cfunc_with_frame ()
#48 0x000055dff177dedc in vm_call_cfunc ()
#49 0x000055dff177e87c in vm_call_method ()
#50 0x000055dff177f10b in vm_call_general ()
#51 0x000055dff17829ae in vm_exec_core ()
#52 0x000055dff1792a67 in vm_exec ()
#53 0x000055dff1793bb0 in rb_iseq_eval_main ()
#54 0x000055dff1648da7 in ruby_exec_internal ()
#55 0x000055dff1648ed0 in ruby_exec_node ()
#56 0x000055dff1648ea3 in ruby_run_node ()
#57 0x000055dff1646f02 in main ()
(gdb)
(gdb) call (void) close(1)
(gdb) call (void) close(2)
(gdb) shell tty
/dev/pts/0
(gdb) call (int) open("/dev/pts/0", 2, 0)
$1 = 1
(gdb) call (int) open("/dev/pts/0", 2, 0)
$2 = 2
(gdb) call (void) rb_backtrace()
    from /var/vcap/packages/bosh_azure_cpi/bin/azure_cpi:34:in `<main>'
    from /var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_cpi-1.3100.0/lib/bosh/cpi/cli.rb:70:in `run'
    from /var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_cpi-1.3100.0/lib/bosh/cpi/cli.rb:70:in `call'
    from /var/vcap/packages/bosh_azure_cpi/bin/azure_cpi:26:in `block in <main>'
    from /var/vcap/packages/bosh_azure_cpi/bin/azure_cpi:26:in `new'
    from /var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:19:in `initialize'
    from /var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:367:in `init_azure'
    from /var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:367:in `new'
    from /var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/table_manager.rb:13:in `initialize'
    from /var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:792:in `get_storage_account_keys_by_name'
    from /var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:1013:in `http_post'
    from /var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:861:in `http_get_response'
    from /var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:835:in `get_token'
    from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:1369:in `request'
    from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:852:in `start'
    from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:863:in `do_start'
    from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:920:in `connect'
    from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/timeout.rb:76:in `timeout'
    from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:920:in `block in connect'
    from /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:920:in `connect'
(gdb) quit
A debugging session is active.

    Inferior 1 [process 3352] will be detached.

Quit anyway? (y or n) y
Detaching from program: /var/vcap/data/packages/ruby_azure_cpi/3db71123fb72f5ec81955710b2e89e2cbbd8aca0.1-c75c4d3821bf906b729bf1f5930ae9841cecd87a/bin/ruby, process 3352
root@c89bf596-a6ba-4cd7-a72a-141b993ad445:~#

root@c89bf596-a6ba-4cd7-a72a-141b993ad445:~# kill 3352
root@c89bf596-a6ba-4cd7-a72a-141b993ad445:~# {"result":null,"error":{"type":"Unknown","message":"SIGTERM","ok_to_retry":false},"log":"Rescued Unknown: SIGTERM. backtrace: /var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:920:in `connect'\n/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:920:in `block in connect'\n/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/timeout.rb:76:in `timeout'\n/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:920:in `connect'\n/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:863:in `do_start'\n/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:852:in `start'\n/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:1369:in `request'\n/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:835:in `get_token'\n/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:861:in `http_get_response'\n/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:1013:in `http_post'\n/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:792:in `get_storage_account_keys_by_name'\n/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/table_manager.rb:13:in `initialize'\n/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:367:in `new'\n/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:367:in `init_azure'\n/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:19:in `initialize'\n/var/vcap/packages/bosh_azure_cpi/bin/azure_cpi:26:in `new'\n/var/vcap/packages/bosh_azure_cpi/bin/azure_cpi:26:in `block in <main>'\n/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_cpi-1.3100.0/lib/bosh/cpi/cli.rb:70:in `call'\n/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_cpi-1.3100.0/lib/bosh/cpi/cli.rb:70:in `run'\n/var/vcap/packages/bosh_azure_cpi/bin/azure_cpi:34:in `<main>'"}


root@c89bf596-a6ba-4cd7-a72a-141b993ad445:~# grep s.version /var/vcap/packages/bosh_azure_cpi/bosh_azure_cpi.gemspec
  s.version       = '2.0.0'
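
For reference, the Ruby-level backtrace above shows the process blocked in Net::HTTP#connect during the SSL handshake, inside a Timeout.timeout call that evidently never fired. A minimal sketch, assuming the token request is made with Net::HTTP (the endpoint below is illustrative, not the CPI's exact URL), of how explicit open/read timeouts keep such a request from hanging indefinitely:

require 'net/http'
require 'uri'

uri = URI('https://login.microsoftonline.com/common/oauth2/token') # illustrative endpoint only

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl      = true
http.open_timeout = 60   # bounds the TCP connect and the SSL handshake
http.read_timeout = 60   # bounds each wait for response data

begin
  response = http.request(Net::HTTP::Post.new(uri.request_uri))
  puts response.code
rescue Net::OpenTimeout, Net::ReadTimeout, Timeout::Error => e
  # With timeouts set, a stalled handshake or an unresponsive peer raises
  # here instead of blocking forever in ppoll().
  warn "request timed out: #{e.class}: #{e.message}"
end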

delete vm does not seem to properly clean up network interfaces

Error 100: #<Bosh::AzureCloud::AzureError: http_put - error: 400 message: {
  "error": {
    "code": "PrivateIPAddressInUse",
    "message": "IP configuration /subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Network/networkInterfaces/xxx/ipConfigurations/ipconfig1 is using the same private IP address 10.10.16.57 as IP configuration /subscriptions/xxx/resourceGroups/titletest-resource/providers/Microsoft.Network/networkInterfaces/xxx/ipConfigurations/ipconfig1.",
    "details": []
  }
}>
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:937:in `check_completion'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:1020:in `http_put'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:632:in `create_network_interface'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/vm_manager.rb:52:in `create'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:95:in `block in create_vm'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_common-1.3100.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:83:in `create_vm'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_cpi-1.3100.0/lib/bosh/cpi/cli.rb:71:in `public_send'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_cpi-1.3100.0/lib/bosh/cpi/cli.rb:71:in `run'
/var/vcap/packages/bosh_azure_cpi/bin/azure_cpi:34:in `<main>'

Should delete_vm be more careful about cleaning up network interfaces?
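
For illustration, a hypothetical sketch of the kind of cleanup being asked about; the client object and every method name below are assumptions loosely modeled on the azure_client2 calls visible in the backtrace, not the CPI's actual API:

# Hypothetical sketch: when deleting a VM, also delete the network interfaces
# created for it so their private IPs are released.
def delete_vm_and_nics(azure_client, instance_id)
  vm = azure_client.get_virtual_machine_by_name(instance_id)     # assumed lookup
  nic_names = vm ? Array(vm[:network_interfaces]).map { |nic| nic[:name] } : []

  azure_client.delete_virtual_machine(instance_id) if vm         # assumed delete call

  nic_names.each do |name|
    # Skipping this step leaves the NIC holding its private IP, which later
    # shows up as PrivateIPAddressInUse when create_vm reuses the address.
    azure_client.delete_network_interface(name)                  # assumed delete call
  end
end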

deploy_bosh.sh fails when creating stemcell in get_storage_account_keys_by_name

I created a new trial account and used the deployment template to create a VM for the bosh installation.

Bosh deploy fails with the following error when attempting to create the stemcell.

CPI 'create_stemcell' method responded with error: CmdError{"type":"Bosh::Clouds::CloudError","message":"http_get_response - retry 0: #\u003cBosh::AzureCloud::AzureError: get_token - http error: 400\u003e
/home/bosh/.bosh_init/installations/8c931220-e6a5-46b0-507f-b360dd201005/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:848:in `get_token'
/home/bosh/.bosh_init/installations/8c931220-e6a5-46b0-507f-b360dd201005/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:869:in `http_get_response'
/home/bosh/.bosh_init/installations/8c931220-e6a5-46b0-507f-b360dd201005/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:1035:in `http_post'
/home/bosh/.bosh_init/installations/8c931220-e6a5-46b0-507f-b360dd201005/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:795:in `get_storage_account_keys_by_name'
/home/bosh/.bosh_init/installations/8c931220-e6a5-46b0-507f-b360dd201005/packages/bosh_azure_cpi/lib/cloud/azure/table_manager.rb:13:in `initialize'

This is a new Azure account and trial subscription. There does appear to be a valid storage account that contains some items in the vids storage container.

Gist for last 1000 lines of run.log: https://gist.github.com/mdcarlson/57633c50c436bafc2bb2

I have deleted the deployment and retried a fresh install with the same results.

Regards,

Mark

Bosh Deploy fails with "Can't find property 'azure.client_secret'" Error

I've tried deploying the template via the Azure portal and the Azure CLI, but I get the same error in both. When I look in the resource group, the client secret isn't displayed, which is expected as it's a secure string. I have also verified that the client secret I'm entering is correct by using it to log in via the Azure CLI. Here is the full error that I'm seeing in the run.log:

main] 2016/03/04 13:20:38 ERROR - Command 'deploy' failed: Installing CPI: Rendering and uploading Jobs: Rendering job templates for installation: Rendering templates for job 'cpi/f2ba877ecb7fdca6c87d2e49a767d269cbae970d': Rendering template src: cpi.json.erb, dst: config/cpi.json: Rendering template src: /home/drjCFTest/.bosh_init/installations/ce0ad0d9-1aa6-44f5-7411-157b2fb30505/tmp/bosh-init-release234418446/extracted_jobs/cpi/templates/cpi.json.erb, dst: /home/drjCFTest/.bosh_init/installations/ce0ad0d9-1aa6-44f5-7411-157b2fb30505/tmp/rendered-jobs107117686/config/cpi.json: Running ruby to render templates: Running command: 'ruby /home/drjCFTest/.bosh_init/installations/ce0ad0d9-1aa6-44f5-7411-157b2fb30505/tmp/erb-renderer068050200/erb-render.rb /home/drjCFTest/.bosh_init/installations/ce0ad0d9-1aa6-44f5-7411-157b2fb30505/tmp/erb-renderer068050200/erb-context.json /home/drjCFTest/.bosh_init/installations/ce0ad0d9-1aa6-44f5-7411-157b2fb30505/tmp/bosh-init-release234418446/extracted_jobs/cpi/templates/cpi.json.erb /home/drjCFTest/.bosh_init/installations/ce0ad0d9-1aa6-44f5-7411-157b2fb30505/tmp/rendered-jobs107117686/config/cpi.json', stdout: '', stderr: '/home/drjCFTest/.bosh_init/installations/ce0ad0d9-1aa6-44f5-7411-157b2fb30505/tmp/erb-renderer068050200/erb-render.rb:180:in `rescue in render': Error filling in template '/home/drjCFTest/.bosh_init/installations/ce0ad0d9-1aa6-44f5-7411-157b2fb30505/tmp/bosh-init-release234418446/extracted_jobs/cpi/templates/cpi.json.erb' for cpi/0 (line 13: #TemplateEvaluationContext::UnknownProperty: Can't find property 'azure.client_secret') (RuntimeError)
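
For context on the error itself: the cpi.json.erb template looks up azure.client_secret through BOSH's property helper, and rendering aborts when the manifest does not define that property. A minimal, self-contained sketch of that mechanism (the TemplateContext class and its p helper below are stand-ins, not BOSH's actual renderer):

require 'erb'

# Stand-in for BOSH's template evaluation context; `p` mimics the property
# lookup that raises "Can't find property ..." when a key is missing.
class TemplateContext
  def initialize(properties)
    @properties = properties
  end

  def p(path)
    path.split('.').reduce(@properties) do |node, key|
      raise "Can't find property '#{path}'" unless node.is_a?(Hash) && node.key?(key)
      node[key]
    end
  end

  def get_binding
    binding
  end
end

template = ERB.new(%q({"azure": {"client_secret": "<%= p('azure.client_secret') %>"}}))

ctx = TemplateContext.new('azure' => { 'client_id' => 'some-client-id' }) # client_secret omitted
template.result(ctx.get_binding) # raises: Can't find property 'azure.client_secret'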

cloudfoundry.blob.core.windows.net no longer exists as a DNS record at all

The Azure storage account cloudfoundry was deleted by someone.
If you followed the template guidance to deploy the dev-box, you need to manually update the links to all three releases in ~/bosh.yml, for example:

---
name: bosh

releases:
- name: bosh
  url: https://bosh.io/d/github.com/cloudfoundry/bosh?v=253
  sha1: 940956a23b642af3bb24b3cac37c4da746d6f9a9
- name: bosh-azure-cpi
  url: https://bosh.io/d/github.com/cloudfoundry-incubator/bosh-azure-cpi-release?v=7
  sha1: 8df7b79458335188a1ecab83cf5ef9a82366baeb
...

resource_pools:
- name: vms
  network: private
  stemcell:
    url: https://bosh.io/d/stemcells/bosh-azure-hyperv-ubuntu-trusty-go_agent?v=3192
    sha1: d096582bddf1771df4194c795edf6a96b90c8190
  cloud_properties:
    instance_type: Standard_D1

CID of VMs is not well named

Currently the helper for generate_instance_id (azure/helpers.rb:56) uses the storage account name as the base name for the VM name. This creates confusion while searching for both storage and VMs by polluting the namespace with the same string across both.

We would prefer that the VMs simply used the UUID, without the storage account name. The prefix adds no value to understanding and clouds search results.
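
For illustration, a sketch of the two naming schemes being compared (not the exact helper in azure/helpers.rb):

require 'securerandom'

# Current scheme (roughly): the VM CID is prefixed with the storage account
# name, so storage accounts and VMs share the same search string.
def generate_instance_id(storage_account_name)
  "#{storage_account_name}-#{SecureRandom.uuid}"
end

# Scheme preferred in this issue: just the UUID, so VM search results are
# not cluttered with storage account names.
def generate_plain_instance_id
  SecureRandom.uuid
end

generate_instance_id('mystorageaccount') # => "mystorageaccount-550e8400-..."
generate_plain_instance_id               # => "550e8400-..."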

Bosh failed to create VM (Error random and flaky)

When running bosh deploy:

I got Error 450002: Timed out pinging to c6f09e6a-2ff1-42f0-bd2e-3d98e6574d34 after 600 seconds

This error occurs with significant frequency, more than 40% of the time for me, while other attempts succeed.
When it happens, I look at the Azure portal: the vnet, subnets and IPs are all placed correctly, and there are no Azure error logs. Pinging the VM from the internal network (a jump host on the same subnet) does not work. Ironically, if the failed VM has a VIP, I can ssh into it through the public VIP; however, the network inside the failed VM cannot route through the subnet gateway. My guess is that the subnet gateway failing on the BOSH-deployed VM is what causes the timeout.

Any thoughts on whether the BOSH side could cause this problem, or is it just a flaky Microsoft Azure network?

stemcell 3163, CPI release 6, bosh release 250

Error using Azure Template

I'm trying to deploy Cloud Foundry. I'm on the step where I'm creating Azure resources using ARM templates. I click on the Deploy To Azure link on this page.

Once I fill in the parameters, it runs for a while and then fails on the initdevbox step with the error below:

{
   "status":"Failed",
   "error":{
      "code":"ResourceDeploymentFailure",
      "message":"The resource operation completed with terminal provisioning state 'Failed'.",
      "details":[
         {
            "code":"VMExtensionProvisioningError",
            "message":"VM has reported a failure when processing extension 'initdevbox'. Error message: \"Script returned an error.\n---stdout---\n2015/11/10 18:08:35 }\n\n---errout---\nw.githubusercontent.com)|199.27.76.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 3721 (3.6K) [text/plain]\nSaving to: 'bosh.yml'\n\n     0K ...                                                   100%  670M=0s\n\n2015-11-10 18:08:34 (670 MB/s) - 'bosh.yml' saved [3721/3721]\n\n--2015-11-10 18:08:34--  https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/bosh-setup//setup_dns.py\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 199.27.76.133\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|199.27.76.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 7406 (7.2K) [text/plain]\nSaving to: 'setup_dns.py'\n\n     0K .......                                               100% 1.25G=0s\n\n2015-11-10 18:08:34 (1.25 GB/s) - 'setup_dns.py' saved [7406/7406]\n\n--2015-11-10 18:08:34--  https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/bosh-setup//create_cert.sh\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 199.27.76.133\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|199.27.76.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 204 [text/plain]\nSaving to: 'create_cert.sh'\n\n     0K                                                       100% 40.5M=0s\n\n2015-11-10 18:08:34 (40.5 MB/s) - 'create_cert.sh' saved [204/204]\n\n--2015-11-10 18:08:34--  https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/bosh-setup//setup_devbox.py\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 199.27.76.133\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|199.27.76.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 4034 (3.9K) [text/plain]\nSaving to: 'setup_devbox.py'\n\n     0K ...                                                   100%  726M=0s\n\n2015-11-10 18:08:34 (726 MB/s) - 'setup_devbox.py' saved [4034/4034]\n\n--2015-11-10 18:08:34--  https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/bosh-setup//init.sh\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 199.27.76.133\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|199.27.76.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 1024 (1.0K) [text/plain]\nSaving to: 'init.sh'\n\n     0K .                                                     100%  212M=0s\n\n2015-11-10 18:08:34 (212 MB/s) - 'init.sh' saved [1024/1024]\n\n--2015-11-10 18:08:34--  https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/bosh-setup//deploy_bosh.sh\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 199.27.76.133\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|199.27.76.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 110 [text/plain]\nSaving to: 'deploy_bosh.sh'\n\n     0K                                                       100% 25.0M=0s\n\n2015-11-10 18:08:34 (25.0 MB/s) - 'deploy_bosh.sh' saved [110/110]\n\n--2015-11-10 18:08:34--  https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/bosh-setup//98-msft-love-cf\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 
199.27.76.133\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|199.27.76.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 407 [text/plain]\nSaving to: '98-msft-love-cf'\n\n     0K                                                       100% 86.3M=0s\n\n2015-11-10 18:08:35 (86.3 MB/s) - '98-msft-love-cf' saved [407/407]\n\nTraceback (most recent call last):\n  File \"setup_devbox.py\", line 31, in <module>\n    blob_service.create_container('bosh')\n  File \"/var/lib/waagent/Microsoft.OSTCExtensions.CustomScriptForLinux-1.2.2.0/azure/storage/blobservice.py\", line 190, in create_container\n    _dont_fail_on_exist(ex)\n  File \"/var/lib/waagent/Microsoft.OSTCExtensions.CustomScriptForLinux-1.2.2.0/azure/__init__.py\", line 891, in _dont_fail_on_exist\n    raise error\nazure.WindowsAzureError\n\n\"."
         }
      ]
   }
}

Is there something wrong with the template or might it be something I'm not doing right when setting values for the template?

Following the guidance docs, uploading the stemcell fails

I got to this doc: https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/blob/master/docs/get-started/deploy-bosh-using-arm-templates.md

I deployed my dev box by clicking the button. I left tenantID as TENANT-ID and clientID as CLIENT-ID, and set clientSecret to some new secret I made up. The dev box provisioned fine, and then I tried to run ./deploy_bosh.sh. I did not update ~/bosh.yml because it's not clear what values I need to change, or what they should be changed to.

The error I'm seeing is:

agupta@devbox:~$ ./deploy_bosh.sh
Deployment manifest: '/home/agupta/bosh.yml'
Deployment state: '/home/agupta/bosh-state.json'

Started validating
  Downloading release 'bosh'... Skipped [Found in local cache] (00:00:00)
  Validating release 'bosh'... Finished (00:00:03)
  Downloading release 'bosh-azure-cpi'... Skipped [Found in local cache] (00:00:00)
  Validating release 'bosh-azure-cpi'... Finished (00:00:00)
  Validating cpi release... Finished (00:00:00)
  Validating deployment manifest... Finished (00:00:00)
  Downloading stemcell... Skipped [Found in local cache] (00:00:00)
  Validating stemcell... Finished (00:00:21)
Finished validating (00:00:26)

Started installing CPI
  Compiling package 'ruby_azure_cpi/3db71123fb72f5ec81955710b2e89e2cbbd8aca0'... Finished (00:00:00)
  Compiling package 'bosh_azure_cpi/5985e1e82c78fadb5f2c951f319012403ee04fd8'... Finished (00:00:00)
  Installing packages... Finished (00:00:11)
  Rendering job templates... Finished (00:00:03)
  Installing job 'cpi'... Finished (00:00:00)
Finished installing CPI (00:00:15)

Starting registry... Finished (00:00:00)
Uploading stemcell 'bosh-azure-hyperv-ubuntu-trusty-go_agent/0000'... Failed (00:00:01)
Stopping registry... Finished (00:00:00)
Cleaning up rendered CPI jobs... Finished (00:00:00)

Command 'deploy' failed:
  creating stemcell (bosh-azure-hyperv-ubuntu-trusty-go_agent 0000):
    CPI 'create_stemcell' method responded with error: CmdError{"type":"Bosh::Clouds::CloudError","message":"http_get_response - get_token - http error: 400\n/home/agupta/.bosh_init/installations/2a344a15-6309-449a-5e8c-266083aa014f/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:842:in `get_token'\n/home/agupta/.bosh_init/installations/2a344a15-6309-449a-5e8c-266083aa014f/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:861:in `http_get_response'\n/home/agupta/.bosh_init/installations/2a344a15-6309-449a-5e8c-266083aa014f/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:1013:in `http_post'\n/home/agupta/.bosh_init/installations/2a344a15-6309-449a-5e8c-266083aa014f/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:792:in `get_storage_account_keys_by_name'\n/home/agupta/.bosh_init/installations/2a344a15-6309-449a-5e8c-266083aa014f/packages/bosh_azure_cpi/lib/cloud/azure/table_manager.rb:13:in `initialize'\n/home/agupta/.bosh_init/installations/2a344a15-6309-449a-5e8c-266083aa014f/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:367:in `new'\n/home/agupta/.bosh_init/installations/2a344a15-6309-449a-5e8c-266083aa014f/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:367:in `init_azure'\n/home/agupta/.bosh_init/installations/2a344a15-6309-449a-5e8c-266083aa014f/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:19:in `initialize'\n/home/agupta/.bosh_init/installations/2a344a15-6309-449a-5e8c-266083aa014f/packages/bosh_azure_cpi/bin/azure_cpi:26:in `new'\n/home/agupta/.bosh_init/installations/2a344a15-6309-449a-5e8c-266083aa014f/packages/bosh_azure_cpi/bin/azure_cpi:26:in `block in \u003cmain\u003e'\n/home/agupta/.bosh_init/installations/2a344a15-6309-449a-5e8c-266083aa014f/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_cpi-1.3100.0/lib/bosh/cpi/cli.rb:70:in `call'\n/home/agupta/.bosh_init/installations/2a344a15-6309-449a-5e8c-266083aa014f/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_cpi-1.3100.0/lib/bosh/cpi/cli.rb:70:in `run'\n/home/agupta/.bosh_init/installations/2a344a15-6309-449a-5e8c-266083aa014f/packages/bosh_azure_cpi/bin/azure_cpi:34:in `\u003cmain\u003e'","ok_to_retry":false}

When I look at run.log, I see:

[Cmd Runner] 2015/12/25 01:49:18 DEBUG - Stderr: I, [2015-12-25T01:49:17.892396 #61489]  INFO -- : http_post - trying to post https://management.azure.com//subscriptions/<REDACTED>/resourceGroups/<REDACTED>/providers/Microsoft.Storage/storageAccounts/<REDACTED>/listKeys?api-version=2015-05-01-preview
I, [2015-12-25T01:49:17.892683 #61489]  INFO -- : get_token - trying to get/refresh Azure authentication token
E, [2015-12-25T01:49:18.064396 #61489] ERROR -- : http_get_response - get_token - http error: 400

When I try to curl the URL, I get a 401 error because I'm not setting the Authorization headers, which makes sense. But the deploy is getting 400, so I assume it's setting the Authorization headers fine, and there's something else malformed about the request, but I don't know what it is, and can't see the request payload in any log output.

I wouldn't be surprised if my bosh.yml manifest needs some values changed, but I don't know what would need to be changed. The docs say:

If you leave TENANT-ID, CLIENT-ID, CLIENT-SECRET at default values in section 1, you need to update these three properties in ~/bosh.yml.

But update the values to what?
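
Since get_token is the call returning 400, one way to narrow this down is to request a token directly with the same service principal values that bosh.yml will use. A sketch assuming the standard Azure AD client-credentials endpoint; the placeholder values must be replaced, and leaving them as literal TENANT-ID / CLIENT-ID placeholders will typically produce exactly this kind of 400:

require 'net/http'
require 'uri'

# Values copied from ~/bosh.yml (placeholders shown here; replace them).
tenant_id     = 'TENANT-ID'
client_id     = 'CLIENT-ID'
client_secret = 'CLIENT-SECRET'

uri  = URI("https://login.microsoftonline.com/#{tenant_id}/oauth2/token")
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true

request = Net::HTTP::Post.new(uri.request_uri)
request.set_form_data(
  'grant_type'    => 'client_credentials',
  'client_id'     => client_id,
  'client_secret' => client_secret,
  'resource'      => 'https://management.azure.com/'   # ARM resource the CPI talks to
)

response = http.request(request)
puts response.code                                          # 200 means the service principal is valid
puts response.body unless response.is_a?(Net::HTTPSuccess)  # the error body explains the 400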

mount point /var/vcap/data was based on the ephemeral disk (/dev/sdb2)

I'm trying to build Cloud Foundry, and I created VMs (bosh, cloudfoundry, devbox) using bosh and bosh_init on Windows Azure.
The deployment seems successful; bosh and the cf CLI work fine.
However, /var/vcap/data is backed by the ephemeral disk (/dev/sdb2) in the Cloud Foundry and BOSH VMs, and /var/vcap/data was removed when the VMs were deallocated.

So when I restarted the VMs, the bosh-director and Cloud Foundry jobs failed to start.
Is this the expected behavior?

As a workaround, I archived /var/vcap/data after the deploy and restored the data after the VM restart.
Do you have any better ideas?

vcap@5ea67f10-f0ac-4afc-acc3-95b9de8aa26f:~$ sudo monit status
/var/vcap/monit/job/0019_nats.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/nats/bin/nats_ctl'
/var/vcap/monit/job/0019_nats.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/nats/bin/nats_ctl'
/var/vcap/monit/job/0018_nats_stream_forwarder.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/nats_stream_forwarder/bin/nats_stream_forwarder_ctl'
/var/vcap/monit/job/0018_nats_stream_forwarder.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/nats_stream_forwarder/bin/nats_stream_forwarder_ctl'
/var/vcap/monit/job/0017_metron_agent.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/metron_agent/bin/metron_agent_ctl'
/var/vcap/monit/job/0017_metron_agent.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/metron_agent/bin/metron_agent_ctl'
/var/vcap/monit/job/0016_etcd.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/etcd/bin/etcd_ctl'
/var/vcap/monit/job/0016_etcd.monitrc:5: Warning: the executable does not exist '/var/vcap/jobs/etcd/bin/etcd_ctl'
/var/vcap/monit/job/0015_etcd_metrics_server.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/etcd_metrics_server/bin/etcd_metrics_server_ctl'
/var/vcap/monit/job/0015_etcd_metrics_server.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/etcd_metrics_server/bin/etcd_metrics_server_ctl'
/var/vcap/monit/job/0014_debian_nfs_server.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/debian_nfs_server/bin/rpc_nfsd_ctl'
/var/vcap/monit/job/0014_debian_nfs_server.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/debian_nfs_server/bin/rpc_nfsd_ctl'
/var/vcap/monit/job/0014_debian_nfs_server.monitrc:9: Warning: the executable does not exist '/var/vcap/jobs/debian_nfs_server/bin/rpc_mountd_ctl'
/var/vcap/monit/job/0014_debian_nfs_server.monitrc:10: Warning: the executable does not exist '/var/vcap/jobs/debian_nfs_server/bin/rpc_mountd_ctl'
/var/vcap/monit/job/0013_postgres.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/postgres/bin/postgres_ctl'
/var/vcap/monit/job/0013_postgres.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/postgres/bin/postgres_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:5: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_ng_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:6: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_ng_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:10: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/restart_drain'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:11: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/restart_drain'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:20: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_worker_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:21: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_worker_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:33: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_worker_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:34: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_worker_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:47: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/nginx_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:48: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/nginx_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:56: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_migration_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:57: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_migration_ctl'
/var/vcap/monit/job/0011_cloud_controller_worker.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_worker/bin/cloud_controller_worker_ctl'
/var/vcap/monit/job/0011_cloud_controller_worker.monitrc:5: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_worker/bin/cloud_controller_worker_ctl'
/var/vcap/monit/job/0010_cloud_controller_clock.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_clock/bin/cloud_controller_clock_ctl'
/var/vcap/monit/job/0010_cloud_controller_clock.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_clock/bin/cloud_controller_clock_ctl'
/var/vcap/monit/job/0014_debian_nfs_server.monitrc:9: Warning: the executable does not exist '/var/vcap/jobs/debian_nfs_server/bin/rpc_mountd_ctl'
/var/vcap/monit/job/0014_debian_nfs_server.monitrc:10: Warning: the executable does not exist '/var/vcap/jobs/debian_nfs_server/bin/rpc_mountd_ctl'
/var/vcap/monit/job/0013_postgres.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/postgres/bin/postgres_ctl'
/var/vcap/monit/job/0013_postgres.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/postgres/bin/postgres_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:5: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_ng_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:6: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_ng_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:10: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/restart_drain'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:11: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/restart_drain'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:20: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_worker_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:21: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_worker_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:33: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_worker_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:34: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_worker_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:47: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/nginx_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:48: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/nginx_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:56: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_migration_ctl'
/var/vcap/monit/job/0012_cloud_controller_ng.monitrc:57: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_ng/bin/cloud_controller_migration_ctl'
/var/vcap/monit/job/0011_cloud_controller_worker.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_worker/bin/cloud_controller_worker_ctl'
/var/vcap/monit/job/0011_cloud_controller_worker.monitrc:5: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_worker/bin/cloud_controller_worker_ctl'
/var/vcap/monit/job/0010_cloud_controller_clock.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_clock/bin/cloud_controller_clock_ctl'
/var/vcap/monit/job/0010_cloud_controller_clock.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_clock/bin/cloud_controller_clock_ctl'
/var/vcap/monit/job/0009_nfs_mounter.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/nfs_mounter/bin/nfs_mounter_ctl'
/var/vcap/monit/job/0009_nfs_mounter.monitrc:5: Warning: the executable does not exist '/var/vcap/jobs/nfs_mounter/bin/nfs_mounter_ctl'
/var/vcap/monit/job/0008_route_registrar.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/route_registrar/bin/route_registrar_ctl'
/var/vcap/monit/job/0008_route_registrar.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/route_registrar/bin/route_registrar_ctl'
/var/vcap/monit/job/0007_haproxy.monitrc:5: Warning: the executable does not exist '/var/vcap/jobs/haproxy/bin/haproxy_ctl'
/var/vcap/monit/job/0007_haproxy.monitrc:6: Warning: the executable does not exist '/var/vcap/jobs/haproxy/bin/haproxy_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_listener_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:5: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_listener_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:10: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_fetcher_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:12: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_fetcher_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:17: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_analyzer_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:19: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_analyzer_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:24: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_sender_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:26: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_sender_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:31: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_metrics_server_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:33: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_metrics_server_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:38: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_api_server_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:40: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_api_server_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:45: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_evacuator_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:47: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_evacuator_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:52: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_shredder_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:54: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_shredder_ctl'
/var/vcap/monit/job/0005_doppler.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/doppler/bin/doppler_ctl'
/var/vcap/monit/job/0005_doppler.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/doppler/bin/doppler_ctl'
/var/vcap/monit/job/0004_loggregator_trafficcontroller.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/loggregator_trafficcontroller/bin/loggregator_trafficcontroller_ctl'
/var/vcap/monit/job/0010_cloud_controller_clock.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/cloud_controller_clock/bin/cloud_controller_clock_ctl'
/var/vcap/monit/job/0009_nfs_mounter.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/nfs_mounter/bin/nfs_mounter_ctl'
/var/vcap/monit/job/0009_nfs_mounter.monitrc:5: Warning: the executable does not exist '/var/vcap/jobs/nfs_mounter/bin/nfs_mounter_ctl'
/var/vcap/monit/job/0008_route_registrar.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/route_registrar/bin/route_registrar_ctl'
/var/vcap/monit/job/0008_route_registrar.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/route_registrar/bin/route_registrar_ctl'
/var/vcap/monit/job/0007_haproxy.monitrc:5: Warning: the executable does not exist '/var/vcap/jobs/haproxy/bin/haproxy_ctl'
/var/vcap/monit/job/0007_haproxy.monitrc:6: Warning: the executable does not exist '/var/vcap/jobs/haproxy/bin/haproxy_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_listener_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:5: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_listener_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:10: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_fetcher_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:12: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_fetcher_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:17: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_analyzer_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:19: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_analyzer_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:24: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_sender_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:26: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_sender_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:31: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_metrics_server_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:33: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_metrics_server_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:38: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_api_server_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:40: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_api_server_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:45: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_evacuator_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:47: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_evacuator_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:52: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_shredder_ctl'
/var/vcap/monit/job/0006_hm9000.monitrc:54: Warning: the executable does not exist '/var/vcap/jobs/hm9000/bin/hm9000_shredder_ctl'
/var/vcap/monit/job/0005_doppler.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/doppler/bin/doppler_ctl'
/var/vcap/monit/job/0005_doppler.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/doppler/bin/doppler_ctl'
/var/vcap/monit/job/0004_loggregator_trafficcontroller.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/loggregator_trafficcontroller/bin/loggregator_trafficcontroller_ctl'
/var/vcap/monit/job/0004_loggregator_trafficcontroller.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/loggregator_trafficcontroller/bin/loggregator_trafficcontroller_ctl'
/var/vcap/monit/job/0003_uaa.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/uaa/bin/uaa_ctl'
/var/vcap/monit/job/0003_uaa.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/uaa/bin/uaa_ctl'
/var/vcap/monit/job/0002_gorouter.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/gorouter/bin/gorouter_ctl'
/var/vcap/monit/job/0002_gorouter.monitrc:5: Warning: the executable does not exist '/var/vcap/jobs/gorouter/bin/gorouter_ctl'
/var/vcap/monit/job/0001_dea_next.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/dea_next/bin/warden_ctl'
/var/vcap/monit/job/0001_dea_next.monitrc:5: Warning: the executable does not exist '/var/vcap/jobs/dea_next/bin/warden_ctl'
/var/vcap/monit/job/0001_dea_next.monitrc:17: Warning: the executable does not exist '/var/vcap/jobs/dea_next/bin/dea_ctl'
/var/vcap/monit/job/0001_dea_next.monitrc:18: Warning: the executable does not exist '/var/vcap/jobs/dea_next/bin/dea_ctl'
/var/vcap/monit/job/0001_dea_next.monitrc:24: Warning: the executable does not exist '/var/vcap/jobs/dea_next/bin/dir_server_ctl'
/var/vcap/monit/job/0001_dea_next.monitrc:25: Warning: the executable does not exist '/var/vcap/jobs/dea_next/bin/dir_server_ctl'
/var/vcap/monit/job/0000_dea_logging_agent.monitrc:3: Warning: the executable does not exist '/var/vcap/jobs/dea_logging_agent/bin/dea_logging_agent_ctl'
/var/vcap/monit/job/0000_dea_logging_agent.monitrc:4: Warning: the executable does not exist '/var/vcap/jobs/dea_logging_agent/bin/dea_logging_agent_ctl'
The Monit daemon 5.2.4 uptime: 50m 

Process 'nats'
  status                            Execution failed
  monitoring status                 monitored
  data collected                    Thu Jan 21 01:15:40 2016

Process 'nats_stream_forwarder'
  status                            initializing
  monitoring status                 initializing
  data collected                    Thu Jan 21 01:15:40 2016

Process 'metron_agent'
  status                            Execution failed
  monitoring status                 monitored
  data collected                    Thu Jan 21 01:16:10 2016

Process 'etcd'
  status                            Execution failed
  monitoring status                 monitored
  data collected                    Thu Jan 21 01:17:10 2016

Process 'etcd_metrics_server'
  status                            Execution failed
  monitoring status                 monitored
  data collected                    Thu Jan 21 01:17:40 2016

Process 'rpc_nfsd'
  status                            Execution failed
  monitoring status                 monitored
  data collected                    Thu Jan 21 01:18:40 2016

Process 'rpc_mountd'
  status                            initializing
  monitoring status                 initializing
  data collected                    Thu Jan 21 01:18:40 2016

Process 'postgres'
  status                            not monitored
  monitoring status                 not monitored
  data collected                    Thu Jan 21 00:56:00 2016

Process 'cloud_controller_ng'
  status                            Execution failed
  monitoring status                 monitored
  data collected                    Thu Jan 21 00:58:00 2016

Process 'cloud_controller_worker_local_1'
  status                            initializing
  monitoring status                 initializing
  data collected                    Thu Jan 21 00:58:00 2016

Process 'cloud_controller_worker_local_2'
  status                            initializing
  monitoring status                 initializing
  data collected                    Thu Jan 21 00:58:00 2016

Process 'nginx_cc'
  status                            initializing
  monitoring status                 initializing
  data collected                    Thu Jan 21 00:58:00 2016

Process 'cloud_controller_migration'
  status                            Execution failed
  monitoring status                 monitored
  data collected                    Thu Jan 21 00:58:30 2016

Process 'cloud_controller_worker_1'
  status                            Execution failed
  monitoring status                 monitored
  data collected                    Thu Jan 21 00:59:00 2016

Process 'cloud_controller_clock'
  status                            Execution failed
  monitoring status                 monitored
  data collected                    Thu Jan 21 00:59:30 2016

File 'nfs_mounter'
  status                            Does not exist
  monitoring status                 monitored
  data collected                    Thu Jan 21 00:59:30 2016

Process 'route_registrar'
  status                            Execution failed
  monitoring status                 monitored
  data collected                    Thu Jan 21 01:00:00 2016

Process 'haproxy'
  status                            Execution failed
  monitoring status                 monitored
  data collected                    Thu Jan 21 01:00:30 2016

Process 'hm9000_listener'
  status                            Execution failed
  monitoring status                 monitored
  data collected                    Thu Jan 21 01:01:30 2016

Process 'hm9000_fetcher'
  status                            Execution failed
  monitoring status                 monitored
  data collected                    Thu Jan 21 01:02:30 2016

Unknown CPI "'Bosh::AzureCloud::AzureError' " error encountered during Bosh Deployment

During our BOSH deployment on Azure we encountered the following error message:
Failed: Unknown CPI error 'Bosh::AzureCloud::AzureError' with message 'status: Failed
http code: 200
request id: 2fd005bc-c0fd-496a-bfef-6e65d5e56d73
error:
{"code"=>"InternalExecutionError", "message"=>"An internal execution error occurred."}' (00:01:34)

Error 100: Unknown CPI error 'Bosh::AzureCloud::AzureError' with message 'status: Failed
http code: 200
request id: 2fd005bc-c0fd-496a-bfef-6e65d5e56d73
error:
{"code"=>"InternalExecutionError", "message"=>"An internal execution error occurred."}'

We can provide additional log files in a non-public forum.

Thanks!

changed default system domain to custom but cf push still uses cf.azurelovecf.com

hello,

I suspect there might be a problem: the default system domain still shows up in cf push:

tintoverano@tintodev:~/development/simple-todos-angular$ cf push simple-todos-angular -b https://github.com/cloudfoundry-community/cf-meteor-buildpack.git --no-start
Creating app simple-todos-angular in org default_organization / space meteor as admin...
OK

Creating route simple-todos-angular.cf.azurelovecf.com...
OK

Binding simple-todos-angular.cf.azurelovecf.com to simple-todos-angular...
OK

Uploading simple-todos-angular...
Uploading app files from: /home/tintoverano/development/simple-todos-angular
Uploading 4.9K, 10 files
Done uploading               
OK

update: login was ok

Targeted org default_organization

Targeted space meteor


API endpoint:   https://api.cf.protact.me (API version: 2.44.0)   
User:           admin   
Org:            default_organization   
Space:          meteor   

what should I do about the route creation and binding?

thanks,

zoltán

Errors deploying

We were trying to do a stemcell roll and encountered the following errors:

Started compiling packages > switchboard/941dfdd61e92f3f45af7a95df6b8aa33fba2e661
Failed compiling packages > golang1.3/e4b65bcb478d9bea1f9c92042346539713551a4a: status: Failed
http code: 200
request id: b34bda50-c66e-46d2-838c-cd4e533ed56c
error:
{"code"=>"NetworkingInternalOperationError", "message"=>"Unknown network allocation error."}
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:924:in `check_completion'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:976:in `http_put'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:187:in `create_virtual_machine'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/vm_manager.rb:89:in `create'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:90:in `block in create_vm'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_common-1.3100.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:83:in `create_vm'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_cpi-1.3100.0/lib/bosh/cpi/cli.rb:71:in `public_send'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_cpi-1.3100.0/lib/bosh/cpi/cli.rb:71:in `run'
/var/vcap/packages/bosh_azure_cpi/bin/azure_cpi:34:in `<main>' (00:02:25)

Error 100: status: Failed
http code: 200
request id: b34bda50-c66e-46d2-838c-cd4e533ed56c
error:
{"code"=>"NetworkingInternalOperationError", "message"=>"Unknown network allocation error."}
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:924:in `check_completion'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:976:in `http_put'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:187:in `create_virtual_machine'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/vm_manager.rb:89:in `create'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:90:in `block in create_vm'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_common-1.3100.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:83:in `create_vm'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_cpi-1.3100.0/lib/bosh/cpi/cli.rb:71:in `public_send'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_cpi-1.3100.0/lib/bosh/cpi/cli.rb:71:in `run'
/var/vcap/packages/bosh_azure_cpi/bin/azure_cpi:34:in `<main>'

Task 1110 error

Azure template issues

(Note https://github.com/Azure/azure-quickstart-templates/tree/master/bosh-setup sent me here for feedback).

The jumpbox's 30 GB "/" drive is too small - downloading a few copies of Elastic Runtime distributions and services fills the drive. 100 GB would be better.

A /24 network is too small - a large Cloud Foundry deployment will easily fill it, so it needs to be a /22 (a /24 yields only 256 addresses, 251 usable after Azure reserves 5 per subnet, while a /22 yields 1024).

It'd also be nice to have these installed by default:

  • cf cli
  • uaac
  • mysqlclient and pgclient / pgdump
  • ag-silversearcher

unable to access ci: ssl cert revoked error

Hi,

I'm experiencing "sec_error_revoked_certificate" error (FF) when trying to connect to https://bosh-azure-cpi.ci.cf-app.com/ and "NET::ERR_CERT_REVOKED" (chrome).

Both with Firefox 43.0 and Chrome 47.0.2526.106 m, running Windows 7 with the standard built-in consumer certs.

Is there an available workaround? I started looking at the blog post below, but nothing obvious stood out yet.
https://productforums.google.com/forum/?hl=en#!topic/chrome/4P2O3M0zGfg

Thanks in advance,

Guillaume.

Azure CPI reports "disk named XXX already exists" when trying to re-attach disk.

When running a bosh cloudcheck and an unattached disk is found, attempting to remediate the situation by re-attaching the disk results in the nonsensical error "A disk named 'XXX' already exists", where XXX is the name of the identified disk. This operation should not entail the creation of a new disk.

Example Incident:

Performing cloud check...

Director task 304
  Started scanning 4 vms
  Started scanning 4 vms > Checking VM states. Done (00:00:10)
  Started scanning 4 vms > 4 OK, 0 unresponsive, 0 missing, 0 unbound, 0 out of sync. Done (00:00:00)
     Done scanning 4 vms (00:00:10)

  Started scanning 3 persistent disks
  Started scanning 3 persistent disks > Looking for inactive disks. Done (00:00:18)
  Started scanning 3 persistent disks > 2 OK, 0 missing, 0 inactive, 1 mount-info mismatch. Done (00:00:00)
     Done scanning 3 persistent disks (00:00:18)

Task 304 done

Started     2016-01-05 18:06:40 UTC
Finished    2016-01-05 18:07:08 UTC
Duration    00:00:28

Scan is complete, checking if any problems found...

Found 1 problem

Problem 1 of 1: Inconsistent mount information:
Record shows that disk '[REDACTED]' should be mounted on [REDACTED].
However it is currently :
    Not mounted in any VM.
  1. Ignore
  2. Reattach disk to instance
  3. Reattach disk and reboot instance
Please choose a resolution [1 - 3]: 3

Below is the list of resolutions you've provided
Please make sure everything is fine and confirm your changes

  1. Inconsistent mount information:
Record shows that disk '[REDACTED]' should be mounted on [REDACTED].
However it is currently :
    Not mounted in any VM
     Reattach disk and reboot instance

Apply resolutions? (type 'yes' to continue): yes
Applying resolutions...

Director task 305
  Started applying problem resolutions > mount_info_mismatch 12: Reattach disk and reboot instance. Failed: Unknown CPI error 'Bosh::AzureCloud::AzureError' with message 'http_put - error: 400 message: {
  "error": {
    "code": "InvalidParameter",
    "target": "dataDisk.name",
    "message": "A disk named '[REDACTED]' already exists."
  }
}' (00:00:03)

Task 305 done

InternalExecutionError

  Started updating job etcd_server-partition-e4fffe2eeec37a9cd821 > etcd_server-partition-e4fffe2eeec37a9cd821/0 (canary)
  Started updating job nats-partition-e4fffe2eeec37a9cd821 > nats-partition-e4fffe2eeec37a9cd821/0 (canary)
  Started updating job nfs_server-partition-e4fffe2eeec37a9cd821 > nfs_server-partition-e4fffe2eeec37a9cd821/0 (canary)
  Started updating job mysql_proxy-partition-e4fffe2eeec37a9cd821 > mysql_proxy-partition-e4fffe2eeec37a9cd821/0 (canary)
  Started updating job consul_server-partition-e4fffe2eeec37a9cd821 > consul_server-partition-e4fffe2eeec37a9cd821/0 (canary)
  Started updating job mysql-partition-e4fffe2eeec37a9cd821 > mysql-partition-e4fffe2eeec37a9cd821/0 (canary)
  Started updating job router-partition-e4fffe2eeec37a9cd821 > router-partition-e4fffe2eeec37a9cd821/0 (canary)
  Started updating job diego_database-partition-e4fffe2eeec37a9cd821 > diego_database-partition-e4fffe2eeec37a9cd821/0 (canary)
   Failed updating job mysql-partition-e4fffe2eeec37a9cd821 > mysql-partition-e4fffe2eeec37a9cd821/0 (canary): Unknown CPI error 'Bosh::AzureCloud::AzureError' with message 'status: Failed
http code: 200
request id: 78e635f6-1179-4e0f-a440-daf8c93c3a88
error:
{"code"=>"InternalExecutionError", "message"=>"An internal execution error occurred."}' (00:01:11)^C

This error commonly occurs when updating jobs that require re-attaching a persistent disk. I've been seeing it for several hours now (westus, Standard_D's and D_V2's).

I've found that deleting the VM manually and deleting the VM reference via bosh cck helps, but I have to rerun bosh deploy a lot before they all re-attach.

cpi should raise an error if instance_type is not set for create_vm call

Right now it doesn't, so Azure gets confused and shows the following error message.

/var/vcap/packages/bosh_azure_cpi/bin/azure_cpi:34:in `<main>' (00:00:43)

Error 100: #<Bosh::AzureCloud::AzureError: http_put - error: 400 message: {
  "error": {
    "code": "InvalidParameter",
    "target": "vmSize",
    "message": "The value of parameter vmSize is invalid."
  }
}>

Let's show an error that says the instance_type key must be provided.
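A minimal sketch of the kind of guard meant here, assuming resource_pool is the cloud_properties hash create_vm receives and that the code runs inside the CPI where Bosh::Clouds::CloudError is already loaded (the helper name is illustrative, not the CPI's actual code):

# Illustrative only: fail fast with a clear message instead of letting Azure
# reject the request with "The value of parameter vmSize is invalid".
def validate_instance_type!(resource_pool)
  instance_type = resource_pool['instance_type']
  if instance_type.nil? || instance_type.to_s.strip.empty?
    raise Bosh::Clouds::CloudError,
          "Missing `instance_type' in the resource pool's cloud_properties (e.g. Standard_D1)"
  end
  instance_type
end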

user should be able to set storage_account_name on a persistent disk

so that we can make sure a persistent disk uses a specific storage account.

Users will say something like this:

persistent_disk_pools:
- name: large
  cloud_properties:
    storage_account_name: premiumsotre

and then CPI will receive create_disk CPI call with cloud_properties filled in (https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/blob/master/src/bosh_azure_cpi/lib/cloud/azure/cloud.rb#L207).

storage_account_name on disk should take precedence over any other configuration (global or VM one).
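A rough sketch of that precedence, assuming cloud_properties is the hash create_disk receives and azure_properties is the CPI's global config hash (the helper name and the VM-level argument are illustrative, not the CPI's actual code):

# Illustrative precedence only: the disk-level setting wins, then the VM's
# storage account, then the global azure.storage_account_name from the config.
def storage_account_name_for_disk(cloud_properties, vm_storage_account_name, azure_properties)
  (cloud_properties || {})['storage_account_name'] ||
    vm_storage_account_name ||
    azure_properties['storage_account_name']
end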

cc @cppforlife

Error when trying to upgrade from BOSH Azure CPI v7 to v8 or v9

Hello, when we try to upgrade from CPI v7 to v8 or v9 we get the following error in the run log, even though we replaced the "ssh_certificate" line with "ssh_public_key" in the bosh.yml file:


[Cmd Runner] 2016/04/04 14:49:09 DEBUG - Running command: ruby /tmp/erb-renderer775012399/erb-render.rb /tmp/erb-renderer775012399/erb-context.json /tmp/bosh-init-release659996273/extracted_jobs/cpi/templates/cpi.json.erb /tmp/rendered-jobs895232053/config/cpi.json
[Cmd Runner] 2016/04/04 14:49:09 DEBUG - Stdout:
[Cmd Runner] 2016/04/04 14:49:09 DEBUG - Stderr: /tmp/erb-renderer775012399/erb-render.rb:180:in `rescue in render': Error filling in template '/tmp/bosh-init-release659996273/extracted_jobs/cpi/templates/cpi.json.erb' for cpi/0 (line 43: #<RuntimeError: ssh_certificate has been replaced by ssh_public_key. Please read https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/blob/master/src/bosh_azure_cpi/README.md.>) (RuntimeError)
	from /tmp/erb-renderer775012399/erb-render.rb:166:in `render'
	from /tmp/erb-renderer775012399/erb-render.rb:191:in `<main>'
[Cmd Runner] 2016/04/04 14:49:09 DEBUG - Successful: false (1)
[File System] 2016/04/04 14:49:09 DEBUG - Remove all /tmp/erb-renderer775012399
[File System] 2016/04/04 14:49:09 DEBUG - Remove all /tmp/rendered-jobs895232053
[File System] 2016/04/04 14:49:09 DEBUG - Remove all /tmp/stemcell-manager690346780
[File System] 2016/04/04 14:49:09 DEBUG - Remove all /tmp/bosh-init-release260197594
[File System] 2016/04/04 14:49:09 DEBUG - Remove all /tmp/bosh-init-release659996273
[Main] 2016/04/04 14:49:09 ERROR - Panic: runtime error: index out of range


/tmp/build/src/gopath/src/github.com/cloudfoundry/bosh-init/Godeps/_workspace/src/github.com/cloudfoundry/bosh-utils/logger/logger.go:134 (0x46c33e)
/usr/local/go/src/runtime/asm_amd64.s:403 (0x43bf85)
/usr/local/go/src/runtime/panic.go:387 (0x4139d8)
/usr/local/go/src/runtime/panic.go:12 (0x412b6e)
/tmp/build/src/gopath/src/github.com/cloudfoundry/bosh-init/installation/installer.go:78 (0x4d3820)
/tmp/build/src/gopath/src/github.com/cloudfoundry/bosh-init/cpi/release/installer.go:37 (0x4c73ec)
/tmp/build/src/gopath/src/github.com/cloudfoundry/bosh-init/ui/stage.go:65 (0x47ba91)
/tmp/build/src/gopath/src/github.com/cloudfoundry/bosh-init/cpi/release/installer.go:39 (0x4c6798)
/tmp/build/src/gopath/src/github.com/cloudfoundry/bosh-init/cpi/release/installer.go:48 (0x4c6949)
/tmp/build/src/gopath/src/github.com/cloudfoundry/bosh-init/cmd/deployment_preparer.go:176 (0x45625e)
/tmp/build/src/gopath/src/github.com/cloudfoundry/bosh-init/cmd/deploy_cmd.go:74 (0x453529)
/tmp/build/src/gopath/src/github.com/cloudfoundry/bosh-init/cmd/runner.go:27 (0x4621f7)
/tmp/build/src/gopath/src/github.com/cloudfoundry/bosh-init/main.go:42 (0x401352)
/usr/local/go/src/runtime/proc.go:63 (0x415523)
/usr/local/go/src/runtime/asm_amd64.s:2232 (0x43dfe1)


We can provide the full log to a secure location.

Thanks in advance :)

premium storage and tables

We are trying to use premium storage for everything, so the default storage account is premium. It looks like tables do not work in premium storage accounts (premium accounts only support page blobs, so there is no table endpoint to reach, which would explain the connection failure below).

thoughts?

  Started creating missing vms > ha_proxy_z1/0 (feabf4e3-cc22-4e44-a1fb-e47a45f2df17). Failed: has_table?: #<Faraday::ConnectionFailed>
/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/resolv-replace.rb:12:in `rescue in getaddress'
/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/resolv-replace.rb:9:in `getaddress'
/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/resolv-replace.rb:23:in `initialize'
/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:879:in `open'
/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:879:in `block in connect'
/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/timeout.rb:76:in `timeout'
/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:878:in `connect'
/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:863:in `do_start'
/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:852:in `start'
/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:1369:in `request'
/var/vcap/packages/ruby_azure_cpi/lib/ruby/2.1.0/net/http.rb:1128:in `get'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/faraday-0.9.2/lib/faraday/adapter/net_http.rb:80:in `perform_request'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/faraday-0.9.2/lib/faraday/adapter/net_http.rb:40:in `block in call'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/faraday-0.9.2/lib/faraday/adapter/net_http.rb:87:in `with_net_http_connection'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/faraday-0.9.2/lib/faraday/adapter/net_http.rb:32:in `call'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/faraday_middleware-0.10.0/lib/faraday_middleware/response/follow_redirects.rb:76:in `perform_with_redirection'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/faraday_middleware-0.10.0/lib/faraday_middleware/response/follow_redirects.rb:64:in `call'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/faraday-0.9.2/lib/faraday/rack_builder.rb:139:in `build_response'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/faraday-0.9.2/lib/faraday/connection.rb:377:in `run_request'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/azure-0.7.1/lib/azure/core/http/http_request.rb:143:in `call'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/azure-0.7.1/lib/azure/core/http/signer_filter.rb:28:in `call'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/azure-0.7.1/lib/azure/core/http/signer_filter.rb:28:in `call'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/azure-0.7.1/lib/azure/core/http/http_request.rb:97:in `block in with_filter'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/azure-0.7.1/lib/azure/core/service.rb:36:in `call'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/azure-0.7.1/lib/azure/core/filtered_service.rb:34:in `call'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/azure-0.7.1/lib/azure/core/signed_service.rb:41:in `call'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/azure-0.7.1/lib/azure/table/table_service.rb:97:in `get_table'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/table_manager.rb:22:in `has_table?'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/stemcell_manager.rb:93:in `handle_stemcell_in_different_storage_account'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/stemcell_manager.rb:69:in `has_stemcell?'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:91:in `block in create_vm'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_common-1.3100.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
/var/vcap/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:83:in `create_vm'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_cpi-1.3100.0/lib/bosh/cpi/cli.rb:71:in `public_send'
/var/vcap/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_cpi-1.3100.0/lib/bosh/cpi/cli.rb:71:in `run'
/var/vcap/packages/bosh_azure_cpi/bin/azure_cpi:34:in `<main>' (00:00:08)

Unknown CPI error when I execute 'bosh deploy'

The bosh command started failing this week.

It seems like the Azure API specification changed.

The error messages follow.

bosh deploy

Deploying
---------
Are you sure you want to deploy? (type 'yes' to continue): 
Director task 35
  Started unknown
  Started unknown > Binding deployment. Done (00:00:00)

  Started preparing deployment
  Started preparing deployment > Binding releases. Done (00:00:00)
  Started preparing deployment > Binding existing deployment. Done (00:00:00)
  Started preparing deployment > Binding resource pools. Done (00:00:00)
  Started preparing deployment > Binding stemcells. Done (00:00:01)
  Started preparing deployment > Binding templates. Done (00:00:00)
  Started preparing deployment > Binding properties. Done (00:00:00)
  Started preparing deployment > Binding unallocated VMs. Done (00:00:00)
  Started preparing deployment > Binding instance networks. Done (00:00:00)

  Started preparing package compilation > Finding packages to compile. Done (00:00:00)

  Started preparing dns > Binding DNS. Done (00:00:00)

  Started preparing configuration > Binding configuration. Done (00:00:03)

  Started updating job cf_z1 > cf_z1/0 (canary). Failed: Unknown CPI error 'Bosh::AzureCloud::AzureError' with message 'http_put - error: 400 message: {"error":{"code":"InvalidRequestContent","message":"The request content was invalid and could not be deserialized: 'Could not find member 'resources' on object of type 'ResourceDefinition'. Path 'resources', line 1, position 3000.'."}}' (00:06:11)

Error 100: Unknown CPI error 'Bosh::AzureCloud::AzureError' with message 'http_put - error: 400 message: {"error":{"code":"InvalidRequestContent","message":"The request content was invalid and could not be deserialized: 'Could not find member 'resources' on object of type 'ResourceDefinition'. Path 'resources', line 1, position 3000.'."}}'

bosh delete deployment cf-azure

Acting as user 'admin' on deployment 'cf-azure' on 'bosh'

You are going to delete deployment `cf-azure'.

THIS IS A VERY DESTRUCTIVE OPERATION AND IT CANNOT BE UNDONE!

Are you sure? (type 'yes' to continue): yes         

Director task 51
  Started deleting instances > cf_z1/0. Failed: Unknown CPI error 'Bosh::AzureCloud::AzureError' with message 'http_put - error: 400 message: {"error":{"code":"InvalidRequestContent","message":"The request content was invalid and could not be deserialized: 'Could not find member 'resources' on object of type 'ResourceDefinition'. Path 'resources', line 1, position 3000.'."}}' (00:00:10)

Error 100: Unknown CPI error 'Bosh::AzureCloud::AzureError' with message 'http_put - error: 400 message: {"error":{"code":"InvalidRequestContent","message":"The request content was invalid and could not be deserialized: 'Could not find member 'resources' on object of type 'ResourceDefinition'. Path 'resources', line 1, position 3000.'."}}'

CI - Azure Ruby SDK Client Side timeout fix

Uploading page blobs reliably from outside of Azure is difficult because the Ruby SDK doesn't have a client-side timeout.

The CI infrastructure runs in AWS and needs this in order to stay reliable and avoid failures like this one.
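A possible client-side workaround until the SDK gains a native setting, sketched with Ruby's standard Timeout module; the upload call below is a placeholder for whatever azure-storage call the pipeline actually makes, and the timeout value is an assumption:

require 'timeout'

UPLOAD_TIMEOUT_SECONDS = 900 # assumption: generous upper bound for a page blob upload from AWS

# Abort a hung upload instead of letting the CI job stall indefinitely.
# Timeout is a blunt instrument, but it works as a stop-gap for a missing
# client-side timeout in the SDK.
def upload_with_client_timeout(blob_service, container, blob_name, file_path)
  Timeout.timeout(UPLOAD_TIMEOUT_SECONDS) do
    blob_service.upload_page_blob(container, blob_name, file_path) # placeholder call
  end
rescue Timeout::Error
  raise "Client-side timeout after #{UPLOAD_TIMEOUT_SECONDS}s while uploading #{blob_name}; a retry may help"
end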

Can't set bosh target, error message "cannot access director"

I had been trying to set the target on the local machine using the private IP address, and then from another machine using the public IP address. The problem appears to be that nothing is listening on port 25555. Did something go wrong during bosh deploy?

setting compilation worker numbers has no impact on deployment process

hello,

While deploying Cloud Foundry with this setting (because of free-tier core usage limits: use 4 workers instead of 6):

compilation:
  workers: 4
  network: cf_private
  reuse_compilation_vms: true
  cloud_properties:
    instance_type: Standard_D1

I redeployed after changing the manifest YAML as above:

bosh deploy --recreate

I still get the following error:

Error 100: http_put - error: 409 message: {
  "error": {
    "code": "OperationNotAllowed",
    "message": "Operation results in exceeding quota limits of Core. Maximum allowed: 4, Current in use: 4, Additional requested: 1."
  }
}

what do you suggest I should do?

thanks,

zoltán

Deploy BOSH failed

Error in the log:

Command 'deploy' failed:
creating stemcell (bosh-azure-hyperv-ubuntu-trusty-go_agent 0000):
CPI 'create_stemcell' method responded with error: CmdError{"type":"Bosh::Clouds::CloudError","message":"http_get_response - get_token - http error: 400
/home/ritacfcasestudyvmadmin/.bosh_init/installations/beacefe0-7d5e-4da2-6639-445160a29cd5/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:842:in `get_token'
/home/ritacfcasestudyvmadmin/.bosh_init/installations/beacefe0-7d5e-4da2-6639-445160a29cd5/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:861:in `http_get_response'
/home/ritacfcasestudyvmadmin/.bosh_init/installations/beacefe0-7d5e-4da2-6639-445160a29cd5/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:1013:in `http_post'
/home/ritacfcasestudyvmadmin/.bosh_init/installations/beacefe0-7d5e-4da2-6639-445160a29cd5/packages/bosh_azure_cpi/lib/cloud/azure/azure_client2.rb:792:in `get_storage_account_keys_by_name'
/home/ritacfcasestudyvmadmin/.bosh_init/installations/beacefe0-7d5e-4da2-6639-445160a29cd5/packages/bosh_azure_cpi/lib/cloud/azure/table_manager.rb:13:in `initialize'
/home/ritacfcasestudyvmadmin/.bosh_init/installations/beacefe0-7d5e-4da2-6639-445160a29cd5/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:367:in `new'
/home/ritacfcasestudyvmadmin/.bosh_init/installations/beacefe0-7d5e-4da2-6639-445160a29cd5/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:367:in `init_azure'
/home/ritacfcasestudyvmadmin/.bosh_init/installations/beacefe0-7d5e-4da2-6639-445160a29cd5/packages/bosh_azure_cpi/lib/cloud/azure/cloud.rb:19:in `initialize'
/home/ritacfcasestudyvmadmin/.bosh_init/installations/beacefe0-7d5e-4da2-6639-445160a29cd5/packages/bosh_azure_cpi/bin/azure_cpi:26:in `new'
/home/ritacfcasestudyvmadmin/.bosh_init/installations/beacefe0-7d5e-4da2-6639-445160a29cd5/packages/bosh_azure_cpi/bin/azure_cpi:26:in `block in <main>'
/home/ritacfcasestudyvmadmin/.bosh_init/installations/beacefe0-7d5e-4da2-6639-445160a29cd5/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_cpi-1.3100.0/lib/bosh/cpi/cli.rb:70:in `call'
/home/ritacfcasestudyvmadmin/.bosh_init/installations/beacefe0-7d5e-4da2-6639-445160a29cd5/packages/bosh_azure_cpi/vendor/bundle/ruby/2.1.0/gems/bosh_cpi-1.3100.0/lib/bosh/cpi/cli.rb:70:in `run'
/home/ritacfcasestudyvmadmin/.bosh_init/installations/beacefe0-7d5e-4da2-6639-445160a29cd5/packages/bosh_azure_cpi/bin/azure_cpi:34:in `<main>'","ok_to_retry":false}

generate.exe failing with "panic: no rep jobs found"

Attempting to push my first CF/Diego .NET application on Azure, as described here:
https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/tree/master/docs/advanced/push-your-first-net-application-to-cloud-foundry-on-azure

Completed all prerequisites:

  • Bosh created and deployed in azure.
  • CF & Diego created and deployed in azure.
  • Diego test app bingo app deployed and working.
  • Windows subnet created
  • WindowsForDiego VM created
  • setup.ps1 executed properly.

generate.exe failed with this error:

panic: no rep jobs found

goroutine 1 [running]:
main.firstRepJob(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
        C:/diego-windows-release/greenhouse-install-script-generator/src/generate/generate.go:254 +0x147
main.fillEtcdCluster(0xc082019ee8, 0x0, 0x0, 0x0, 0x0)
        C:/diego-windows-release/greenhouse-install-script-generator/src/generate/generate.go:236 +0x40
main.main()
        C:/diego-windows-release/greenhouse-install-script-generator/src/generate/generate.go:121 +0xc71

goroutine 8 [runnable]:
net/http.(*persistConn).readLoop(0xc08200e210)
        c:/go/src/net/http/transport.go:928 +0x9d5
created by net/http.(*Transport).dialConn
        c:/go/src/net/http/transport.go:660 +0xca6

goroutine 9 [select]:
net/http.(*persistConn).writeLoop(0xc08200e210)
        c:/go/src/net/http/transport.go:945 +0x424
created by net/http.(*Transport).dialConn
        c:/go/src/net/http/transport.go:661 +0xcc3

install.bat has not been created. Has anyone seen this before? Any ideas what it could mean?

Bosh Deployments, Releases, Instances info:


+----------+----------------------+-----------------------------------------------+--------------+
| Name     | Release(s)           | Stemcell(s)                                   | Cloud Config |
+----------+----------------------+-----------------------------------------------+--------------+
| cf-azure | cf/224               | bosh-azure-hyperv-ubuntu-trusty-go_agent/3169 | none         |
+----------+----------------------+-----------------------------------------------+--------------+
| cf-diego | cf/224               | bosh-azure-hyperv-ubuntu-trusty-go_agent/3169 | none         |
|          | diego/0.1444.0       |                                               |              |
|          | etcd/20              |                                               |              |
|          | garden-linux/0.330.0 |                                               |              |
+----------+----------------------+-----------------------------------------------+--------------+

+--------------+-----------+-------------+
| Name         | Versions  | Commit Hash |
+--------------+-----------+-------------+
| cf           | 224*      | 65621dd0+   |
| diego        | 0.1444.0* | c5f802c8    |
| etcd         | 20*       | f9cfa965+   |
| garden-linux | 0.330.0*  | a92518a2    |
+--------------+-----------+-------------+
(*) Currently deployed
(+) Uncommitted changes


+----------------------------------------------------+---------+-----+-------------+-----------+
| Instance                                           | State   | AZ  | VM Type     | IPs       |
+----------------------------------------------------+---------+-----+-------------+-----------+
| diego_z1/0 (2cc3653d-fffc-4eb6-8d52-e1bf6906686a)* | running | n/a | resource_z1 | 10.0.32.4 |
+----------------------------------------------------+---------+-----+-------------+-----------+

Director couldn't be targeted after VM restart

hello,

Following a successful bosh deploy I stopped and restarted the BOSH VM, but the Director was no longer reachable at 10.0.0.4.

I tried to figure out how this could be fixed, but finally resorted to redeploying BOSH, and the Director works again.

Is there a way to ensure that stopping the BOSH VM doesn't affect the BOSH environment?

thanks,

Zoli

Support multiple load_balancer entries

Hi all, here's a scenario; let me know what you think of it.

  1. It is good to have split-horizon DNS for a CF instance, i.e. on the devbox you have a named/bind9 server that resolves *.cf.azurelovecf.com to 10.10.16.4 inside the VNet, and to the public IP outside the VNet. This avoids hairpin NAT routing to the load balancer's public IP address, which can sometimes be slower and flakier than staying within the VNet.
  2. In a high availability enterprise scenario, we should use an Azure Load Balancer created ahead of time and have it load balance to two HAproxy instances annotated with the load_balancer cloud property.
  3. However, an Azure load balancer can only have a public OR a private IP address; it cannot have both.
  4. Therefore I think it makes sense to be able to have two (or more?) load_balancer entries, one for a Private load balancer and one for a Public load balancer.

Let me know if this is a reasonable enhancement. Right now, as a workaround, I'd need to create two resource pools and two haproxy jobs.
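A hypothetical manifest sketch of what the proposal above might look like; the CPI does not currently accept a list here, and the plural key name and load balancer names are made up for illustration:

resource_pools:
- name: router_z1
  cloud_properties:
    instance_type: Standard_D1
    load_balancers:            # hypothetical plural form of today's load_balancer property
    - pre-created-public-lb    # Azure LB with a public frontend IP
    - pre-created-private-lb   # Azure LB with a private (VNet) frontend IP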

`database_z1/0' is not running after update when deploying Diego

Acting as user 'admin' on deployment 'cf-diego' on 'bosh'
Getting deployment properties from director...

Deploying
---------

Director task 74
  Started unknown
  Started unknown > Binding deployment. Done (00:00:00)

  Started preparing deployment
  Started preparing deployment > Binding releases. Done (00:00:00)
  Started preparing deployment > Binding existing deployment. Done (00:00:00)
  Started preparing deployment > Binding resource pools. Done (00:00:00)
  Started preparing deployment > Binding stemcells. Done (00:00:00)
  Started preparing deployment > Binding templates. Done (00:00:00)
  Started preparing deployment > Binding properties. Done (00:00:00)
  Started preparing deployment > Binding unallocated VMs. Done (00:00:00)
  Started preparing deployment > Binding instance networks. Done (00:00:00)

  Started preparing package compilation > Finding packages to compile. Done (00:00:00)

  Started preparing dns > Binding DNS. Done (00:00:00)

  Started preparing configuration > Binding configuration. Done (00:00:01)

  Started updating job database_z1 > database_z1/0 (canary). Failed: `database_z1/0' is not running after update (00:02:55)

Error 400007: `database_z1/0' is not running after update

Task 74 error

For a more detailed error report, run: bosh task 74 --debug

We SSH into database_z1/0 and find etcd is not running because of etcdmain: listen tcp 0.0.0.0:7001: bind: address already in use.

root@39d8e04e-5544-4a40-88b0-3498590eb581:/var/vcap/sys/log/etcd# monit summary
The Monit daemon 5.2.5 uptime: 28m

Process 'etcd'                      not monitored
Process 'bbs'                       running
Process 'consul_agent'              running
Process 'metron_agent'              running
System 'system_localhost'           running

This issue is about the diego release itself. A similar one is tracked here:
cloudfoundry/diego-release#119

Cannot find the subnet boshvnet-crp/BOSH

azure network vnet subnet list bosh-res-group boshvnet-crp
info: Executing command network vnet subnet list

  • Getting virtual network subnets
    data: Name Address prefix
    data: ------------ --------------
    data: BOSH 10.0.0.0/20
    data: CloudFoundry 10.0.16.0/20
    info: network vnet subnet list command OK

Any suggestions?

Azure CPI CI cannot reset resources when blobs have snapshots.

Logs:
info: Executing command storage blob delete
+
error: This operation is not permitted because the blob has snapshots.
RequestId:3d7dfbf8-0001-0102-7ca4-596853000000
Time:2016-01-28T08:15:25.9764965Z
info: Error information has been recorded to /root/.azure/azure.err
error: storage blob delete command failed
