
k8s-vagrant-multi-node's Introduction

k8s-vagrant-multi-node


This project is based on work from coolsvap/kubeadm-vagrant by @coolsvap, but is now mostly independent.

A demo of starting and destroying a cluster can be found in the Demo section of this README.

Prerequisites

  • make
  • kubectl - Optional when KUBECTL_AUTO_CONF is set to false (default: true).
  • grep
  • cut
  • rsync
  • Source of randomness (only used to generate a kubeadm token when no custom KUBETOKEN is given; see the sketch after this list):
    • /dev/urandom
    • openssl command - fallback for when /dev/urandom is not available.
  • Vagrant (>= 2.2.0)
    • Tested with 2.2.2 (if you experience issues, please upgrade to at least this Vagrant version)
    • Plugins
      • vagrant-reload REQUIRED for BOX_OS=fedora (the default) and when using the vagrant-reload* targets. An automatic attempt to install the plugin is made; to install it manually, run one of the following commands:
        • make vagrant-plugins or
        • vagrant plugin install vagrant-reload
  • Vagrant Provider (one of the following two is needed)
    • libvirt (vagrant plugin install vagrant-libvirt)
      • Tested with libvirtd version 5.10.0.
      • Libvirt support is still a bit experimental and can be unstable (e.g., VMs not getting IPs).
        • Troubleshooting: If your VM creation hangs at Waiting for domain to get an IP address..., run virsh reset VM_NAME (VM_NAME can be obtained with the virsh list command) or use Force Reset on the VM in virt-manager.
    • VirtualBox (WARNING: VirtualBox seems to hang the Makefile randomly for some people; libvirt is recommended)
      • Tested with 6.0.0 (if you experience issues, please upgrade to at least this version)
      • VBoxManage binary in PATH.
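
A minimal sketch of how a token in kubeadm's required format ([a-z0-9]{6}.[a-z0-9]{16}) can be derived from these randomness sources; this is illustrative, not the project's exact script:

# Prefer /dev/urandom; fall back to openssl when it is unavailable.
# (On macOS, prefix the tr calls with LC_ALL=C to avoid "Illegal byte sequence".)
if [ -r /dev/urandom ]; then
    KUBETOKEN="$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6).$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)"
else
    # openssl's lowercase hex output also satisfies [a-z0-9]
    KUBETOKEN="$(openssl rand -hex 3).$(openssl rand -hex 8)"
fi
echo "${KUBETOKEN}"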

NOTE kubectl is only needed when the kubectl auto-configuration is enabled (the default); to disable it, set the variable KUBECTL_AUTO_CONF to false. For more information, see the Configuration / Variables doc page.

Hardware Requirements

  • Master
    • CPU: 2 Cores (MASTER_CPUS)
    • Memory: 2GB (MASTER_MEMORY_SIZE_GB)
  • Per node:
    • CPU: 1 Core (at least 2 Cores are recommended; NODE_CPUS)
    • Memory: 2GB (more than 2GB is recommended; NODE_MEMORY_SIZE_GB)

These resources can be changed by setting the corresponding variables for the make up command; see the Configuration / Variables doc page.
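
For example, to give the master and each node more resources (values are illustrative):

$ MASTER_CPUS=4 MASTER_MEMORY_SIZE_GB=4 NODE_CPUS=2 NODE_MEMORY_SIZE_GB=4 make up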

Quickstart

To start with the defaults, 1x master and 2x workers, run the following:

$ make up -j 3

The -j 3 causes up to three VMs to be started in parallel, which speeds up cluster creation.

NOTE Your kubectl is automatically configured to use a context for the created cluster, after the master VM is started. The context is named after the directory the Makefile is in.

$ kubectl config current-context
k8s-vagrant-multi-node
$ kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    4m        v1.17.3
node1     Ready     <none>    4m        v1.17.3
node2     Ready     <none>    4m        v1.17.3

VM OS Selection

There are multiple sets of Vagrantfiles available (see vagrantfiles/), which allow a different OS to be used for the Kubernetes environment.

See VM OS Selection doc page.

Usage

Also see Usage doc page.

Starting the environment

To start up the Vagrant Kubernetes multi-node environment with the default of two worker nodes plus a master (not in parallel), run:

$ make up

NOTE Your kubectl is automatically configured to use a context for the created cluster, after the master VM is started. The context is named after the directory the Makefile is in.

Faster (parallel) environment start

To start up 4 VMs in parallel, run the following (the -j flag does not control how many worker VMs are created; the NODE_COUNT variable is used for that):

$ NODE_COUNT=3 make up -j4

The -j CORES/THREADS flag sets how many VMs (Makefile targets) are run at the same time. You can also use -j $(nproc) to start as many VMs as your machine has cores/threads. To start all VMs (the master and all nodes) in parallel, add one to the chosen NODE_COUNT.
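
For example, on a machine with at least four cores/threads, the following starts the master and all three nodes at once:

$ NODE_COUNT=3 make up -j $(nproc)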

Show status of VMs

$ make status
master                    not created (virtualbox)
node1                     not created (virtualbox)
node2                     not created (virtualbox)

Shutting down the environment

To destroy the Vagrant environment run:

$ make clean
$ make clean-data

Copy local Docker image into VMs

The make load-image target can be used to copy a Docker image from your local Docker daemon to all the VMs in your cluster. The IMG variable can be expressed in a few ways, for example:

$ make load-image IMG=your_name/your_image_name:your_tag
$ make load-image IMG=your_name/your_image_name
$ make load-image IMG=my-private-registry.com/your_name/your_image_name:your_tag

You can also specify a new image name and tag to use after the image has been copied to the VMs by setting the TAG variable. This will not change the image/tag in your local Docker daemon; it only affects the image in the VMs.

$ make load-image IMG=repo/image:tag TAG=new_repo/new_image:new_tag

Data inside VM

See the data/VM_NAME/ directories, where VM_NAME is for example master.
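
For example (assuming, as the provisioning logs later on this page suggest, that each data/VM_NAME/ directory is rsynced to /data inside the corresponding VM on (re)provisioning):

$ echo hello > data/master/hello.txt
$ make ssh-master
[vagrant@master ~]$ cat /data/hello.txt
hello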

make Targets

See make Targets doc page.

Configuration / Variables

See Configuration / Variables doc page.

Troubleshooting

See Troubleshooting doc page.

Demo

See Demo doc page.


Creating an Issue

Please attach the output of the make versions command to the issue, as shown in the issue template. This makes debugging easier.

k8s-vagrant-multi-node's People

Contributors

aloysaugustin, alrighttheresham, blaineexe, franciosi, galexrt, jbw976, jmolmo, lalbers, laurentdavid, m-kostrzewa, probot-auto-merge[bot], scaleoutsean, sophalhong, wildermesser, woohhan, y0zg, yuggupta27


k8s-vagrant-multi-node's Issues

Fail to create cluster due to undefined method `to_bool' for "false":String

Bug Report

Expected behavior:

It should be possible to create a cluster with a clean install.

Deviation from expected behavior:

Attempting to use k8s-vagrant-multi-node to create my first k8s cluster, but with a vanilla install I'm getting errors that seem to be Vagrant related.

How to reproduce it (minimal and precise):

  • Cloned k8s-vagrant-multi-node from git
  • Ran the following command and got the output that follows:
NODE_MEMORY_SIZE_GB=3 NODE_CPUS=2 NODE_COUNT=3 make up -j4

=== BEGIN Version Info ===
Repo state: 8b4f56edc515afe881f2efa9fa26f8b004932d03 (dirty? NO)
make: /usr/bin/make
kubectl: /usr/local/bin/kubectl
grep: /usr/bin/grep
cut: /usr/bin/cut
rsync: /usr/bin/rsync
openssl: /usr/bin/openssl
/dev/urandom: OK
Vagrant version:
Vagrant 2.2.5
vboxmanage version:
6.0.10r132072
=== END Version Info ===
=== BEGIN Version Info ===
Repo state: 8b4f56edc515afe881f2efa9fa26f8b004932d03 (dirty? NO)
make: /usr/bin/make
kubectl: /usr/local/bin/kubectl
grep: /usr/bin/grep
cut: /usr/bin/cut
rsync: /usr/bin/rsync
openssl: /usr/bin/openssl
/dev/urandom: OK
Vagrant version:
Vagrant 2.2.5
vboxmanage version:
6.0.10r132072
=== END Version Info ===
Vagrant failed to initialize at a very early stage:

There was an error loading a Vagrantfile. The file being loaded
and the error message are shown below. This is usually caused by
a syntax error.

Path: /tmp/k8s-vagrant-multi-node/vagrantfiles/Vagrantfile
Line number: 0
Message: NoMethodError: undefined method `to_bool' for "false":String
make[1]: *** [pull] Error 1
make: *** [up] Error 2
  • Similar issues with make status:
make status
Vagrant failed to initialize at a very early stage:

There was an error loading a Vagrantfile. The file being loaded
and the error message are shown below. This is usually caused by
a syntax error.

Path: /tmp/k8s-vagrant-multi-node/vagrantfiles/Vagrantfile
Line number: 0
Message: NoMethodError: undefined method `to_bool' for "false":String
Vagrant failed to initialize at a very early stage:

There was an error loading a Vagrantfile. The file being loaded
and the error message are shown below. This is usually caused by
a syntax error.

Path: /tmp/k8s-vagrant-multi-node/vagrantfiles/Vagrantfile
Line number: 0
Message: NoMethodError: undefined method `to_bool' for "false":String
Vagrant failed to initialize at a very early stage:

There was an error loading a Vagrantfile. The file being loaded
and the error message are shown below. This is usually caused by
a syntax error.

Path: /tmp/k8s-vagrant-multi-node/vagrantfiles/Vagrantfile
Line number: 0
Message: NoMethodError: undefined method `to_bool' for "false":String

Environment:

  • OS of the machine (e.g. from /etc/os-release):
  • Kernel of the machine (e.g. uname -a):
  • make versions output:
make versions
=== BEGIN Version Info ===
Repo state: 8b4f56edc515afe881f2efa9fa26f8b004932d03 (dirty? NO)
make: /usr/bin/make
kubectl: /usr/local/bin/kubectl
grep: /usr/bin/grep
cut: /usr/bin/cut
rsync: /usr/bin/rsync
openssl: /usr/bin/openssl
/dev/urandom: OK
Vagrant version:
Vagrant 2.2.5
vboxmanage version:
6.0.10r132072
=== END Version Info ===

BOX_OS Centos

Hi @galexrt, I switched to trying the CentOS option, assuming that the version of Docker would be older than what's on Fedora.

Had a problem starting the VMs: the nodes wouldn't bind to the master.

It was stuck in this loop:

    node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
    node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: con

The issue seems to be described here kubernetes/kubernetes#58876

A quick hack suggested dropping the firewall, so I logged into the master and ran:

 $ make ssh-master
vagrant ssh
[vagrant@master ~]$ sudo -i
[root@master ~]# systemctl stop firewalld

This allowed the nodes to connect.

    node2: [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.26.10:6443"
    node2: [discovery] Successfully established connection with API Server "192.168.26.10:6443"

This allowed the installation to continue; however, I'm still seeing the same issue as #35:

-86c58d9df4-fmww2": NetworkPlugin cni failed to teardown pod "coredns-86c58d9df4-fmww2_kube-system" network: failed to get IP addresses for "eth0": <nil>]
  Normal   SandboxChanged          10m (x12 over 10m)   kubelet, master    Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  12s (x545 over 10m)  kubelet, master    (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "08b0ebab7526e1395d27a412b501117321037429c11c0203660111157e54750f" network for pod "coredns-86c58d9df4-fmww2": NetworkPlugin cni failed to set up pod "coredns-86c58d9df4-fmww2_kube-system" network: open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory, failed to clean up sandbox container "08b0ebab7526e1395d27a412b501117321037429c11c0203660111157e54750f" network for pod "coredns-86c58d9df4-fmww2": NetworkPlugin cni failed to teardown pod "coredns-86c58d9df4-fmww2_kube-system" network: failed to get IP addresses for "eth0": <nil>]
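
A less invasive alternative to stopping firewalld entirely, assuming a stock firewalld on the CentOS box, would be to open just the ports kubeadm needs:

[root@master ~]# firewall-cmd --permanent --add-port=6443/tcp    # Kubernetes API server
[root@master ~]# firewall-cmd --permanent --add-port=10250/tcp   # kubelet API
[root@master ~]# firewall-cmd --reload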

coredns doesn't start - failed to find plugin "loopback" in path [/opt/cni/bin]

Hi, please check how to fix this properly

make start -j 3

OS Fedora

kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:30:26Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

get pods -n kube-system
NAME                             READY   STATUS              RESTARTS   AGE
coredns-86c58d9df4-w79cc         0/1     ContainerCreating   0          12m
coredns-86c58d9df4-w9q65         0/1     ContainerCreating   0          12m
etcd-master                      1/1     Running             3          12m
kube-apiserver-master            1/1     Running             3          12m
kube-controller-manager-master   1/1     Running             3          12m
kube-proxy-qkvx5                 1/1     Running             1          12m
kube-proxy-v66zb                 1/1     Running             1          12m
kube-proxy-z9trz                 1/1     Running             1          12m
kube-scheduler-master            1/1     Running             3          12m

k get events -n kube-system
LAST SEEN   TYPE      REASON                   KIND         MESSAGE
56m         Normal    SandboxChanged           Pod          Pod sandbox changed, it will be killed and re-created.
22m         Normal    Scheduled                Pod          Successfully assigned kube-system/coredns-86c58d9df4-7vqkw to node1
22m         Warning   FailedCreatePodSandBox   Pod          Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "7d9c818779632409b7a9d82f474c58411ca4b67cc65b571fc3a4d852add23e09" network for pod "coredns-86c58d9df4-7vqkw": NetworkPlugin cni failed to set up pod "coredns-86c58d9df4-7vqkw_kube-system" network: failed to find plugin "loopback" in path [/opt/cni/bin], failed to clean up sandbox container "7d9c818779632409b7a9d82f474c58411ca4b67cc65b571fc3a4d852add23e09" network for pod "coredns-86c58d9df4-7vqkw": NetworkPlugin cni failed to teardown pod "coredns-86c58d9df4-7vqkw_kube-system" network: failed to find plugin "flannel" in path [/opt/cni/bin]]
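
A possible manual fix, assuming the problem is simply that the reference CNI plugin binaries (loopback, flannel, ...) are missing from /opt/cni/bin on the VMs (the version below is illustrative):

$ make ssh-master
[vagrant@master ~]$ CNI_VERSION=v0.8.7
[vagrant@master ~]$ curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-amd64-${CNI_VERSION}.tgz" | sudo tar -C /opt/cni/bin -xz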

Failure to successfully boot environment

  • Bug Report

make up -j3 fails on k8s cluster creation and never completes.

    master: [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    master: [control-plane] Creating static Pod manifest for "kube-apiserver"
    master: [control-plane] Creating static Pod manifest for "kube-controller-manager"
    master: [control-plane] Creating static Pod manifest for "kube-scheduler"
    master: [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    master: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    master: [kubelet-check] Initial timeout of 40s passed.
    master: [kubelet-check] It seems like the kubelet isn't running or healthy.
    master: [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
    master: [kubelet-check] It seems like the kubelet isn't running or healthy.
    master: [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
    master: [kubelet-check] It seems like the kubelet isn't running or healthy.
    master: [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
    master: [kubelet-check] It seems like the kubelet isn't running or healthy.
    master: [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
    master: [kubelet-check] It seems like the kubelet isn't running or healthy.
    master: [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
<...>
    master: [preflight] Running pre-flight checks
    master: error execution phase preflight: [preflight] Some fatal errors occurred:
    master: 	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
    master: 	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
    master: 	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
    master: 	[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
    master: [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    master: kubeadm join failed, trying again in 3 seconds (try 4/5)...
    master: ++ echo 'kubeadm join failed, trying again in 3 seconds (try 4/5)...'
    master: ++ sleep 3
    master: Failed to run kubeadm init after 5 tries
    master: ++ (( i++ ))
    master: ++ (( i<retries ))
    master: ++ [[ 5 -eq i ]]
    master: ++ echo 'Failed to run kubeadm init after 5 tries'
    master: ++ exit 1
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
make[2]: *** [start-master] Error 1
make[2]: *** Waiting for unfinished jobs....
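
A sketch of a possible manual recovery, assuming the preflight errors above stem from leftovers of an earlier, half-finished kubeadm run inside the master VM:

$ make ssh-master
[vagrant@master ~]$ sudo kubeadm reset -f    # removes /etc/kubernetes/manifests/*.yaml among other state
[vagrant@master ~]$ exit
$ make up -j3                                # retry the cluster creation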

How to reproduce it (minimal and precise):

Git clone as of 8/13.

Environment:

Mac 10.14.6
18.7.0 Darwin Kernel Version 18.7.0: Thu Jun 20 18:42:21 PDT 2019; root:xnu-4903.270.47~4/RELEASE_X86_64 x86_64
=== BEGIN Version Info ===
Repo state: 071fa53facd5f2bbd8a73198ad6a1128437ad30f (dirty? NO)
make: /usr/bin/make
kubectl: /usr/local/bin/kubectl
grep: /usr/bin/grep
cut: /usr/bin/cut
rsync: /usr/bin/rsync
openssl: /Users/andrewnelson/anaconda3/bin/openssl
/dev/urandom: OK
Vagrant version:
Vagrant 2.2.5
vboxmanage version:
6.0.10r132072
=== END Version Info ===

Worker VM memory size should be configurable

The worker VMs are hard-coded to 1 GB right now, which causes the out-of-memory killer to be invoked pretty quickly once workloads are run on the cluster. Since user pods can only be scheduled on the worker nodes, I think we only need to expose a config option for their memory size.
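
A usage sketch, assuming the NODE_MEMORY_SIZE_GB variable documented earlier in this README is the implemented form of this request:

$ NODE_MEMORY_SIZE_GB=4 make up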

coredns is pending and not starting

Is this a bug report or feature request?

  • Bug Report
kylix3511@kylixlab:~/k8s/k8s-vagrant-multi-node$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
default       nginix-56c74f9ccd-ph64g          0/1     Pending   0          8m33s
kube-system   coredns-fb8b8dccf-7vwm9          0/1     Pending   0          2d12h
kube-system   coredns-fb8b8dccf-jqnt5          0/1     Pending   0          2d12h
kube-system   etcd-master                      1/1     Running   0          2d12h
kube-system   kube-apiserver-master            1/1     Running   0          2d12h
kube-system   kube-controller-manager-master   1/1     Running   0          2d12h
kube-system   kube-proxy-chgv6                 1/1     Running   0          2d12h
kube-system   kube-proxy-fcfkb                 1/1     Running   0          2d12h
kube-system   kube-proxy-xslfc                 1/1     Running   0          2d12h
kube-system   kube-scheduler-master            1/1     Running   0          2d12h

  • Feature Request

Bug Report

Expected behavior:

Deviation from expected behavior:

How to reproduce it (minimal and precise):

Environment:

  • OS of the machine (e.g. from /etc/os-release):
[I] kylix3511 ☸️ kylixlab ~/k/k8s-vagrant-multi-node> cat /etc/os-release
NAME="Ubuntu"
VERSION="19.04 (Disco Dingo)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 19.04"
VERSION_ID="19.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=disco
UBUNTU_CODENAME=disco
[I] kylix3511 ☸️ kylixlab ~/k/k8s-vagrant-multi-node>
  • Kernel of the machine (e.g. uname -a):
[I] kylix3511 ☸️ kylixlab ~/k/k8s-vagrant-multi-node> uname -a
Linux kylixlab 5.0.0-13-generic #14-Ubuntu SMP Mon Apr 15 14:59:14 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[I] kylix3511 ☸️ kylixlab ~/k/k8s-vagrant-multi-node>
  • make versions output:
[I] kylix3511 ☸️ kylixlab ~/k/k8s-vagrant-multi-node>
sudo BOX_OS=ubuntu  KUBERNETES_VERSION=1.14.1  CLUSTER_NAME=local-k8s-ubuntu NODE_MEMORY_SIZE_GB=3 MASTER_MEMORY_SIZE_GB=3 NODE_CPUS=2 MASTER_CPUS=2 NODE_COUNT=2  make -j8 versions
[sudo] password for kylix3511: 
=== BEGIN Version Info ===
Repo state: 6f84df85b9339bedfa327a18b5cfdffb6a99566d (dirty? NO)
make: /usr/bin/make
kubectl: /snap/bin/kubectl
grep: /bin/grep
cut: /usr/bin/cut
rsync: /usr/bin/rsync
openssl: /usr/bin/openssl
/dev/urandom: OK
Vagrant version:
Vagrant 2.2.4
vboxmanage version:
6.0.6r130049
=== END Version Info ===
[I] kylix3511 ☸️ kylixlab ~/k/k8s-vagrant-multi-node>

Feature Request

Are there any similar features already existing:

What should the feature do:

What would be solved through this feature:

Does this have an impact on existing features:

vagrant reports the 2nd node as "preparing"

Bug Report
While running a multi-node cluster with 2 worker nodes, the second node doesn't seem to start; Vagrant reports it as "preparing".
https://termbin.com/kvvw

Rebooting the worker node works sometimes but is not a consistent solution.

Expected behavior:
All three nodes should show "running" status.

Deviation from expected behavior:
When make status is run, the second node doesn't seem to exist.
https://termbin.com/wvh3

How to reproduce it (minimal and precise):

make up -j3 BOX_OS=centos VAGRANT_DEFAULT_PROVIDER=libvirt KUBERNETES_VERSION="$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt | sed 's/^v//')"  MASTER_CPUS=4 MASTER_MEMORY_SIZE_GB=5 NODE_COUNT=2 NODE_MEMORY_SIZE_GB=5 DISK_COUNT=2 DISK_SIZE_GB=25

Environment:
server type: gusty.ci.centos.org
see: https://wiki.centos.org/QaWiki/PubHardware

  • OS of the machine (e.g. from /etc/os-release): centos
  • Kernel of the machine (e.g. uname -a):
  • make versions output:
OUTPUT_HERE
=== BEGIN Version Info ===
Repo state: 83da22f1cf7285caa6f0c169d8f6473a2f3a1db6 (dirty? NO)
make: /usr/bin/make
kubectl: /usr/bin/kubectl
grep: /usr/bin/grep
cut: /usr/bin/cut
rsync: /usr/bin/rsync
openssl: /usr/bin/openssl
/dev/urandom: OK
Vagrant version:
Vagrant 2.2.7
=== END Version Info ===

Automatically mount VirtIO disks to specific destination

Feature Request
So with these envars:
DISK_COUNT ?= 2
DISK_SIZE_GB ?= 50

I get a state where each machine in libvirt has 2 VirtIO disks, but one shows a size and the other doesn't.
The first has a size:
[image]

The second doesn't show a size (which is strange):
[image]

But when I create a file system on it, it has the correct size.

Another small issue is that on the master node only these devices are visible:
/dev/vda /dev/vdb
But on the worker nodes I can see:
/dev/vda /dev/vda1 /dev/vdb /dev/vdb1

It seems like disk recognition works differently on the master vs. the workers...

It would be good if the script somehow auto-mounted these drives to specific mount points in Vagrantfile_scripts ($prepareScript).

I understand that it's not super easy, since:

  1. Visibility of disks differs per provider (VirtualBox vs. libvirt) and possibly also per OS (but maybe I am wrong about the OS part).
  2. As this is based on the DISK_COUNT variable, a possible new variable DISK_MOUNT_POINTS would need to be a kind of array, e.g. /var/lib/docker|/var/lib/other as a one-dimensional array. If we wanted to specify a mount point per disk rather than as a sequence, we would need a two-dimensional array, which in bash can be emulated like this (it's really just string-keyed matching :-) ):
# Declare DISKS as an associative array; "row,col" keys emulate a 2D array
declare -A DISKS

# First disk: index, mount point, size, filesystem
DISKS[0,1]="0"
DISKS[0,2]="/var/lib/docker"
DISKS[0,3]="50GB"
DISKS[0,4]="ext4"

# Second disk
DISKS[1,1]="1"
DISKS[1,2]="/mnt/disk2"
DISKS[1,3]="50GB"
DISKS[1,4]="ext4"

# Access it through index variables: get the second disk's size
x=1 y=3
echo "disk 2 size: ${DISKS[$x,$y]}"

I know it's not nice; I just quickly sketched how it's possible to achieve #66.
Any hints on this?

NOTE: After I deployed Apache Pulsar I started getting disk pressure taints, as the nodes were running out of space...

Are there any similar features already existing:

What should the feature do:
Enable the user to make a bigger root fs, or let them mount bigger disks to specific destinations; e.g., /var would help.

What would be solved through this feature:
Kubernetes disk pressure: I just deployed the MinIO S3 and Apache Pulsar operators, and all the images pulled in filled up the storage.

Does this have an impact on existing features:

Allow HTTP_PROXY and HTTPS_PROXY to be set in the VMs

Feature Request

Allow HTTP_PROXY and HTTPS_PROXY to be set in the VMs.

Are there any similar features already existing:

No.

What should the feature do:

Allow HTTP_PROXY and HTTPS_PROXY to be set in the VMs for using, e.g., corporate proxies.

What would be solved through this feature:

E.g., users in corporate networks can use their corporate proxies.

Does this have an impact on existing features:

No.

Code examples:
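
A sketch of what this could look like inside the VMs, with illustrative proxy addresses (the NO_PROXY entry assumes the 192.168.26.0/24 cluster network seen elsewhere on this page):

cat <<'EOF' | sudo tee /etc/profile.d/proxy.sh
export HTTP_PROXY="http://proxy.corp.example:3128"
export HTTPS_PROXY="http://proxy.corp.example:3128"
export NO_PROXY="localhost,127.0.0.1,192.168.26.0/24"
EOF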

Nothing happens while running with defaults

Hello there,
Thanks for sharing this.

Running with the defaults using make up -j 3 doesn't produce anything.
I have downgraded my Vagrant to 2.1.1.
Appreciate any feedback

tr: Illegal byte sequence

When running commit 27c3304 on my Mac, I see the following messages from tr as the first output of make up. The installation completes and the Kubernetes cluster seems functional, so this may not be a big issue.

jared@Jareds-MacBook-Pro ~/dev/k8s-vagrant-multi-node (master)
> make up -j4
tr: Illegal byte sequence
tr: Illegal byte sequence
vagrant up
VAGRANT_VAGRANTFILE=Vagrantfile_nodes NODE=1 vagrant up
VAGRANT_VAGRANTFILE=Vagrantfile_nodes NODE=2 vagrant up
Bringing machine 'node1' up with 'virtualbox' provider...
Bringing machine 'node2' up with 'virtualbox' provider...
Bringing machine 'master' up with 'virtualbox' provider...
...
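
A common workaround on macOS, assuming the messages come from tr reading binary /dev/urandom data in a UTF-8 locale during token generation, is to force the C locale for the run:

$ LC_ALL=C make up -j4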

Ubuntu vagrant file error since vagrant cloud dependency

Bug Report

make up with BOX_OS=ubuntu results in an error, since it fetches the Vagrant image from https://app.vagrantup.com/generic/boxes/ubuntu1804/versions/3.2.10/providers/virtualbox.box, but this page is not accessible now.

Expected behavior:

I know it's not the fault of this repo but of the Vagrant Cloud page at HashiCorp. But if this issue happens, how about getting the image from an older version on that page, such as v3.2.8 for the ubuntu1804 image?
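
As a stopgap, the older box version can be pinned manually with the standard vagrant CLI (version number taken from the suggestion above):

$ vagrant box add generic/ubuntu1804 --box-version 3.2.8 --provider virtualbox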

Deviation from expected behavior:

How to reproduce it (minimal and precise):

BOX_OS=ubuntu KUBERNETES_VERSION=v1.17.3 make up -j2

Environment:

  • OS of the machine (e.g. from /etc/os-release):
  • Kernel of the machine (e.g. uname -a):
  • make versions output:
OUTPUT_HERE

Temporary failure resolving 'us.archive.ubuntu.com'

Is this a bug report or feature request?

  • Bug Report

Bug Report

Expected behavior:
The K8s master and nodes start up with the ubuntu image.
Deviation from expected behavior:

Fetched 3,685 kB in 2min 15s (27.3 kB/s)
E: Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/b/binutils/binutils-common_2.30-21ubuntu1~18.04_amd64.deb  Temporary failure resolving 'us.archive.ubuntu.com'
E: Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/b/binutils/libbinutils_2.30-21ubuntu1~18.04_amd64.deb  Temporary failure resolving 'us.archive.ubuntu.com'
E: Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/b/binutils/binutils-x86-64-linux-gnu_2.30-21ubuntu1~18.04_amd64.deb  Temporary failure resolving 'us.archive.ubuntu.com'
E: Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/b/binutils/binutils_2.30-21ubuntu1~18.04_amd64.deb  Temporary failure resolving 'us.archive.ubuntu.com'
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
==> master: Checking for guest additions in VM...

How to reproduce it (minimal and precise):
If I specify ubuntu as box I get the errors above. There could be something wrong with my system as well, but I could really use some help to pinpoint the exact issue.

Environment:

  • OS of the machine (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.6 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.6 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
  • Kernel of the machine (e.g. uname -a):
    4.15.0-47-generic
  • make versions output:
=== BEGIN Version Info ===
Repo state: 6f84df85b9339bedfa327a18b5cfdffb6a99566d (dirty? NO)
make: /usr/bin/make
kubectl: /usr/bin/kubectl
grep: /bin/grep
cut: /usr/bin/cut
rsync: /usr/bin/rsync
openssl: /usr/bin/openssl
/dev/urandom: OK
Vagrant version:
Vagrant 2.2.4
vboxmanage version:
6.0.6r130049
=== END Version Info ===

HostOnly network

Hi again @galexrt,
I have a question: how were you able to access the master IP using a host-only network adapter on VirtualBox?
kubectl get nodes
errors out:
Unable to connect to the server: dial tcp 192.168.26.10:6443: getsockopt: no route to host
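
One thing worth checking on the host, assuming the cluster uses the 192.168.26.0/24 host-only network seen in the error above, is whether VirtualBox actually has a matching host-only interface configured:

$ VBoxManage list hostonlyifs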

Ephemeral storage too small - larger / by default?

Feature Request

Are there any similar features already existing: The ability to configure the sizes of additional disks is there, but not for the primary one.

What should the feature do: Either provide a way to provision the VMs with a larger disk & root partition, or just increase the default to 20GB or so.

What would be solved through this feature: While actively developing using k8s-vagrant-multi-node, I quickly ran into "NodeHasDiskPressure" issues due to lack of ephemeral storage.

Does this have an impact on existing features: No.

k8s nodes not ready after installation using Kubernetes version 1.16

Bug Report
k8s nodes are not ready after installation using Kubernetes version 1.16.

It seems there is some kind of problem with the CNI installation, but I'm not able to locate the source of the problem.
Any help with this is welcome; I'm available to investigate further or to test a possible solution.
Thanks!

Expected behavior:
Install a functional Kubernetes cluster, version 1.16.0.

Deviation from expected behavior:
Using the latest version of k8s, the VMs are installed but the k8s nodes never reach the "Ready" state.

How to reproduce it (minimal and precise):

Execute:
NODE_COUNT=1 make up -j4

Environment:

  • OS of the machine (e.g. from /etc/os-release): Fedora release 28
  • Kernel of the machine (e.g. uname -a): Linux juanmipc 5.0.5-100.fc28.x86_64 #1 SMP Wed Mar 27 22:16:29 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  • make versions output:
=== BEGIN Version Info ===
Repo state: 6b4302ba3d6aa4c127267746834238cb815ae5cd (dirty? YES)
make: /usr/bin/make
kubectl: /usr/local/bin/kubectl
grep: /usr/bin/grep
cut: /usr/bin/cut
rsync: /usr/bin/rsync
openssl: /usr/bin/openssl
/dev/urandom: OK
Vagrant version:
Vagrant 2.2.5
vboxmanage version:
6.0.4r128413
=== END Version Info ===

Other useful informations:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0+d4cacc0", GitCommit:"d4cacc0", GitTreeState:"clean", BuildDate:"2018-10-10T16:38:01Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
master    NotReady   master    8m30s     v1.16.0
node1     NotReady   <none>    7m48s     v1.16.0
node2     NotReady   <none>    7m48s     v1.16.0
node3     NotReady   <none>    7m46s     v1.16.0


$ kubectl describe master
...
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 20 Sep 2019 09:33:00 +0200   Fri, 20 Sep 2019 09:32:31 +0200   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 20 Sep 2019 09:33:00 +0200   Fri, 20 Sep 2019 09:32:31 +0200   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 20 Sep 2019 09:33:00 +0200   Fri, 20 Sep 2019 09:32:31 +0200   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Fri, 20 Sep 2019 09:33:00 +0200   Fri, 20 Sep 2019 09:32:31 +0200   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
...
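
Given the NetworkPluginNotReady / cni config uninitialized condition above, a first place to look (a sketch, assuming the flannel-based CNI setup referenced elsewhere on this page) is whether the CNI config and network pods ever appeared:

$ make ssh-master
[vagrant@master ~]$ ls /etc/cni/net.d/                       # is there any CNI config at all?
[vagrant@master ~]$ kubectl -n kube-system get pods -o wide  # are the network pods scheduled and running?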

ubuntu OS Error

Hi @galexrt, I got this error; have you seen it before?


kylix3511 >>> BOX_OS=ubuntu KUBERNETES_VERSION=1.12.0 CLUSTER_NAME=k8s-local NODE_MEMORY_SIZE_GB=2 MASTER_MEMORY_SIZE_GB=3 NODE_CPUS=2 MASTER_CPUS=2 NODE_COUNT=2 make -j8 up
if !(vagrant box list | grep -q generic/ubuntu1804); then \
		vagrant \
			box \
			add \
			--provider=virtualbox \
			generic/ubuntu1804; \
	else \
		vagrant box update --box=generic/ubuntu1804; \
	fi
Checking for updates to 'generic/ubuntu1804'
Latest installed version: 1.8.60
Version constraints: > 1.8.60
Provider: virtualbox
Box 'generic/ubuntu1804' (v1.8.60) is running the latest version.
vagrant up
NODE=1 vagrant up
NODE=2 vagrant up
Bringing machine 'node2' up with 'virtualbox' provider...
Bringing machine 'master' up with 'virtualbox' provider...
Bringing machine 'node1' up with 'virtualbox' provider...
==> node2: Importing base box 'generic/ubuntu1804'...
==> node1: Importing base box 'generic/ubuntu1804'...
==> master: Importing base box 'generic/ubuntu1804'...
==> master: Matching MAC address for NAT networking...
==> master: Checking if box 'generic/ubuntu1804' version '1.8.60' is up to date...
==> node1: Matching MAC address for NAT networking...
==> node2: Matching MAC address for NAT networking...
==> node1: Checking if box 'generic/ubuntu1804' version '1.8.60' is up to date...
==> node2: Checking if box 'generic/ubuntu1804' version '1.8.60' is up to date...
==> master: Setting the name of the VM: k8s-vagrant-multi-node_master_1549245554593_43065
==> node2: Setting the name of the VM: k8s-vagrant-multi-node_node2_1549245554970_91320
==> node1: Setting the name of the VM: k8s-vagrant-multi-node_node1_1549245554972_94381
==> master: Clearing any previously set network interfaces...
==> node1: Fixed port collision for 22 => 2222. Now on port 2200.
==> node1: Clearing any previously set network interfaces...
==> master: Preparing network interfaces based on configuration...
    master: Adapter 1: nat
    master: Adapter 2: hostonly
==> master: Forwarding ports...
    master: 8443 (guest) => 8443 (host) (adapter 1)
    master: 22 (guest) => 2222 (host) (adapter 1)
==> node1: Preparing network interfaces based on configuration...
    node1: Adapter 1: nat
    node1: Adapter 2: hostonly
==> node1: Forwarding ports...
==> master: Running 'pre-boot' VM customizations...
    node1: 22 (guest) => 2200 (host) (adapter 1)
==> node2: Fixed port collision for 22 => 2222. Now on port 2201.
==> node2: Clearing any previously set network interfaces...
==> master: Booting VM...
==> node2: Preparing network interfaces based on configuration...
    node2: Adapter 1: nat
    node2: Adapter 2: hostonly
==> node2: Forwarding ports...
==> node1: Running 'pre-boot' VM customizations...
    node2: 22 (guest) => 2201 (host) (adapter 1)
==> master: Waiting for machine to boot. This may take a few minutes...
==> node2: Running 'pre-boot' VM customizations...
==> node1: Booting VM...
    master: SSH address: 127.0.0.1:2222
    master: SSH username: vagrant
    master: SSH auth method: private key
==> node2: Booting VM...
==> node1: Waiting for machine to boot. This may take a few minutes...
==> node2: Waiting for machine to boot. This may take a few minutes...
    node1: SSH address: 127.0.0.1:2200
    node1: SSH username: vagrant
    node1: SSH auth method: private key
    node2: SSH address: 127.0.0.1:2201
    node2: SSH username: vagrant
    node2: SSH auth method: private key
    master: 
    master: Vagrant insecure key detected. Vagrant will automatically replace
    master: this with a newly generated keypair for better security.
    master: 
    master: Inserting generated public key within guest...
    master: Removing insecure key from the guest if it's present...
    master: Key inserted! Disconnecting and reconnecting using new SSH key...
    node1: 
    node1: Vagrant insecure key detected. Vagrant will automatically replace
    node1: this with a newly generated keypair for better security.
    node2: 
    node2: Vagrant insecure key detected. Vagrant will automatically replace
    node2: this with a newly generated keypair for better security.
    node1: 
    node1: Inserting generated public key within guest...
    node1: Removing insecure key from the guest if it's present...
    node1: Key inserted! Disconnecting and reconnecting using new SSH key...
    node2: 
    node2: Inserting generated public key within guest...
==> master: Machine booted and ready!
==> master: Checking for guest additions in VM...
    node2: Removing insecure key from the guest if it's present...
    master: The guest additions on this VM do not match the installed version of
    master: VirtualBox! In most cases this is fine, but in rare cases it can
    master: prevent things such as shared folders from working properly. If you see
    master: shared folder errors, please make sure the guest additions within the
    master: virtual machine match the version of VirtualBox you have installed on
    master: your host and reload your VM.
    master: 
    master: Guest Additions Version: 5.2.18
    master: VirtualBox Version: 6.0
==> master: Setting hostname...
    node2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> master: Configuring and enabling network interfaces...
==> node1: Machine booted and ready!
==> node1: Checking for guest additions in VM...
    node1: The guest additions on this VM do not match the installed version of
    node1: VirtualBox! In most cases this is fine, but in rare cases it can
    node1: prevent things such as shared folders from working properly. If you see
    node1: shared folder errors, please make sure the guest additions within the
    node1: virtual machine match the version of VirtualBox you have installed on
    node1: your host and reload your VM.
    node1: 
    node1: Guest Additions Version: 5.2.18
    node1: VirtualBox Version: 6.0
==> node1: Setting hostname...
==> node2: Machine booted and ready!
==> node2: Checking for guest additions in VM...
    node2: The guest additions on this VM do not match the installed version of
    node2: VirtualBox! In most cases this is fine, but in rare cases it can
    node2: prevent things such as shared folders from working properly. If you see
    node2: shared folder errors, please make sure the guest additions within the
    node2: virtual machine match the version of VirtualBox you have installed on
    node2: your host and reload your VM.
    node2: 
    node2: Guest Additions Version: 5.2.18
    node2: VirtualBox Version: 6.0
==> node2: Setting hostname...
==> master: Rsyncing folder: /Users/rameshkumar/k8s/k8s-vagrant-multi-node/data/ubuntu-master/ => /data
==> node1: Configuring and enabling network interfaces...
==> node2: Configuring and enabling network interfaces...
==> master: Running provisioner: shell...
    master: Running: inline script
==> node1: Rsyncing folder: /Users/rameshkumar/k8s/k8s-vagrant-multi-node/data/ubuntu-node1/ => /data
==> node1: Running provisioner: shell...
    node1: Running: inline script
==> node2: Rsyncing folder: /Users/rameshkumar/k8s/k8s-vagrant-multi-node/data/ubuntu-node2/ => /data
==> master: Running provisioner: shell...
==> node2: Running provisioner: shell...
    master: Running: inline script
    master: ++ apt-get update
    node2: Running: inline script
==> node1: Running provisioner: shell...
    node1: Running: inline script
    node1: ++ apt-get update
==> node2: Running provisioner: shell...
    node2: Running: inline script
    node2: ++ apt-get update
    master: Hit:1 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    master: Get:2 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
    master: Get:3 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
    node2: Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
    master: Get:4 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
    node2: Hit:2 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    node1: Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
    node2: Get:3 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
    node1: Hit:2 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    master: Get:5 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [255 kB]
    node1: Get:3 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
    node1: Get:4 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [255 kB]
    node2: Get:4 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
    master: Get:6 http://us.archive.ubuntu.com/ubuntu bionic-updates/main i386 Packages [446 kB]
    node2: Get:5 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [255 kB]
    node2: Get:6 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [512 kB]
    node1: Get:5 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
    node1: Get:6 http://security.ubuntu.com/ubuntu bionic-security/main i386 Packages [197 kB]
    master: Get:7 http://security.ubuntu.com/ubuntu bionic-security/main i386 Packages [197 kB]
    node1: Get:7 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [512 kB]
    node1: Get:8 http://security.ubuntu.com/ubuntu bionic-security/main Translation-en [96.8 kB]
    master: Get:8 http://security.ubuntu.com/ubuntu bionic-security/main Translation-en [96.8 kB]
    master: Get:9 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [116 kB]
    node1: Get:9 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [116 kB]
    master: Get:10 http://security.ubuntu.com/ubuntu bionic-security/universe i386 Packages [114 kB]
    node1: Get:10 http://security.ubuntu.com/ubuntu bionic-security/universe i386 Packages [114 kB]
    master: Get:11 http://security.ubuntu.com/ubuntu bionic-security/universe Translation-en [66.2 kB]
    node1: Get:11 http://security.ubuntu.com/ubuntu bionic-security/universe Translation-en [66.2 kB]
    node2: Get:7 http://security.ubuntu.com/ubuntu bionic-security/main i386 Packages [197 kB]
    node2: Get:8 http://us.archive.ubuntu.com/ubuntu bionic-updates/main i386 Packages [446 kB]
    node2: Get:9 http://security.ubuntu.com/ubuntu bionic-security/main Translation-en [96.8 kB]
    node2: Get:10 http://security.ubuntu.com/ubuntu bionic-security/universe i386 Packages [114 kB]
    node1: Get:12 http://us.archive.ubuntu.com/ubuntu bionic-updates/main i386 Packages [446 kB]
    node2: Get:11 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [116 kB]
    master: Get:12 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [512 kB]
    node2: Get:12 http://security.ubuntu.com/ubuntu bionic-security/universe Translation-en [66.2 kB]
    node2: Get:13 http://us.archive.ubuntu.com/ubuntu bionic-updates/main Translation-en [193 kB]
    node1: Get:13 http://us.archive.ubuntu.com/ubuntu bionic-updates/main Translation-en [193 kB]
    node1: Get:14 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [724 kB]
    node2: Get:14 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [724 kB]
    master: Get:13 http://us.archive.ubuntu.com/ubuntu bionic-updates/main Translation-en [193 kB]
    node1: Get:15 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe i386 Packages [716 kB]
    master: Get:14 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe i386 Packages [716 kB]
    node2: Get:15 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe i386 Packages [716 kB]
    node1: Get:16 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe Translation-en [183 kB]
    node1: Get:17 http://us.archive.ubuntu.com/ubuntu bionic-updates/multiverse Translation-en [3,452 B]
    master: Get:15 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [724 kB]
    node1: Fetched 3,874 kB in 1min 20s (48.2 kB/s)
    node1: Reading package lists...
    node1: ++ apt-get install -y apt-transport-https curl software-properties-common ca-certificates
    node1: Reading package lists...
    node1: Building dependency tree...
    node1: 
    node1: Reading state information...
    node1: ca-certificates is already the newest version (20180409).
    node1: curl is already the newest version (7.58.0-2ubuntu3.5).
    node1: software-properties-common is already the newest version (0.96.24.32.7).
    node1: The following NEW packages will be installed:
    node1:   apt-transport-https
    node2: Get:16 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe Translation-en [183 kB]
    node1: 0 upgraded, 1 newly installed, 0 to remove and 27 not upgraded.
    node1: Need to get 1,692 B of archives.
    node1: After this operation, 153 kB of additional disk space will be used.
    node1: Get:1 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 apt-transport-https all 1.6.8 [1,692 B]
    node1: dpkg-preconfigure: unable to re-open stdin: No such file or directory
    node1: Fetched 1,692 B in 11s (157 B/s)
    node1: Selecting previously unselected package apt-transport-https.
    node1: (Reading database ... 
(Reading database ... 55%ase ... 5%
    node1: (Reading database ... 60%
    node1: (Reading database ... 65%
    node1: (Reading database ... 70%
    node1: (Reading database ... 75%
    node1: (Reading database ... 80%
    node1: (Reading database ... 85%
    node1: (Reading database ... 90%
    node1: (Reading database ... 95%
(Reading database ... 105289 files and directories currently installed.)
    node1: Preparing to unpack .../apt-transport-https_1.6.8_all.deb ...
    node1: Unpacking apt-transport-https (1.6.8) ...
    node1: Setting up apt-transport-https (1.6.8) ...
    node1: ++ sudo apt-key add -
    node1: ++ curl -fsSL https://download.docker.com/linux/ubuntu/gpg
    node1: Warning: apt-key output should not be parsed (stdout is not a terminal)
    node2: Get:17 http://us.archive.ubuntu.com/ubuntu bionic-updates/multiverse Translation-en [3,452 B]
    node1: OK
    node1: +++ lsb_release -cs
    node1: ++ add-apt-repository 'deb [arch=amd64] https://download.docker.com/linux/ubuntu    bionic    stable'
    node1: Hit:1 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    node1: Hit:2 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease
    node1: Hit:3 http://security.ubuntu.com/ubuntu bionic-security InRelease
    node2: Fetched 3,874 kB in 1min 40s (38.6 kB/s)
    node2: Reading package lists...
    node1: Hit:4 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease
    node1: Get:5 https://download.docker.com/linux/ubuntu bionic InRelease [64.4 kB]
    node1: Get:6 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages [3,695 B]
    node2: ++ apt-get install -y apt-transport-https curl software-properties-common ca-certificates
    node2: Reading package lists...
    node2: Building dependency tree...
    node2: 
    node2: Reading state information...
    node2: ca-certificates is already the newest version (20180409).
    node2: curl is already the newest version (7.58.0-2ubuntu3.5).
    node2: software-properties-common is already the newest version (0.96.24.32.7).
    node2: The following NEW packages will be installed:
    node2:   apt-transport-https
    node1: Fetched 68.1 kB in 2s (29.0 kB/s)
    node1: Reading package lists...
    node1: ++ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg
    node1: ++ apt-key add -
    node1: Warning: apt-key output should not be parsed (stdout is not a terminal)
    node2: 0 upgraded, 1 newly installed, 0 to remove and 27 not upgraded.
    node2: Need to get 1,692 B of archives.
    node2: After this operation, 153 kB of additional disk space will be used.
    node2: Get:1 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 apt-transport-https all 1.6.8 [1,692 B]
    node2: dpkg-preconfigure: unable to re-open stdin: No such file or directory
    node2: Fetched 1,692 B in 4s (408 B/s)
    node2: Selecting previously unselected package apt-transport-https.
    node2: (Reading database ... 
(Reading database ... 55%ase ... 5%
    node2: (Reading database ... 60%
    node2: (Reading database ... 65%
    node2: (Reading database ... 70%
    node2: (Reading database ... 75%
    node2: (Reading database ... 80%
    node2: (Reading database ... 85%
    node2: (Reading database ... 90%
    node2: (Reading database ... 95%
(Reading database ... 105289 files and directories currently installed.)
    node2: Preparing to unpack .../apt-transport-https_1.6.8_all.deb ...
    node2: Unpacking apt-transport-https (1.6.8) ...
    node2: Setting up apt-transport-https (1.6.8) ...
    node1: OK
    node1: ++ cat
    node1: ++ '[' -n 1.12.0 ']'
    node1: ++ KUBERNETES_PACKAGES='kubelet=1.12.0 kubeadm=1.12.0 kubectl=1.12.0'
    node1: ++ apt-get update
    node1: Hit:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
    node1: Hit:2 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    node1: Hit:3 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease
    node1: Hit:4 https://download.docker.com/linux/ubuntu bionic InRelease
    node1: Hit:5 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease
    master: Get:16 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe Translation-en [183 kB]
    node2: ++ curl -fsSL https://download.docker.com/linux/ubuntu/gpg
    node2: ++ sudo apt-key add -
    node2: Warning: apt-key output should not be parsed (stdout is not a terminal)
    node2: OK
    node2: +++ lsb_release -cs
    node2: ++ add-apt-repository 'deb [arch=amd64] https://download.docker.com/linux/ubuntu    bionic    stable'
    master: Get:17 http://us.archive.ubuntu.com/ubuntu bionic-updates/multiverse Translation-en [3,452 B]
    node2: Hit:1 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    node2: Hit:2 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease
    node2: Hit:3 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease
    master: Fetched 3,874 kB in 2min 5s (31.1 kB/s)
    master: Reading package lists...
    node2: Hit:4 http://security.ubuntu.com/ubuntu bionic-security InRelease
    node1: Get:6 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
    master: ++ apt-get install -y apt-transport-https curl software-properties-common ca-certificates
    master: Reading package lists...
    master: Building dependency tree...
    node2: Get:5 https://download.docker.com/linux/ubuntu bionic InRelease [64.4 kB]
    master: 
    master: Reading state information...
    master: ca-certificates is already the newest version (20180409).
    master: curl is already the newest version (7.58.0-2ubuntu3.5).
    master: software-properties-common is already the newest version (0.96.24.32.7).
    master: The following NEW packages will be installed:
    master:   apt-transport-https
    node1: Get:7 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [23.4 kB]
    node2: Get:6 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages [3,695 B]
    node1: Fetched 32.4 kB in 15s (2,171 B/s)
    node1: Reading package lists...
    node2: Fetched 68.1 kB in 9s (7,180 B/s)
    node2: Reading package lists...
    node1: ++ apt-get install -y screen telnet docker-ce kubelet=1.12.0 kubeadm=1.12.0 kubectl=1.12.0
    node1: Reading package lists...
    node1: Building dependency tree...
    node1: Reading state information...
    node1: E: Version '1.12.0' for 'kubelet' was not found
    node1: E: Version '1.12.0' for 'kubeadm' was not found
    node1: E: Version '1.12.0' for 'kubectl' was not found
    node1: ++ apt-mark hold kubelet kubeadm kubectl
    node1: kubelet set on hold.
    node1: kubeadm set on hold.
    node1: kubectl set on hold.
    node1: ++ swapoff -a
    node1: ++ sed -i '/swap/s/^/#/g' /etc/fstab
    node1: ++ echo 1
    node1: /tmp/vagrant-shell: line 23: /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
    node1: ++ '[' false '!=' false ']'
==> node1: Running provisioner: shell...
    node1: Running: inline script
    node1: ++ kubeadm reset -f
    node1: /tmp/vagrant-shell: line 2: kubeadm: command not found
    node1: ++ '[' -n '' ']'
    node1: ++ kubeadm join --discovery-token-unsafe-skip-ca-verification --token isif2q.k55krcvi1gu3sdhf 192.168.26.10:6443
    node1: /tmp/vagrant-shell: line 7: kubeadm: command not found
    node1: ++ echo 'Failed to kubeadm join'
    node1: Failed to kubeadm join
    node1: ++ exit 1
    node2: ++ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg
    node2: ++ apt-key add -
    node2: Warning: apt-key output should not be parsed (stdout is not a terminal)
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
make[2]: *** [start-node-1] Error 1
make[2]: *** Waiting for unfinished jobs....
    master: 0 upgraded, 1 newly installed, 0 to remove and 27 not upgraded.
    master: Need to get 1,692 B of archives.
    master: After this operation, 153 kB of additional disk space will be used.
    master: Get:1 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 apt-transport-https all 1.6.8 [1,692 B]
    master: dpkg-preconfigure: unable to re-open stdin: No such file or directory
    master: Fetched 1,692 B in 9s (189 B/s)
    master: Selecting previously unselected package apt-transport-https.
    master: (Reading database ... 
(Reading database ... 55%base ... 5%
    master: (Reading database ... 60%
    master: (Reading database ... 65%
    master: (Reading database ... 70%
    master: (Reading database ... 75%
    master: (Reading database ... 80%
    master: (Reading database ... 85%
    master: (Reading database ... 90%
    master: (Reading database ... 95%
(Reading database ... 105289 files and directories currently installed.)
    master: Preparing to unpack .../apt-transport-https_1.6.8_all.deb ...
    master: Unpacking apt-transport-https (1.6.8) ...
    master: Setting up apt-transport-https (1.6.8) ...
    master: ++ curl -fsSL https://download.docker.com/linux/ubuntu/gpg
    master: ++ sudo apt-key add -
    master: Warning: apt-key output should not be parsed (stdout is not a terminal)
    node2: OK
    node2: ++ cat
    node2: ++ '[' -n 1.12.0 ']'
    node2: ++ KUBERNETES_PACKAGES='kubelet=1.12.0 kubeadm=1.12.0 kubectl=1.12.0'
    node2: ++ apt-get update
    node2: Hit:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
    node2: Hit:2 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    node2: Hit:3 https://download.docker.com/linux/ubuntu bionic InRelease
    node2: Hit:4 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease
    node2: Hit:5 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease
    master: OK
    master: +++ lsb_release -cs
    master: ++ add-apt-repository 'deb [arch=amd64] https://download.docker.com/linux/ubuntu    bionic    stable'
    master: Hit:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
    node2: Get:6 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
    node2: Get:7 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [23.4 kB]
    node2: Fetched 32.4 kB in 11s (2,954 B/s)
    node2: Reading package lists...
    master: Get:2 https://download.docker.com/linux/ubuntu bionic InRelease [64.4 kB]
    master: Hit:3 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    node2: ++ apt-get install -y screen telnet docker-ce kubelet=1.12.0 kubeadm=1.12.0 kubectl=1.12.0
    node2: Reading package lists...
    node2: Building dependency tree...
    master: Get:4 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages [3,695 B]
    node2: Reading state information...
    master: Hit:5 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease
    node2: E: Version '1.12.0' for 'kubelet' was not found
    node2: E: Version '1.12.0' for 'kubeadm' was not found
    node2: E: Version '1.12.0' for 'kubectl' was not found
    node2: ++ apt-mark hold kubelet kubeadm kubectl
    node2: kubelet set on hold.
    node2: kubeadm set on hold.
    node2: kubectl set on hold.
    node2: ++ swapoff -a
    node2: ++ sed -i '/swap/s/^/#/g' /etc/fstab
    node2: ++ echo 1
    node2: /tmp/vagrant-shell: line 23: /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
    node2: ++ '[' false '!=' false ']'
==> node2: Running provisioner: shell...
    master: Hit:6 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease
    node2: Running: inline script
    node2: ++ kubeadm reset -f
    node2: /tmp/vagrant-shell: line 2: kubeadm: command not found
    node2: ++ '[' -n '' ']'
    node2: ++ kubeadm join --discovery-token-unsafe-skip-ca-verification --token isif2q.k55krcvi1gu3sdhf 192.168.26.10:6443
    node2: /tmp/vagrant-shell: line 7: kubeadm: command not found
    node2: ++ echo 'Failed to kubeadm join'
    node2: Failed to kubeadm join
    node2: ++ exit 1
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
make[2]: *** [start-node-2] Error 1
    master: Fetched 68.1 kB in 14s (4,905 B/s)
    master: Reading package lists...
    master: ++ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg
    master: ++ apt-key add -
    master: Warning: apt-key output should not be parsed (stdout is not a terminal)
    master: gpg: no valid OpenPGP data found.
    master: ++ cat
    master: ++ '[' -n 1.12.0 ']'
    master: ++ KUBERNETES_PACKAGES='kubelet=1.12.0 kubeadm=1.12.0 kubectl=1.12.0'
    master: ++ apt-get update
    master: Hit:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
    master: Hit:2 https://download.docker.com/linux/ubuntu bionic InRelease
    master: Hit:4 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    master: Hit:5 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease
    master: Hit:6 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease
    master: Get:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
    master: Err:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
    master:   The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
    master: Reading package lists...
    master: W: GPG error: https://packages.cloud.google.com/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
    master: E: The repository 'https://apt.kubernetes.io kubernetes-xenial InRelease' is not signed.
    master: ++ apt-get install -y screen telnet docker-ce kubelet=1.12.0 kubeadm=1.12.0 kubectl=1.12.0
    master: Reading package lists...
    master: Building dependency tree...
    master: Reading state information...
    master: E: Unable to locate package kubelet
    master: E: Unable to locate package kubeadm
    master: E: Unable to locate package kubectl
    master: ++ apt-mark hold kubelet kubeadm kubectl
    master: E: Unable to locate package kubelet
    master: E: Unable to locate package kubeadm
    master: E: Unable to locate package kubectl
    master: E: No packages found
    master: ++ swapoff -a
    master: ++ sed -i '/swap/s/^/#/g' /etc/fstab
    master: ++ echo 1
    master: /tmp/vagrant-shell: line 23: /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
    master: ++ '[' false '!=' false ']'
==> master: Running provisioner: shell...
    master: Running: inline script
    master: ++ kubeadm reset -f
    master: /tmp/vagrant-shell: line 2: kubeadm: command not found
    master: ++ kubeadm init --kubernetes-version=1.12.0 --apiserver-advertise-address=192.168.26.10 --pod-network-cidr=10.244.0.0/16 --token isif2q.k55krcvi1gu3sdhf --token-ttl 0
    master: /tmp/vagrant-shell: line 3: kubeadm: command not found
    master: ++ echo 'kubeadm init failed.'
    master: kubeadm init failed.
    master: ++ exit 1
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
make[2]: *** [start-master] Error 1
make[1]: *** [start] Error 2
make: *** [up] Error 2
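
The run above fails for reasons visible in the log; the notes below are a best-effort reading of it rather than project documentation. On the nodes, the pin `kubelet=1.12.0` matches no package because the Kubernetes apt repository publishes versions with a Debian revision suffix (e.g. `1.12.0-00`). On the master, the repository key import failed (`gpg: no valid OpenPGP data found`), so the repo was treated as unsigned and the packages could not be located at all. The `bridge-nf-call-iptables: No such file or directory` errors on all three VMs additionally suggest the `br_netfilter` kernel module was not loaded. A minimal manual recovery sketch, assuming shell access to a guest via `vagrant ssh`:

```sh
# Illustrative recovery inside a guest VM; these commands are assumptions,
# not part of the project's provisioning scripts.

# Load the bridge netfilter module so the sysctl path exists.
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf

# Re-import the Kubernetes apt key (the first attempt returned no OpenPGP data).
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update

# List the versions the repo actually carries, then pin the full revision.
apt-cache madison kubelet
sudo apt-get install -y kubelet=1.12.0-00 kubeadm=1.12.0-00 kubectl=1.12.0-00
```

If the Makefile passes `KUBERNETES_VERSION` straight into the apt pins, setting the full revision (e.g. `KUBERNETES_VERSION=1.12.0-00`) may avoid the version-not-found errors; whether it does depends on how the Vagrantfiles build the package names.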


CentOS error: connect: no route to host

Hi @galexrt, something seems to be wrong with the CentOS box.


connect: no route to host
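
On CentOS guests, `connect: no route to host` during `kubeadm join` is commonly caused by firewalld rejecting connections to the API server port. The sketch below is one way to check and open the relevant ports; it is an assumption about the cause, not part of the project or the original report:

```sh
# Hypothetical firewalld workaround on the CentOS master (assumed cause).
sudo firewall-cmd --state

# Open the control-plane and kubelet ports...
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --reload

# ...or, for a disposable lab cluster, disable firewalld entirely.
sudo systemctl disable --now firewalld
```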

The following is the screen output:


kylix3511.mylabserver.com >>>> BOX_OS=centos KUBERNETES_VERSION=1.13.3 CLUSTER_NAME=k8s-centos NODE_MEMORY_SIZE_GB=2 MASTER_MEMORY_SIZE_GB=3 NODE_CPUS=2 MASTER_CPUS=2 NODE_COUNT=2 make -j8 up
if !(vagrant box list | grep -q generic/centos7); then \
		vagrant \
			box \
			add \
			--provider=virtualbox \
			generic/centos7; \
	else \
		vagrant box update --box=generic/centos7; \
	fi
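
The recipe above is make echoing the box bootstrap step: add `generic/centos7` if it is not installed yet, otherwise check it for updates. De-escaped, the same logic can be run by hand as plain shell (a convenience sketch, not an official target):

```sh
# Manual equivalent of the Makefile's box bootstrap recipe shown above.
if ! (vagrant box list | grep -q generic/centos7); then
    vagrant box add --provider=virtualbox generic/centos7
else
    vagrant box update --box=generic/centos7
fi
```

In the captured run the box is already present, so the update path executes: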
Checking for updates to 'generic/centos7'
Latest installed version: 1.9.2
Version constraints: > 1.9.2
Provider: virtualbox
Box 'generic/centos7' (v1.9.2) is running the latest version.
vagrant up
NODE=1 vagrant up
NODE=2 vagrant up
Bringing machine 'node1' up with 'virtualbox' provider...
Bringing machine 'node2' up with 'virtualbox' provider...
Bringing machine 'master' up with 'virtualbox' provider...
==> node2: Importing base box 'generic/centos7'...
==> node1: Importing base box 'generic/centos7'...
==> master: Importing base box 'generic/centos7'...
==> node2: Matching MAC address for NAT networking...
==> node1: Matching MAC address for NAT networking...
==> node2: Checking if box 'generic/centos7' version '1.9.2' is up to date...
==> node1: Checking if box 'generic/centos7' version '1.9.2' is up to date...
==> master: Matching MAC address for NAT networking...
==> master: Checking if box 'generic/centos7' version '1.9.2' is up to date...
==> node2: Setting the name of the VM: k8s-vagrant-multi-node_node2_1550706372510_1712
==> node1: Setting the name of the VM: k8s-vagrant-multi-node_node1_1550706372602_27448
==> master: Setting the name of the VM: k8s-vagrant-multi-node_master_1550706372913_72605
==> node1: Clearing any previously set network interfaces...
==> master: Fixed port collision for 22 => 2222. Now on port 2200.
==> master: Clearing any previously set network interfaces...
==> master: Preparing network interfaces based on configuration...
    master: Adapter 1: nat
    master: Adapter 2: hostonly
==> master: Forwarding ports...
    master: 22 (guest) => 2200 (host) (adapter 1)
==> master: Running 'pre-boot' VM customizations...
==> master: Booting VM...
==> node1: Preparing network interfaces based on configuration...
    node1: Adapter 1: nat
    node1: Adapter 2: hostonly
==> node1: Forwarding ports...
    node1: 22 (guest) => 2222 (host) (adapter 1)
==> node1: Running 'pre-boot' VM customizations...
==> master: Waiting for machine to boot. This may take a few minutes...
==> node2: Fixed port collision for 22 => 2222. Now on port 2201.
==> node2: Clearing any previously set network interfaces...
==> node1: Booting VM...
    master: SSH address: 127.0.0.1:2200
    master: SSH username: vagrant
    master: SSH auth method: private key
==> node2: Preparing network interfaces based on configuration...
    node2: Adapter 1: nat
    node2: Adapter 2: hostonly
==> node2: Forwarding ports...
    node2: 22 (guest) => 2201 (host) (adapter 1)
==> node1: Waiting for machine to boot. This may take a few minutes...
==> node2: Running 'pre-boot' VM customizations...
    node1: SSH address: 127.0.0.1:2222
    node1: SSH username: vagrant
    node1: SSH auth method: private key
==> node2: Booting VM...
==> node2: Waiting for machine to boot. This may take a few minutes...
    node2: SSH address: 127.0.0.1:2201
    node2: SSH username: vagrant
    node2: SSH auth method: private key
    master: Warning: Remote connection disconnect. Retrying...
    master: Warning: Connection reset. Retrying...
    node1: Warning: Connection reset. Retrying...
    node2: Warning: Connection reset. Retrying...
    node1: Warning: Remote connection disconnect. Retrying...
    master: Warning: Connection reset. Retrying...
    node1: Warning: Connection reset. Retrying...
    node2: Warning: Connection reset. Retrying...
    node2: Warning: Remote connection disconnect. Retrying...
    master: Warning: Connection reset. Retrying...
    node1: Warning: Connection reset. Retrying...
    node2: Warning: Connection reset. Retrying...
    master: 
    master: Vagrant insecure key detected. Vagrant will automatically replace
    master: this with a newly generated keypair for better security.
    node1: 
    node1: Vagrant insecure key detected. Vagrant will automatically replace
    node1: this with a newly generated keypair for better security.
    node2: 
    node2: Vagrant insecure key detected. Vagrant will automatically replace
    node2: this with a newly generated keypair for better security.
    master: 
    master: Inserting generated public key within guest...
    master: Removing insecure key from the guest if it's present...
    node1: 
    node1: Inserting generated public key within guest...
    master: Key inserted! Disconnecting and reconnecting using new SSH key...
    node1: Removing insecure key from the guest if it's present...
    node1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> node1: Machine booted and ready!
==> node1: Checking for guest additions in VM...
    node1: The guest additions on this VM do not match the installed version of
    node1: VirtualBox! In most cases this is fine, but in rare cases it can
    node1: prevent things such as shared folders from working properly. If you see
    node1: shared folder errors, please make sure the guest additions within the
    node1: virtual machine match the version of VirtualBox you have installed on
    node1: your host and reload your VM.
    node1: 
    node1: Guest Additions Version: 5.1.38
    node1: VirtualBox Version: 6.0
==> node1: Setting hostname...
==> master: Machine booted and ready!
==> master: Checking for guest additions in VM...
    master: The guest additions on this VM do not match the installed version of
    master: VirtualBox! In most cases this is fine, but in rare cases it can
    master: prevent things such as shared folders from working properly. If you see
    master: shared folder errors, please make sure the guest additions within the
    master: virtual machine match the version of VirtualBox you have installed on
    master: your host and reload your VM.
    master: 
    master: Guest Additions Version: 5.1.38
    master: VirtualBox Version: 6.0
==> master: Setting hostname...
    node2: 
    node2: Inserting generated public key within guest...
    node2: Removing insecure key from the guest if it's present...
    node2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> node1: Configuring and enabling network interfaces...
==> master: Configuring and enabling network interfaces...
==> node2: Machine booted and ready!
==> node2: Checking for guest additions in VM...
    node2: The guest additions on this VM do not match the installed version of
    node2: VirtualBox! In most cases this is fine, but in rare cases it can
    node2: prevent things such as shared folders from working properly. If you see
    node2: shared folder errors, please make sure the guest additions within the
    node2: virtual machine match the version of VirtualBox you have installed on
    node2: your host and reload your VM.
    node2: 
    node2: Guest Additions Version: 5.1.38
    node2: VirtualBox Version: 6.0
==> node2: Setting hostname...
==> node2: Configuring and enabling network interfaces...
==> node1: Installing rsync to the VM...
==> master: Installing rsync to the VM...
==> node2: Installing rsync to the VM...
==> node2: Rsyncing folder: /Users/rameshkumar/k8s/k8s-vagrant-multi-node/data/centos-node2/ => /data
==> node2: Running provisioner: shell...
    node2: Running: inline script
    node2: net.ipv6.conf.all.disable_ipv6 = 0
    node2: net.ipv6.conf.default.disable_ipv6 = 0
    node2: net.ipv6.conf.lo.disable_ipv6 = 0
    node2: net.ipv6.conf.all.accept_dad = 0
    node2: net.ipv6.conf.default.accept_dad = 0
    node2: net.bridge.bridge-nf-call-iptables = 1
    node2: Created symlink from /etc/systemd/system/default.target.wants/ip-set-mtu.service to /etc/systemd/system/ip-set-mtu.service.
==> node1: Rsyncing folder: /Users/rameshkumar/k8s/k8s-vagrant-multi-node/data/centos-node1/ => /data
==> master: Rsyncing folder: /Users/rameshkumar/k8s/k8s-vagrant-multi-node/data/centos-master/ => /data
==> node1: Running provisioner: shell...
==> master: Running provisioner: shell...
    node1: Running: inline script
    master: Running: inline script
    node1: net.ipv6.conf.all.disable_ipv6 = 0
    node1: net.ipv6.conf.default.disable_ipv6 = 0
    node1: net.ipv6.conf.lo.disable_ipv6 = 0
    node1: net.ipv6.conf.all.accept_dad = 0
    node1: net.ipv6.conf.default.accept_dad = 0
    node1: net.bridge.bridge-nf-call-iptables = 1
    node1: Created symlink from /etc/systemd/system/default.target.wants/ip-set-mtu.service to /etc/systemd/system/ip-set-mtu.service.
    master: net.ipv6.conf.all.disable_ipv6 = 0
    master: net.ipv6.conf.default.disable_ipv6 = 0
    master: net.ipv6.conf.lo.disable_ipv6 = 0
    master: net.ipv6.conf.all.accept_dad = 0
    master: net.ipv6.conf.default.accept_dad = 0
    master: net.bridge.bridge-nf-call-iptables = 1
    master: Created symlink from /etc/systemd/system/default.target.wants/ip-set-mtu.service to /etc/systemd/system/ip-set-mtu.service.
==> node2: Running provisioner: shell...
    node2: Running: inline script
    node2: ++ cat
    node2: ++ '[' -n 1.13.3 ']'
    node2: ++ KUBERNETES_PACKAGES='kubelet-1.13.3 kubeadm-1.13.3'
    node2: ++ setenforce 0
    node2: ++ sed -i s/SELINUX=enforcing/SELINUX=permissive/g /etc/selinux/config
    node2: ++ yum clean expire-cache
    node2: Loaded plugins: fastestmirror
    node2: Cleaning repos: base epel extras kubernetes updates
    node2: 7 metadata files removed
    node2: ++ yum install --nogpgcheck -y net-tools screen tree telnet conntrack socat docker rsync kubelet-1.13.3 kubeadm-1.13.3
    node2: Loaded plugins: fastestmirror
    node2: Loading mirror speeds from cached hostfile
    node2:  * base: mirror.fileplanet.com
    node2:  * epel: sjc.edge.kernel.org
    node2:  * extras: sjc.edge.kernel.org
    node2:  * updates: mirror.fileplanet.com
==> node1: Running provisioner: shell...
==> master: Running provisioner: shell...
    node1: Running: inline script
    node1: ++ cat
    node1: ++ '[' -n 1.13.3 ']'
    node1: ++ KUBERNETES_PACKAGES='kubelet-1.13.3 kubeadm-1.13.3'
    node1: ++ setenforce 0
    node1: ++ sed -i s/SELINUX=enforcing/SELINUX=permissive/g /etc/selinux/config
    node1: ++ yum clean expire-cache
    master: Running: inline script
    master: ++ cat
    master: ++ '[' -n 1.13.3 ']'
    master: ++ KUBERNETES_PACKAGES='kubelet-1.13.3 kubeadm-1.13.3'
    master: ++ setenforce 0
    master: ++ sed -i s/SELINUX=enforcing/SELINUX=permissive/g /etc/selinux/config
    master: ++ yum clean expire-cache
    node1: Loaded plugins: fastestmirror
    node1: Cleaning repos: base epel extras kubernetes updates
    node1: 7 metadata files removed
    node1: ++ yum install --nogpgcheck -y net-tools screen tree telnet conntrack socat docker rsync kubelet-1.13.3 kubeadm-1.13.3
    master: Loaded plugins: fastestmirror
    master: Cleaning repos: base epel extras kubernetes updates
    master: 7 metadata files removed
    master: ++ yum install --nogpgcheck -y net-tools screen tree telnet conntrack socat docker rsync kubelet-1.13.3 kubeadm-1.13.3
    node1: Loaded plugins: fastestmirror
    node2: Package net-tools-2.0-0.24.20131004git.el7.x86_64 already installed and latest version
    node2: Package 1:telnet-0.17-64.el7.x86_64 already installed and latest version
    node2: Package rsync-3.1.2-4.el7.x86_64 already installed and latest version
    node1: Loading mirror speeds from cached hostfile
    master: Loaded plugins: fastestmirror
    master: Loading mirror speeds from cached hostfile
    node2: Resolving Dependencies
    node2: --> Running transaction check
    node2: ---> Package conntrack-tools.x86_64 0:1.4.4-4.el7 will be installed
    node2: --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    node2: --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    node2: --> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    node2: --> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    node2: --> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    node2: --> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    node2: ---> Package docker.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
    node2: --> Processing Dependency: docker-common = 2:1.13.1-91.git07f3374.el7.centos for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
    node2: --> Processing Dependency: docker-client = 2:1.13.1-91.git07f3374.el7.centos for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
    node2: --> Processing Dependency: subscription-manager-rhsm-certificates for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
    node2: ---> Package kubeadm.x86_64 0:1.13.3-0 will be installed
    node2: --> Processing Dependency: kubernetes-cni >= 0.6.0 for package: kubeadm-1.13.3-0.x86_64
    node2: --> Processing Dependency: kubectl >= 1.6.0 for package: kubeadm-1.13.3-0.x86_64
    node2: --> Processing Dependency: cri-tools >= 1.11.0 for package: kubeadm-1.13.3-0.x86_64
    node2: ---> Package kubelet.x86_64 0:1.13.3-0 will be installed
    node2: ---> Package screen.x86_64 0:4.1.0-0.25.20120314git3c2946.el7 will be installed
    node2: ---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
    node2: ---> Package tree.x86_64 0:1.6.0-10.el7 will be installed
    node2: --> Running transaction check
    node2: ---> Package cri-tools.x86_64 0:1.12.0-0 will be installed
    node2: ---> Package docker-client.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
    node2: ---> Package docker-common.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
    node2: --> Processing Dependency: skopeo-containers >= 1:0.1.26-2 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    node2: --> Processing Dependency: oci-umount >= 2:2.3.3-3 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    node2: --> Processing Dependency: oci-systemd-hook >= 1:0.1.4-9 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    node2: --> Processing Dependency: oci-register-machine >= 1:0-5.13 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    node2: --> Processing Dependency: container-storage-setup >= 0.9.0-1 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    node2: --> Processing Dependency: container-selinux >= 2:2.51-1 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    node2: --> Processing Dependency: atomic-registries for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    node2: ---> Package kubectl.x86_64 0:1.13.3-0 will be installed
    node2: ---> Package kubernetes-cni.x86_64 0:0.6.0-0 will be installed
    node2: ---> Package libnetfilter_cthelper.x86_64 0:1.0.0-9.el7 will be installed
    node2: ---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7 will be installed
    node2: ---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
    node2: ---> Package subscription-manager-rhsm-certificates.x86_64 0:1.21.10-3.el7.centos will be installed
    node2: --> Running transaction check
    node2: ---> Package atomic-registries.x86_64 1:1.22.1-26.gitb507039.el7.centos will be installed
    node2: --> Processing Dependency: python-yaml for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
    node2: --> Processing Dependency: python-setuptools for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
    node2: --> Processing Dependency: python-pytoml for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
    node2: ---> Package container-selinux.noarch 2:2.74-1.el7 will be installed
    node2: --> Processing Dependency: policycoreutils-python for package: 2:container-selinux-2.74-1.el7.noarch
    node2: ---> Package container-storage-setup.noarch 0:0.11.0-2.git5eaf76c.el7 will be installed
    node2: ---> Package containers-common.x86_64 1:0.1.31-8.gitb0b750d.el7.centos will be installed
    node2: ---> Package oci-register-machine.x86_64 1:0-6.git2b44233.el7 will be installed
    node2: ---> Package oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 will be installed
    node2: --> Processing Dependency: libyajl.so.2()(64bit) for package: 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64
    node2: ---> Package oci-umount.x86_64 2:2.3.4-2.git87f9237.el7 will be installed
    node2: --> Running transaction check
    node2: ---> Package PyYAML.x86_64 0:3.10-11.el7 will be installed
    node2: --> Processing Dependency: libyaml-0.so.2()(64bit) for package: PyYAML-3.10-11.el7.x86_64
    node2: ---> Package policycoreutils-python.x86_64 0:2.5-29.el7_6.1 will be installed
    node2: --> Processing Dependency: setools-libs >= 3.3.8-4 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node2: --> Processing Dependency: libsemanage-python >= 2.5-14 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node2: --> Processing Dependency: audit-libs-python >= 2.1.3-4 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node2: --> Processing Dependency: python-IPy for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node2: --> Processing Dependency: libqpol.so.1(VERS_1.4)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node2: --> Processing Dependency: libqpol.so.1(VERS_1.2)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node2: --> Processing Dependency: libcgroup for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node2: --> Processing Dependency: libapol.so.4(VERS_4.0)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node2: --> Processing Dependency: checkpolicy for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node2: --> Processing Dependency: libqpol.so.1()(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node2: --> Processing Dependency: libapol.so.4()(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node2: ---> Package python-pytoml.noarch 0:0.1.14-1.git7dea353.el7 will be installed
    node2: ---> Package python-setuptools.noarch 0:0.9.8-7.el7 will be installed
    node2: --> Processing Dependency: python-backports-ssl_match_hostname for package: python-setuptools-0.9.8-7.el7.noarch
    node2: ---> Package yajl.x86_64 0:2.0.4-4.el7 will be installed
    node2: --> Running transaction check
    node2: ---> Package audit-libs-python.x86_64 0:2.8.4-4.el7 will be installed
    node2: ---> Package checkpolicy.x86_64 0:2.5-8.el7 will be installed
    node2: ---> Package libcgroup.x86_64 0:0.41-20.el7 will be installed
    node2: ---> Package libsemanage-python.x86_64 0:2.5-14.el7 will be installed
    node2: ---> Package libyaml.x86_64 0:0.1.4-11.el7_0 will be installed
    node2: ---> Package python-IPy.noarch 0:0.75-6.el7 will be installed
    node2: ---> Package python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 will be installed
    node2: --> Processing Dependency: python-ipaddress for package: python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch
    node2: --> Processing Dependency: python-backports for package: python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch
    node2: ---> Package setools-libs.x86_64 0:3.3.8-4.el7 will be installed
    node2: --> Running transaction check
    node2: ---> Package python-backports.x86_64 0:1.0-8.el7 will be installed
    node2: ---> Package python-ipaddress.noarch 0:1.0.16-2.el7 will be installed
    node2: --> Finished Dependency Resolution
    node2: 
    node2: Dependencies Resolved
    node2: 
    node2: ================================================================================
    node2:  Package              Arch   Version                           Repository  Size
    node2: ================================================================================
    node2: Installing:
    node2:  conntrack-tools      x86_64 1.4.4-4.el7                       base       186 k
    node2:  docker               x86_64 2:1.13.1-91.git07f3374.el7.centos extras      18 M
    node2:  kubeadm              x86_64 1.13.3-0                          kubernetes 7.9 M
    node2:  kubelet              x86_64 1.13.3-0                          kubernetes  21 M
    node2:  screen               x86_64 4.1.0-0.25.20120314git3c2946.el7  base       552 k
    node2:  socat                x86_64 1.7.3.2-2.el7                     base       290 k
    node2:  tree                 x86_64 1.6.0-10.el7                      base        46 k
    node2: Installing for dependencies:
    node2:  PyYAML               x86_64 3.10-11.el7                       base       153 k
    node2:  atomic-registries    x86_64 1:1.22.1-26.gitb507039.el7.centos extras      35 k
    node2:  audit-libs-python    x86_64 2.8.4-4.el7                       base        76 k
    node2:  checkpolicy          x86_64 2.5-8.el7                         base       295 k
    node2:  container-selinux    noarch 2:2.74-1.el7                      extras      38 k
    node2:  container-storage-setup
    node2:                       noarch 0.11.0-2.git5eaf76c.el7           extras      35 k
    node2:  containers-common    x86_64 1:0.1.31-8.gitb0b750d.el7.centos  extras      21 k
    node2:  cri-tools            x86_64 1.12.0-0                          kubernetes 4.2 M
    node2:  docker-client        x86_64 2:1.13.1-91.git07f3374.el7.centos extras     3.9 M
    node2:  docker-common        x86_64 2:1.13.1-91.git07f3374.el7.centos extras      95 k
    node2:  kubectl              x86_64 1.13.3-0                          kubernetes 8.5 M
    node2:  kubernetes-cni       x86_64 0.6.0-0                           kubernetes 8.6 M
    node2:  libcgroup            x86_64 0.41-20.el7                       base        66 k
    node2:  libnetfilter_cthelper
    node2:                       x86_64 1.0.0-9.el7                       base        18 k
    node2:  libnetfilter_cttimeout
    node2:                       x86_64 1.0.0-6.el7                       base        18 k
    node2:  libnetfilter_queue   x86_64 1.0.2-2.el7_2                     base        23 k
    node2:  libsemanage-python   x86_64 2.5-14.el7                        base       113 k
    node2:  libyaml              x86_64 0.1.4-11.el7_0                    base        55 k
    node2:  oci-register-machine x86_64 1:0-6.git2b44233.el7              extras     1.1 M
    node2:  oci-systemd-hook     x86_64 1:0.1.18-3.git8787307.el7_6       extras      34 k
    node2:  oci-umount           x86_64 2:2.3.4-2.git87f9237.el7          extras      32 k
    node2:  policycoreutils-python
    node2:                       x86_64 2.5-29.el7_6.1                    updates    456 k
    node2:  python-IPy           noarch 0.75-6.el7                        base        32 k
    node2:  python-backports     x86_64 1.0-8.el7                         base       5.8 k
    node2:  python-backports-ssl_match_hostname
    node2:                       noarch 3.5.0.1-1.el7                     base        13 k
    node2:  python-ipaddress     noarch 1.0.16-2.el7                      base        34 k
    node2:  python-pytoml        noarch 0.1.14-1.git7dea353.el7           extras      18 k
    node2:  python-setuptools    noarch 0.9.8-7.el7                       base       397 k
    node2:  setools-libs         x86_64 3.3.8-4.el7                       base       620 k
    node2:  subscription-manager-rhsm-certificates
    node2:                       x86_64 1.21.10-3.el7.centos              updates    207 k
    node2:  yajl                 x86_64 2.0.4-4.el7                       base        39 k
    node2: 
    node2: Transaction Summary
    node2: ================================================================================
    node2: Install  7 Packages (+31 Dependent packages)
    node2: 
    node2: Total download size: 76 M
    node2: Installed size: 321 M
    node2: Downloading packages:
    node1:  * base: mirror.fileplanet.com
    node1:  * epel: d2lzkl7pfhq30w.cloudfront.net
    node1:  * extras: mirrors.xtom.com
    node1:  * updates: mirror.fileplanet.com
    master:  * base: sjc.edge.kernel.org
    master:  * epel: mirror.prgmr.com
    master:  * extras: sjc.edge.kernel.org
    master:  * updates: mirror.keystealth.org
    node1: Package net-tools-2.0-0.24.20131004git.el7.x86_64 already installed and latest version
    node1: Package 1:telnet-0.17-64.el7.x86_64 already installed and latest version
    node1: Package rsync-3.1.2-4.el7.x86_64 already installed and latest version
    node1: Resolving Dependencies
    node1: --> Running transaction check
    node1: ---> Package conntrack-tools.x86_64 0:1.4.4-4.el7 will be installed
    node1: --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    master: Package net-tools-2.0-0.24.20131004git.el7.x86_64 already installed and latest version
    master: Package 1:telnet-0.17-64.el7.x86_64 already installed and latest version
    master: Package rsync-3.1.2-4.el7.x86_64 already installed and latest version
    node1: --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    node1: --> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    node1: --> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    node1: --> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    node1: --> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    node1: ---> Package docker.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
    node1: --> Processing Dependency: docker-common = 2:1.13.1-91.git07f3374.el7.centos for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
    node1: --> Processing Dependency: docker-client = 2:1.13.1-91.git07f3374.el7.centos for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
    node1: --> Processing Dependency: subscription-manager-rhsm-certificates for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
    node1: ---> Package kubeadm.x86_64 0:1.13.3-0 will be installed
    node1: --> Processing Dependency: kubernetes-cni >= 0.6.0 for package: kubeadm-1.13.3-0.x86_64
    node1: --> Processing Dependency: kubectl >= 1.6.0 for package: kubeadm-1.13.3-0.x86_64
    master: Resolving Dependencies
    master: --> Running transaction check
    master: ---> Package conntrack-tools.x86_64 0:1.4.4-4.el7 will be installed
    master: --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    node1: --> Processing Dependency: cri-tools >= 1.11.0 for package: kubeadm-1.13.3-0.x86_64
    node1: ---> Package kubelet.x86_64 0:1.13.3-0 will be installed
    node1: ---> Package screen.x86_64 0:4.1.0-0.25.20120314git3c2946.el7 will be installed
    node1: ---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
    node1: ---> Package tree.x86_64 0:1.6.0-10.el7 will be installed
    node1: --> Running transaction check
    node1: ---> Package cri-tools.x86_64 0:1.12.0-0 will be installed
    node1: ---> Package docker-client.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
    node1: ---> Package docker-common.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
    node1: --> Processing Dependency: skopeo-containers >= 1:0.1.26-2 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    node1: --> Processing Dependency: oci-umount >= 2:2.3.3-3 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    node1: --> Processing Dependency: oci-systemd-hook >= 1:0.1.4-9 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    node1: --> Processing Dependency: oci-register-machine >= 1:0-5.13 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    node1: --> Processing Dependency: container-storage-setup >= 0.9.0-1 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    node1: --> Processing Dependency: container-selinux >= 2:2.51-1 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    node1: --> Processing Dependency: atomic-registries for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    node1: ---> Package kubectl.x86_64 0:1.13.3-0 will be installed
    node1: ---> Package kubernetes-cni.x86_64 0:0.6.0-0 will be installed
    node1: ---> Package libnetfilter_cthelper.x86_64 0:1.0.0-9.el7 will be installed
    node1: ---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7 will be installed
    node1: ---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
    node1: ---> Package subscription-manager-rhsm-certificates.x86_64 0:1.21.10-3.el7.centos will be installed
    node1: --> Running transaction check
    node1: ---> Package atomic-registries.x86_64 1:1.22.1-26.gitb507039.el7.centos will be installed
    node1: --> Processing Dependency: python-yaml for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
    node1: --> Processing Dependency: python-setuptools for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
    node1: --> Processing Dependency: python-pytoml for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
    node1: ---> Package container-selinux.noarch 2:2.74-1.el7 will be installed
    node1: --> Processing Dependency: policycoreutils-python for package: 2:container-selinux-2.74-1.el7.noarch
    node1: ---> Package container-storage-setup.noarch 0:0.11.0-2.git5eaf76c.el7 will be installed
    node1: ---> Package containers-common.x86_64 1:0.1.31-8.gitb0b750d.el7.centos will be installed
    node1: ---> Package oci-register-machine.x86_64 1:0-6.git2b44233.el7 will be installed
    node1: ---> Package oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 will be installed
    node1: --> Processing Dependency: libyajl.so.2()(64bit) for package: 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64
    node1: ---> Package oci-umount.x86_64 2:2.3.4-2.git87f9237.el7 will be installed
    node1: --> Running transaction check
    node1: ---> Package PyYAML.x86_64 0:3.10-11.el7 will be installed
    node1: --> Processing Dependency: libyaml-0.so.2()(64bit) for package: PyYAML-3.10-11.el7.x86_64
    node1: ---> Package policycoreutils-python.x86_64 0:2.5-29.el7_6.1 will be installed
    node1: --> Processing Dependency: setools-libs >= 3.3.8-4 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node1: --> Processing Dependency: libsemanage-python >= 2.5-14 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node1: --> Processing Dependency: audit-libs-python >= 2.1.3-4 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node1: --> Processing Dependency: python-IPy for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node1: --> Processing Dependency: libqpol.so.1(VERS_1.4)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node1: --> Processing Dependency: libqpol.so.1(VERS_1.2)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node1: --> Processing Dependency: libcgroup for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node1: --> Processing Dependency: libapol.so.4(VERS_4.0)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node1: --> Processing Dependency: checkpolicy for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node1: --> Processing Dependency: libqpol.so.1()(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node1: --> Processing Dependency: libapol.so.4()(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    node1: ---> Package python-pytoml.noarch 0:0.1.14-1.git7dea353.el7 will be installed
    node1: ---> Package python-setuptools.noarch 0:0.9.8-7.el7 will be installed
    node1: --> Processing Dependency: python-backports-ssl_match_hostname for package: python-setuptools-0.9.8-7.el7.noarch
    node1: ---> Package yajl.x86_64 0:2.0.4-4.el7 will be installed
    node1: --> Running transaction check
    node1: ---> Package audit-libs-python.x86_64 0:2.8.4-4.el7 will be installed
    node1: ---> Package checkpolicy.x86_64 0:2.5-8.el7 will be installed
    node1: ---> Package libcgroup.x86_64 0:0.41-20.el7 will be installed
    node1: ---> Package libsemanage-python.x86_64 0:2.5-14.el7 will be installed
    node1: ---> Package libyaml.x86_64 0:0.1.4-11.el7_0 will be installed
    node1: ---> Package python-IPy.noarch 0:0.75-6.el7 will be installed
    node1: ---> Package python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 will be installed
    node1: --> Processing Dependency: python-ipaddress for package: python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch
    node1: --> Processing Dependency: python-backports for package: python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch
    node1: ---> Package setools-libs.x86_64 0:3.3.8-4.el7 will be installed
    node1: --> Running transaction check
    node1: ---> Package python-backports.x86_64 0:1.0-8.el7 will be installed
    node1: ---> Package python-ipaddress.noarch 0:1.0.16-2.el7 will be installed
    master: --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    master: --> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    master: --> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    master: --> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    master: --> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    master: ---> Package docker.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
    master: --> Processing Dependency: docker-common = 2:1.13.1-91.git07f3374.el7.centos for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
    master: --> Processing Dependency: docker-client = 2:1.13.1-91.git07f3374.el7.centos for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
    master: --> Processing Dependency: subscription-manager-rhsm-certificates for package: 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64
    master: ---> Package kubeadm.x86_64 0:1.13.3-0 will be installed
    master: --> Processing Dependency: kubernetes-cni >= 0.6.0 for package: kubeadm-1.13.3-0.x86_64
    master: --> Processing Dependency: kubectl >= 1.6.0 for package: kubeadm-1.13.3-0.x86_64
    node1: --> Finished Dependency Resolution
    master: --> Processing Dependency: cri-tools >= 1.11.0 for package: kubeadm-1.13.3-0.x86_64
    master: ---> Package kubelet.x86_64 0:1.13.3-0 will be installed
    master: ---> Package screen.x86_64 0:4.1.0-0.25.20120314git3c2946.el7 will be installed
    master: ---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
    master: ---> Package tree.x86_64 0:1.6.0-10.el7 will be installed
    master: --> Running transaction check
    master: ---> Package cri-tools.x86_64 0:1.12.0-0 will be installed
    master: ---> Package docker-client.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
    master: ---> Package docker-common.x86_64 2:1.13.1-91.git07f3374.el7.centos will be installed
    master: --> Processing Dependency: skopeo-containers >= 1:0.1.26-2 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    master: --> Processing Dependency: oci-umount >= 2:2.3.3-3 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    master: --> Processing Dependency: oci-systemd-hook >= 1:0.1.4-9 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    master: --> Processing Dependency: oci-register-machine >= 1:0-5.13 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    master: --> Processing Dependency: container-storage-setup >= 0.9.0-1 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    master: --> Processing Dependency: container-selinux >= 2:2.51-1 for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    master: --> Processing Dependency: atomic-registries for package: 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64
    master: ---> Package kubectl.x86_64 0:1.13.3-0 will be installed
    master: ---> Package kubernetes-cni.x86_64 0:0.6.0-0 will be installed
    master: ---> Package libnetfilter_cthelper.x86_64 0:1.0.0-9.el7 will be installed
    master: ---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7 will be installed
    master: ---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
    master: ---> Package subscription-manager-rhsm-certificates.x86_64 0:1.21.10-3.el7.centos will be installed
    master: --> Running transaction check
    master: ---> Package atomic-registries.x86_64 1:1.22.1-26.gitb507039.el7.centos will be installed
    master: --> Processing Dependency: python-yaml for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
    master: --> Processing Dependency: python-setuptools for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
    master: --> Processing Dependency: python-pytoml for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
    master: ---> Package container-selinux.noarch 2:2.74-1.el7 will be installed
    master: --> Processing Dependency: policycoreutils-python for package: 2:container-selinux-2.74-1.el7.noarch
    master: ---> Package container-storage-setup.noarch 0:0.11.0-2.git5eaf76c.el7 will be installed
    master: ---> Package containers-common.x86_64 1:0.1.31-8.gitb0b750d.el7.centos will be installed
    master: ---> Package oci-register-machine.x86_64 1:0-6.git2b44233.el7 will be installed
    master: ---> Package oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 will be installed
    master: --> Processing Dependency: libyajl.so.2()(64bit) for package: 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64
    master: ---> Package oci-umount.x86_64 2:2.3.4-2.git87f9237.el7 will be installed
    master: --> Running transaction check
    master: ---> Package PyYAML.x86_64 0:3.10-11.el7 will be installed
    master: --> Processing Dependency: libyaml-0.so.2()(64bit) for package: PyYAML-3.10-11.el7.x86_64
    master: ---> Package policycoreutils-python.x86_64 0:2.5-29.el7_6.1 will be installed
    master: --> Processing Dependency: setools-libs >= 3.3.8-4 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    master: --> Processing Dependency: libsemanage-python >= 2.5-14 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    master: --> Processing Dependency: audit-libs-python >= 2.1.3-4 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    master: --> Processing Dependency: python-IPy for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    master: --> Processing Dependency: libqpol.so.1(VERS_1.4)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    master: --> Processing Dependency: libqpol.so.1(VERS_1.2)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    master: --> Processing Dependency: libcgroup for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    master: --> Processing Dependency: libapol.so.4(VERS_4.0)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    master: --> Processing Dependency: checkpolicy for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    master: --> Processing Dependency: libqpol.so.1()(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    master: --> Processing Dependency: libapol.so.4()(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
    master: ---> Package python-pytoml.noarch 0:0.1.14-1.git7dea353.el7 will be installed
    master: ---> Package python-setuptools.noarch 0:0.9.8-7.el7 will be installed
    master: --> Processing Dependency: python-backports-ssl_match_hostname for package: python-setuptools-0.9.8-7.el7.noarch
    master: ---> Package yajl.x86_64 0:2.0.4-4.el7 will be installed
    master: --> Running transaction check
    master: ---> Package audit-libs-python.x86_64 0:2.8.4-4.el7 will be installed
    master: ---> Package checkpolicy.x86_64 0:2.5-8.el7 will be installed
    master: ---> Package libcgroup.x86_64 0:0.41-20.el7 will be installed
    master: ---> Package libsemanage-python.x86_64 0:2.5-14.el7 will be installed
    master: ---> Package libyaml.x86_64 0:0.1.4-11.el7_0 will be installed
    master: ---> Package python-IPy.noarch 0:0.75-6.el7 will be installed
    master: ---> Package python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 will be installed
    master: --> Processing Dependency: python-ipaddress for package: python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch
    master: --> Processing Dependency: python-backports for package: python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch
    node1: 
    node1: Dependencies Resolved
    master: ---> Package setools-libs.x86_64 0:3.3.8-4.el7 will be installed
    master: --> Running transaction check
    master: ---> Package python-backports.x86_64 0:1.0-8.el7 will be installed
    master: ---> Package python-ipaddress.noarch 0:1.0.16-2.el7 will be installed
    node1: 
    node1: ================================================================================
    node1:  Package              Arch   Version                           Repository  Size
    node1: ================================================================================
    node1: Installing:
    node1:  conntrack-tools      x86_64 1.4.4-4.el7                       base       186 k
    node1:  docker               x86_64 2:1.13.1-91.git07f3374.el7.centos extras      18 M
    node1:  kubeadm              x86_64 1.13.3-0                          kubernetes 7.9 M
    node1:  kubelet              x86_64 1.13.3-0                          kubernetes  21 M
    node1:  screen               x86_64 4.1.0-0.25.20120314git3c2946.el7  base       552 k
    node1:  socat                x86_64 1.7.3.2-2.el7                     base       290 k
    node1:  tree                 x86_64 1.6.0-10.el7                      base        46 k
    node1: Installing for dependencies:
    node1:  PyYAML               x86_64 3.10-11.el7                       base       153 k
    node1:  atomic-registries    x86_64 1:1.22.1-26.gitb507039.el7.centos extras      35 k
    node1:  audit-libs-python    x86_64 2.8.4-4.el7                       base        76 k
    node1:  checkpolicy          x86_64 2.5-8.el7                         base       295 k
    node1:  container-selinux    noarch 2:2.74-1.el7                      extras      38 k
    node1:  container-storage-setup
    node1:                       noarch 0.11.0-2.git5eaf76c.el7           extras      35 k
    node1:  containers-common    x86_64 1:0.1.31-8.gitb0b750d.el7.centos  extras      21 k
    node1:  cri-tools            x86_64 1.12.0-0                          kubernetes 4.2 M
    node1:  docker-client        x86_64 2:1.13.1-91.git07f3374.el7.centos extras     3.9 M
    node1:  docker-common        x86_64 2:1.13.1-91.git07f3374.el7.centos extras      95 k
    node1:  kubectl              x86_64 1.13.3-0                          kubernetes 8.5 M
    node1:  kubernetes-cni       x86_64 0.6.0-0                           kubernetes 8.6 M
    node1:  libcgroup            x86_64 0.41-20.el7                       base        66 k
    node1:  libnetfilter_cthelper
    node1:                       x86_64 1.0.0-9.el7                       base        18 k
    node1:  libnetfilter_cttimeout
    node1:                       x86_64 1.0.0-6.el7                       base        18 k
    node1:  libnetfilter_queue   x86_64 1.0.2-2.el7_2                     base        23 k
    node1:  libsemanage-python   x86_64 2.5-14.el7                        base       113 k
    node1:  libyaml              x86_64 0.1.4-11.el7_0                    base        55 k
    node1:  oci-register-machine x86_64 1:0-6.git2b44233.el7              extras     1.1 M
    node1:  oci-systemd-hook     x86_64 1:0.1.18-3.git8787307.el7_6       extras      34 k
    node1:  oci-umount           x86_64 2:2.3.4-2.git87f9237.el7          extras      32 k
    node1:  policycoreutils-python
    node1:                       x86_64 2.5-29.el7_6.1                    updates    456 k
    node1:  python-IPy           noarch 0.75-6.el7                        base        32 k
    node1:  python-backports     x86_64 1.0-8.el7                         base       5.8 k
    node1:  python-backports-ssl_match_hostname
    node1:                       noarch 3.5.0.1-1.el7                     base        13 k
    node1:  python-ipaddress     noarch 1.0.16-2.el7                      base        34 k
    node1:  python-pytoml        noarch 0.1.14-1.git7dea353.el7           extras      18 k
    node1:  python-setuptools    noarch 0.9.8-7.el7                       base       397 k
    node1:  setools-libs         x86_64 3.3.8-4.el7                       base       620 k
    node1:  subscription-manager-rhsm-certificates
    node1:                       x86_64 1.21.10-3.el7.centos              updates    207 k
    node1:  yajl                 x86_64 2.0.4-4.el7                       base        39 k
    node1: 
    node1: Transaction Summary
    node1: ================================================================================
    node1: Install  7 Packages (+31 Dependent packages)
    node1: Total download size: 76 M
    node1: Installed size: 321 M
    node1: Downloading packages:
    master: --> Finished Dependency Resolution
    master: 
    master: Dependencies Resolved
    master: 
    master: ================================================================================
    master:  Package              Arch   Version                           Repository  Size
    master: ================================================================================
    master: Installing:
    master:  conntrack-tools      x86_64 1.4.4-4.el7                       base       186 k
    master:  docker               x86_64 2:1.13.1-91.git07f3374.el7.centos extras      18 M
    master:  kubeadm              x86_64 1.13.3-0                          kubernetes 7.9 M
    master:  kubelet              x86_64 1.13.3-0                          kubernetes  21 M
    master:  screen               x86_64 4.1.0-0.25.20120314git3c2946.el7  base       552 k
    master:  socat                x86_64 1.7.3.2-2.el7                     base       290 k
    master:  tree                 x86_64 1.6.0-10.el7                      base        46 k
    master: Installing for dependencies:
    master:  PyYAML               x86_64 3.10-11.el7                       base       153 k
    master:  atomic-registries    x86_64 1:1.22.1-26.gitb507039.el7.centos extras      35 k
    master:  audit-libs-python    x86_64 2.8.4-4.el7                       base        76 k
    master:  checkpolicy          x86_64 2.5-8.el7                         base       295 k
    master:  container-selinux    noarch 2:2.74-1.el7                      extras      38 k
    master:  container-storage-setup
    master:                       noarch 0.11.0-2.git5eaf76c.el7           extras      35 k
    master:  containers-common    x86_64 1:0.1.31-8.gitb0b750d.el7.centos  extras      21 k
    master:  cri-tools            x86_64 1.12.0-0                          kubernetes 4.2 M
    master:  docker-client        x86_64 2:1.13.1-91.git07f3374.el7.centos extras     3.9 M
    master:  docker-common        x86_64 2:1.13.1-91.git07f3374.el7.centos extras      95 k
    master:  kubectl              x86_64 1.13.3-0                          kubernetes 8.5 M
    master:  kubernetes-cni       x86_64 0.6.0-0                           kubernetes 8.6 M
    master:  libcgroup            x86_64 0.41-20.el7                       base        66 k
    master:  libnetfilter_cthelper
    master:                       x86_64 1.0.0-9.el7                       base        18 k
    master:  libnetfilter_cttimeout
    master:                       x86_64 1.0.0-6.el7                       base        18 k
    master:  libnetfilter_queue   x86_64 1.0.2-2.el7_2                     base        23 k
    master:  libsemanage-python   x86_64 2.5-14.el7                        base       113 k
    master:  libyaml              x86_64 0.1.4-11.el7_0                    base        55 k
    master:  oci-register-machine x86_64 1:0-6.git2b44233.el7              extras     1.1 M
    master:  oci-systemd-hook     x86_64 1:0.1.18-3.git8787307.el7_6       extras      34 k
    master:  oci-umount           x86_64 2:2.3.4-2.git87f9237.el7          extras      32 k
    master:  policycoreutils-python
    master:                       x86_64 2.5-29.el7_6.1                    updates    456 k
    master:  python-IPy           noarch 0.75-6.el7                        base        32 k
    master:  python-backports     x86_64 1.0-8.el7                         base       5.8 k
    master:  python-backports-ssl_match_hostname
    master:                       noarch 3.5.0.1-1.el7                     base        13 k
    master:  python-ipaddress     noarch 1.0.16-2.el7                      base        34 k
    master:  python-pytoml        noarch 0.1.14-1.git7dea353.el7           extras      18 k
    master:  python-setuptools    noarch 0.9.8-7.el7                       base       397 k
    master:  setools-libs         x86_64 3.3.8-4.el7                       base       620 k
    master:  subscription-manager-rhsm-certificates
    master:                       x86_64 1.21.10-3.el7.centos              updates    207 k
    master:  yajl                 x86_64 2.0.4-4.el7                       base        39 k
    master: 
    master: Transaction Summary
    master: ================================================================================
    master: Install  7 Packages (+31 Dependent packages)
    master: Total download size: 76 M
    master: Installed size: 321 M
    master: Downloading packages:
    node2: --------------------------------------------------------------------------------
    node2: Total                                              4.3 MB/s |  76 MB  00:17     
    node2: Running transaction check
    node2: Running transaction test
    node2: Transaction test succeeded
    node2: Running transaction
    node2:   Installing : yajl-2.0.4-4.el7.x86_64                                     1/38
    node2:  
    node2:   Installing : 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64         2/38
    node2:  
    node2:   Installing : 2:oci-umount-2.3.4-2.git87f9237.el7.x86_64                  3/38
    node1: --------------------------------------------------------------------------------
    node1: Total                                              4.9 MB/s |  76 MB  00:15     
    node1: Running transaction check
    node2:  
    node2:   Installing : socat-1.7.3.2-2.el7.x86_64                                  4/38
    node2:  
    node2:   Installing : python-ipaddress-1.0.16-2.el7.noarch                        5/38
    node1: Running transaction test
    node2:  
    node2:   Installing : 1:containers-common-0.1.31-8.gitb0b750d.el7.centos.x86_6    6/38
    node2:  
    node2:   Installing : libyaml-0.1.4-11.el7_0.x86_64                               7/38
    node1: Transaction test succeeded
    node1: Running transaction
    node2:  
    node2:   Installing : PyYAML-3.10-11.el7.x86_64                                   8/38
    node2:  
    node2:   Installing : audit-libs-python-2.8.4-4.el7.x86_64                        9/38
    node1:   Installing : yajl-2.0.4-4.el7.x86_64                                     1/38
    node2:  
    node2:   Installing : python-backports-1.0-8.el7.x86_64                          10/38
    node2:  
    node2:   Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch   11/38
    node1:  
    node1:   Installing : 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64         2/38
    node1:  
    node1:   Installing : 2:oci-umount-2.3.4-2.git87f9237.el7.x86_64                  3/38
    node1:  
    node1:   Installing : socat-1.7.3.2-2.el7.x86_64                                  4/38
    node1:  
    node1:   Installing : python-ipaddress-1.0.16-2.el7.noarch                        5/38
    node1:  
    node1:   Installing : 1:containers-common-0.1.31-8.gitb0b750d.el7.centos.x86_6    6/38
    node2:  
    node2:   Installing : python-setuptools-0.9.8-7.el7.noarch                       12/38
    node1:  
    node1:   Installing : libyaml-0.1.4-11.el7_0.x86_64                               7/38
    node1:  
    node1:   Installing : PyYAML-3.10-11.el7.x86_64                                   8/38
    node1:  
    node1:   Installing : audit-libs-python-2.8.4-4.el7.x86_64                        9/38
    node1:  
    node1:   Installing : python-backports-1.0-8.el7.x86_64                          10/38
    node2:  
    node2:   Installing : 1:oci-register-machine-0-6.git2b44233.el7.x86_64           13/38
    node1:  
    node1:   Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch   11/38
    node2:  
    node2:   Installing : libsemanage-python-2.5-14.el7.x86_64                       14/38
    node1:  
    node1:   Installing : python-setuptools-0.9.8-7.el7.noarch                       12/38
    node1:  
    node1:   Installing : 1:oci-register-machine-0-6.git2b44233.el7.x86_64           13/38
    node1:  
    node1:   Installing : libsemanage-python-2.5-14.el7.x86_64                       14/38
    node2:  
    node2:   Installing : kubectl-1.13.3-0.x86_64                                    15/38
    node2:  
    node2:   Installing : setools-libs-3.3.8-4.el7.x86_64                            16/38
    node2:  
    node2:   Installing : python-pytoml-0.1.14-1.git7dea353.el7.noarch               17/38
    node2:  
    node2:   Installing : 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_   18/38
    node2:  
    node2:   Installing : python-IPy-0.75-6.el7.noarch                               19/38
    node2:  
    node2:   Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64                    20/38
    node1:  
    node1:   Installing : kubectl-1.13.3-0.x86_64                                    15/38
    node2:  
    node2:   Installing : checkpolicy-2.5-8.el7.x86_64                               21/38
    node2:  
    node2:   Installing : subscription-manager-rhsm-certificates-1.21.10-3.el7.cen   22/38
    node2:  
    node2:   Installing : libnetfilter_cthelper-1.0.0-9.el7.x86_64                   23/38
    node1:  
    node1:   Installing : setools-libs-3.3.8-4.el7.x86_64                            16/38
    node1:  
    node1:   Installing : python-pytoml-0.1.14-1.git7dea353.el7.noarch               17/38
    node1:  
    node1:   Installing : 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_   18/38
    node1:  
    node1:   Installing : python-IPy-0.75-6.el7.noarch                               19/38
    node1:  
    node1:   Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64                    20/38
    node1:  
    node1:   Installing : checkpolicy-2.5-8.el7.x86_64                               21/38
    node1:  
    node1:   Installing : subscription-manager-rhsm-certificates-1.21.10-3.el7.cen   22/38
    node1:  
    node1:   Installing : libnetfilter_cthelper-1.0.0-9.el7.x86_64                   23/38
    node2:  
    node2:   Installing : cri-tools-1.12.0-0.x86_64                                  24/38
    node2:  
    node2:   Installing : container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch     25/38
    node2:  
    node2:   Installing : libnetfilter_cttimeout-1.0.0-6.el7.x86_64                  26/38
    node2:  
    node2:   Installing : conntrack-tools-1.4.4-4.el7.x86_64                         27/38
    master: --------------------------------------------------------------------------------
    master: Total                                              4.0 MB/s |  76 MB  00:18     
    master: Running transaction check
    master: Running transaction test
    node1:  
    node1:   Installing : cri-tools-1.12.0-0.x86_64                                  24/38
    master: Transaction test succeeded
    master: Running transaction
    node1:  
    node1:   Installing : container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch     25/38
    node1:  
    node1:   Installing : libnetfilter_cttimeout-1.0.0-6.el7.x86_64                  26/38
    master:   Installing : yajl-2.0.4-4.el7.x86_64                                     1/38
    node1:  
    node1:   Installing : conntrack-tools-1.4.4-4.el7.x86_64                         27/38
    master:  
    master:   Installing : 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64         2/38
    master:  
    master:   Installing : 2:oci-umount-2.3.4-2.git87f9237.el7.x86_64                  3/38
    master:  
    master:   Installing : socat-1.7.3.2-2.el7.x86_64                                  4/38
    master:  
    master:   Installing : python-ipaddress-1.0.16-2.el7.noarch                        5/38
    master:  
    master:   Installing : 1:containers-common-0.1.31-8.gitb0b750d.el7.centos.x86_6    6/38
    master:  
    master:   Installing : libyaml-0.1.4-11.el7_0.x86_64                               7/38
    master:  
    master:   Installing : PyYAML-3.10-11.el7.x86_64                                   8/38
    master:  
    master:   Installing : audit-libs-python-2.8.4-4.el7.x86_64                        9/38
    master:  
    master:   Installing : python-backports-1.0-8.el7.x86_64                          10/38
    master:  
    master:   Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch   11/38
    master:  
    master:   Installing : python-setuptools-0.9.8-7.el7.noarch                       12/38
    master:  
    master:   Installing : 1:oci-register-machine-0-6.git2b44233.el7.x86_64           13/38
    master:  
    master:   Installing : libsemanage-python-2.5-14.el7.x86_64                       14/38
    node2:  
    node2:   Installing : kubernetes-cni-0.6.0-0.x86_64                              28/38
    node1:  
    node1:   Installing : kubernetes-cni-0.6.0-0.x86_64                              28/38
    master:  
    master:   Installing : kubectl-1.13.3-0.x86_64                                    15/38
    master:  
    master:   Installing : setools-libs-3.3.8-4.el7.x86_64                            16/38
    master:  
    master:   Installing : python-pytoml-0.1.14-1.git7dea353.el7.noarch               17/38
    master:  
    master:   Installing : 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_   18/38
    master:  
    master:   Installing : python-IPy-0.75-6.el7.noarch                               19/38
    master:  
    master:   Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64                    20/38
    master:  
    master:   Installing : checkpolicy-2.5-8.el7.x86_64                               21/38
    master:  
    master:   Installing : subscription-manager-rhsm-certificates-1.21.10-3.el7.cen   22/38
    master:  
    master:   Installing : libnetfilter_cthelper-1.0.0-9.el7.x86_64                   23/38
    master:  
    master:   Installing : cri-tools-1.12.0-0.x86_64                                  24/38
    master:  
    master:   Installing : container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch     25/38
    master:  
    master:   Installing : libnetfilter_cttimeout-1.0.0-6.el7.x86_64                  26/38
    master:  
    master:   Installing : conntrack-tools-1.4.4-4.el7.x86_64                         27/38
    node2:  
    node2:   Installing : kubelet-1.13.3-0.x86_64                                    29/38
    master:  
    master:   Installing : kubernetes-cni-0.6.0-0.x86_64                              28/38
    node2:  
    node2:   Installing : libcgroup-0.41-20.el7.x86_64                               30/38
    node1:  
    node1:   Installing : kubelet-1.13.3-0.x86_64                                    29/38
    node2:  
    node2:   Installing : policycoreutils-python-2.5-29.el7_6.1.x86_64               31/38
    node1:  
    node1:   Installing : libcgroup-0.41-20.el7.x86_64                               30/38
    node2:  
    node2:   Installing : 2:container-selinux-2.74-1.el7.noarch                      32/38
    node1:  
    node1:   Installing : policycoreutils-python-2.5-29.el7_6.1.x86_64               31/38
    node1:  
    node1:   Installing : 2:container-selinux-2.74-1.el7.noarch                      32/38
    master:  
    master:   Installing : kubelet-1.13.3-0.x86_64                                    29/38
    master:  
    master:   Installing : libcgroup-0.41-20.el7.x86_64                               30/38
    master:  
    master:   Installing : policycoreutils-python-2.5-29.el7_6.1.x86_64               31/38
    master:  
    master:   Installing : 2:container-selinux-2.74-1.el7.noarch                      32/38
    node2:  
    node2:   Installing : 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64     33/38
    node1:  
    node1:   Installing : 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64     33/38
    node2:  
    node2:   Installing : 2:docker-client-1.13.1-91.git07f3374.el7.centos.x86_64     34/38
    node1:  
    node1:   Installing : 2:docker-client-1.13.1-91.git07f3374.el7.centos.x86_64     34/38
    node2:  
    node2:   Installing : 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64            35/38
    node1:  
    node1:   Installing : 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64            35/38
    master:  
    master:   Installing : 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64     33/38
    master:  
    master:   Installing : 2:docker-client-1.13.1-91.git07f3374.el7.centos.x86_64     34/38
    node2:  
    node2:   Installing : kubeadm-1.13.3-0.x86_64                                    36/38
    node2:  
    node2:   Installing : screen-4.1.0-0.25.20120314git3c2946.el7.x86_64             37/38
    node2:  
    node2:   Installing : tree-1.6.0-10.el7.x86_64                                   38/38
    node1:  
    node1:   Installing : kubeadm-1.13.3-0.x86_64                                    36/38
    node2:  
    node2:   Verifying  : libcgroup-0.41-20.el7.x86_64                                1/38
    node2:  
    node2:   Verifying  : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch    2/38
    node2:  
    node2:   Verifying  : libnetfilter_cttimeout-1.0.0-6.el7.x86_64                   3/38
    node2:  
    node2:   Verifying  : container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch      4/38
    node2:  
    node2:   Verifying  : cri-tools-1.12.0-0.x86_64                                   5/38
    node2:  
    node2:   Verifying  : kubeadm-1.13.3-0.x86_64                                     6/38
    node2:  
    node2:   Verifying  : libnetfilter_cthelper-1.0.0-9.el7.x86_64                    7/38
    node2:  
    node2:   Verifying  : 2:container-selinux-2.74-1.el7.noarch                       8/38
    node2:  
    node2:   Verifying  : conntrack-tools-1.4.4-4.el7.x86_64                          9/38
    node2:  
    node2:   Verifying  : python-setuptools-0.9.8-7.el7.noarch                       10/38
    node2:  
    node2:   Verifying  : 2:docker-client-1.13.1-91.git07f3374.el7.centos.x86_64     11/38
    node2:  
    node2:   Verifying  : subscription-manager-rhsm-certificates-1.21.10-3.el7.cen   12/38
    node1:  
    node1:   Installing : screen-4.1.0-0.25.20120314git3c2946.el7.x86_64             37/38
    node2:  
    node2:   Verifying  : 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64        13/38
    node2:  
    node2:   Verifying  : 2:oci-umount-2.3.4-2.git87f9237.el7.x86_64                 14/38
    node2:  
    node2:   Verifying  : checkpolicy-2.5-8.el7.x86_64                               15/38
    node2:  
    node2:   Verifying  : libnetfilter_queue-1.0.2-2.el7_2.x86_64                    16/38
    node2:  
    node2:   Verifying  : tree-1.6.0-10.el7.x86_64                                   17/38
    node2:  
    node2:   Verifying  : python-IPy-0.75-6.el7.noarch                               18/38
    node1:  
    node1:   Installing : tree-1.6.0-10.el7.x86_64                                   38/38
    node2:  
    node2:   Verifying  : 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64     19/38
    node2:  
    node2:   Verifying  : kubelet-1.13.3-0.x86_64                                    20/38
    node2:  
    node2:   Verifying  : 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_   21/38
    node2:  
    node2:   Verifying  : python-pytoml-0.1.14-1.git7dea353.el7.noarch               22/38
    node2:  
    node2:   Verifying  : setools-libs-3.3.8-4.el7.x86_64                            23/38
    node2:  
    node2:   Verifying  : kubectl-1.13.3-0.x86_64                                    24/38
    node1:  
    node1:   Verifying  : libcgroup-0.41-20.el7.x86_64                                1/38
    node2:  
    node2:   Verifying  : policycoreutils-python-2.5-29.el7_6.1.x86_64               25/38
    node1:  
    node1:   Verifying  : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch    2/38
    node2:  
    node2:   Verifying  : libsemanage-python-2.5-14.el7.x86_64                       26/38
    node2:  
    node2:   Verifying  : 1:oci-register-machine-0-6.git2b44233.el7.x86_64           27/38
    node1:  
    node1:   Verifying  : libnetfilter_cttimeout-1.0.0-6.el7.x86_64                   3/38
    node2:  
    node2:   Verifying  : screen-4.1.0-0.25.20120314git3c2946.el7.x86_64             28/38
    node1:  
    node1:   Verifying  : container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch      4/38
    node1:  
    node1:   Verifying  : cri-tools-1.12.0-0.x86_64                                   5/38
    node2:  
    node2:   Verifying  : python-backports-1.0-8.el7.x86_64                          29/38
    node2:  
    node2:   Verifying  : yajl-2.0.4-4.el7.x86_64                                    30/38
    node1:  
    node1:   Verifying  : kubeadm-1.13.3-0.x86_64                                     6/38
    node2:  
    node2:   Verifying  : audit-libs-python-2.8.4-4.el7.x86_64                       31/38
    node1:  
    node1:   Verifying  : libnetfilter_cthelper-1.0.0-9.el7.x86_64                    7/38
    node2:  
    node2:   Verifying  : libyaml-0.1.4-11.el7_0.x86_64                              32/38
    node1:  
    node1:   Verifying  : 2:container-selinux-2.74-1.el7.noarch                       8/38
    node2:  
    node2:   Verifying  : 1:containers-common-0.1.31-8.gitb0b750d.el7.centos.x86_6   33/38
    node1:  
    node1:   Verifying  : conntrack-tools-1.4.4-4.el7.x86_64                          9/38
    node2:  
    node2:   Verifying  : python-ipaddress-1.0.16-2.el7.noarch                       34/38
    node1:  
    node1:   Verifying  : python-setuptools-0.9.8-7.el7.noarch                       10/38
    node2:  
    node2:   Verifying  : 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64            35/38
    node1:  
    node1:   Verifying  : 2:docker-client-1.13.1-91.git07f3374.el7.centos.x86_64     11/38
    node2:  
    node2:   Verifying  : PyYAML-3.10-11.el7.x86_64                                  36/38
    node1:  
    node1:   Verifying  : subscription-manager-rhsm-certificates-1.21.10-3.el7.cen   12/38
    node2:  
    node2:   Verifying  : kubernetes-cni-0.6.0-0.x86_64                              37/38
    node1:  
    node1:   Verifying  : 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64        13/38
    node2:  
    node2:   Verifying  : socat-1.7.3.2-2.el7.x86_64                                 38/38
    node1:  
    node1:   Verifying  : 2:oci-umount-2.3.4-2.git87f9237.el7.x86_64                 14/38
    node1:  
    node1:   Verifying  : checkpolicy-2.5-8.el7.x86_64                               15/38
    node1:  
    node1:   Verifying  : libnetfilter_queue-1.0.2-2.el7_2.x86_64                    16/38
    node1:  
    node1:   Verifying  : tree-1.6.0-10.el7.x86_64                                   17/38
    node1:  
    node1:   Verifying  : python-IPy-0.75-6.el7.noarch                               18/38
    node1:  
    node1:   Verifying  : 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64     19/38
    node1:  
    node1:   Verifying  : kubelet-1.13.3-0.x86_64                                    20/38
    node1:  
    node1:   Verifying  : 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_   21/38
    node1:  
    node1:   Verifying  : python-pytoml-0.1.14-1.git7dea353.el7.noarch               22/38
    node1:  
    node1:   Verifying  : setools-libs-3.3.8-4.el7.x86_64                            23/38
    node1:  
    node1:   Verifying  : kubectl-1.13.3-0.x86_64                                    24/38
    node2:  
    node2: 
    node2: Installed:
    node2:   conntrack-tools.x86_64 0:1.4.4-4.el7                                          
    node2:   docker.x86_64 2:1.13.1-91.git07f3374.el7.centos                               
    node2:   kubeadm.x86_64 0:1.13.3-0                                                     
    node2:   kubelet.x86_64 0:1.13.3-0                                                     
    node2:   screen.x86_64 0:4.1.0-0.25.20120314git3c2946.el7                              
    node2:   socat.x86_64 0:1.7.3.2-2.el7                                                  
    node2:   tree.x86_64 0:1.6.0-10.el7                                                    
    node2: 
    node2: Dependency Installed:
    node2:   PyYAML.x86_64 0:3.10-11.el7                                                   
    node2:   atomic-registries.x86_64 1:1.22.1-26.gitb507039.el7.centos                    
    node2:   audit-libs-python.x86_64 0:2.8.4-4.el7                                        
    node2:   checkpolicy.x86_64 0:2.5-8.el7                                                
    node2:   container-selinux.noarch 2:2.74-1.el7                                         
    node2:   container-storage-setup.noarch 0:0.11.0-2.git5eaf76c.el7                      
    node2:   containers-common.x86_64 1:0.1.31-8.gitb0b750d.el7.centos                     
    node2:   cri-tools.x86_64 0:1.12.0-0                                                   
    node2:   docker-client.x86_64 2:1.13.1-91.git07f3374.el7.centos                        
    node2:   docker-common.x86_64 2:1.13.1-91.git07f3374.el7.centos                        
    node2:   kubectl.x86_64 0:1.13.3-0                                                     
    node2:   kubernetes-cni.x86_64 0:0.6.0-0                                               
    node2:   libcgroup.x86_64 0:0.41-20.el7                                                
    node2:   libnetfilter_cthelper.x86_64 0:1.0.0-9.el7                                    
    node2:   libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7                                   
    node2:   libnetfilter_queue.x86_64 0:1.0.2-2.el7_2                                     
    node2:   libsemanage-python.x86_64 0:2.5-14.el7                                        
    node2:   libyaml.x86_64 0:0.1.4-11.el7_0                                               
    node2:   oci-register-machine.x86_64 1:0-6.git2b44233.el7                              
    node2:   oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6                           
    node2:   oci-umount.x86_64 2:2.3.4-2.git87f9237.el7                                    
    node2:   policycoreutils-python.x86_64 0:2.5-29.el7_6.1                                
    node2:   python-IPy.noarch 0:0.75-6.el7                                                
    node2:   python-backports.x86_64 0:1.0-8.el7                                           
    node2:   python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7                    
    node2:   python-ipaddress.noarch 0:1.0.16-2.el7                                        
    node2:   python-pytoml.noarch 0:0.1.14-1.git7dea353.el7                                
    node2:   python-setuptools.noarch 0:0.9.8-7.el7                                        
    node2:   setools-libs.x86_64 0:3.3.8-4.el7                                             
    node2:   subscription-manager-rhsm-certificates.x86_64 0:1.21.10-3.el7.centos          
    node2:   yajl.x86_64 0:2.0.4-4.el7                                                     
    node2: Complete!
    node1:  
    node1:   Verifying  : policycoreutils-python-2.5-29.el7_6.1.x86_64               25/38
    node1:  
    node1:   Verifying  : libsemanage-python-2.5-14.el7.x86_64                       26/38
    node1:  
    node1:   Verifying  : 1:oci-register-machine-0-6.git2b44233.el7.x86_64           27/38
    node1:  
    node1:   Verifying  : screen-4.1.0-0.25.20120314git3c2946.el7.x86_64             28/38
    node1:  
    node1:   Verifying  : python-backports-1.0-8.el7.x86_64                          29/38
    node2: ++ systemctl enable kubelet
    node1:  
    node1:   Verifying  : yajl-2.0.4-4.el7.x86_64                                    30/38
    node2: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
    node1:  
    node1:   Verifying  : audit-libs-python-2.8.4-4.el7.x86_64                       31/38
    node1:  
    node1:   Verifying  : libyaml-0.1.4-11.el7_0.x86_64                              32/38
    node1:  
    node1:   Verifying  : 1:containers-common-0.1.31-8.gitb0b750d.el7.centos.x86_6   33/38
    node1:  
    node1:   Verifying  : python-ipaddress-1.0.16-2.el7.noarch                       34/38
    node1:  
    node1:   Verifying  : 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64            35/38
    node1:  
    node1:   Verifying  : PyYAML-3.10-11.el7.x86_64                                  36/38
    node1:  
    node1:   Verifying  : kubernetes-cni-0.6.0-0.x86_64                              37/38
    node2: ++ systemctl start kubelet
    node1:  
    node1:   Verifying  : socat-1.7.3.2-2.el7.x86_64                                 38/38
    node2: ++ systemctl enable docker
    node2: Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
    node2: ++ systemctl start docker
    node1:  
    node1: 
    node1: Installed:
    node1:   conntrack-tools.x86_64 0:1.4.4-4.el7                                          
    node1:   docker.x86_64 2:1.13.1-91.git07f3374.el7.centos                               
    node1:   kubeadm.x86_64 0:1.13.3-0                                                     
    node1:   kubelet.x86_64 0:1.13.3-0                                                     
    node1:   screen.x86_64 0:4.1.0-0.25.20120314git3c2946.el7                              
    node1:   socat.x86_64 0:1.7.3.2-2.el7                                                  
    node1:   tree.x86_64 0:1.6.0-10.el7                                                    
    node1: 
    node1: Dependency Installed:
    node1:   PyYAML.x86_64 0:3.10-11.el7                                                   
    node1:   atomic-registries.x86_64 1:1.22.1-26.gitb507039.el7.centos                    
    node1:   audit-libs-python.x86_64 0:2.8.4-4.el7                                        
    node1:   checkpolicy.x86_64 0:2.5-8.el7                                                
    node1:   container-selinux.noarch 2:2.74-1.el7                                         
    node1:   container-storage-setup.noarch 0:0.11.0-2.git5eaf76c.el7                      
    node1:   containers-common.x86_64 1:0.1.31-8.gitb0b750d.el7.centos                     
    node1:   cri-tools.x86_64 0:1.12.0-0                                                   
    node1:   docker-client.x86_64 2:1.13.1-91.git07f3374.el7.centos                        
    node1:   docker-common.x86_64 2:1.13.1-91.git07f3374.el7.centos                        
    node1:   kubectl.x86_64 0:1.13.3-0                                                     
    node1:   kubernetes-cni.x86_64 0:0.6.0-0                                               
    node1:   libcgroup.x86_64 0:0.41-20.el7                                                
    node1:   libnetfilter_cthelper.x86_64 0:1.0.0-9.el7                                    
    node1:   libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7                                   
    node1:   libnetfilter_queue.x86_64 0:1.0.2-2.el7_2                                     
    node1:   libsemanage-python.x86_64 0:2.5-14.el7                                        
    node1:   libyaml.x86_64 0:0.1.4-11.el7_0                                               
    node1:   oci-register-machine.x86_64 1:0-6.git2b44233.el7                              
    node1:   oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6                           
    node1:   oci-umount.x86_64 2:2.3.4-2.git87f9237.el7                                    
    node1:   policycoreutils-python.x86_64 0:2.5-29.el7_6.1                                
    node1:   python-IPy.noarch 0:0.75-6.el7                                                
    node1:   python-backports.x86_64 0:1.0-8.el7                                           
    node1:   python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7                    
    node1:   python-ipaddress.noarch 0:1.0.16-2.el7                                        
    node1:   python-pytoml.noarch 0:0.1.14-1.git7dea353.el7                                
    node1:   python-setuptools.noarch 0:0.9.8-7.el7                                        
    node1:   setools-libs.x86_64 0:3.3.8-4.el7                                             
    node1:   subscription-manager-rhsm-certificates.x86_64 0:1.21.10-3.el7.centos          
    node1:   yajl.x86_64 0:2.0.4-4.el7                                                     
    node1: Complete!
    node1: ++ systemctl enable kubelet
    node1: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
    node1: ++ systemctl start kubelet
    node1: ++ systemctl enable docker
    node1: Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
    node1: ++ systemctl start docker
    master:  
    master:   Installing : 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64            35/38
==> node2: Running provisioner: shell...
==> node1: Running provisioner: shell...
    node2: Running: inline script
    node2: Client:
    node2:  Version:         1.13.1
    node2:  API version:     1.26
    node2:  Package version: docker-1.13.1-91.git07f3374.el7.centos.x86_64
    node2:  Go version:      go1.10.3
    node2:  Git commit:      07f3374/1.13.1
    node2:  Built:           Wed Feb 13 17:10:12 2019
    node2:  OS/Arch:         linux/amd64
    node2: 
    node2: Server:
    node2:  Version:         1.13.1
    node2:  API version:     1.26 (minimum version 1.12)
    node2:  Package version: docker-1.13.1-91.git07f3374.el7.centos.x86_64
    node2:  Go version:      go1.10.3
    node2:  Git commit:      07f3374/1.13.1
    node2:  Built:           Wed Feb 13 17:10:12 2019
    node2:  OS/Arch:         linux/amd64
    node2:  Experimental:    false
    node2: kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:05:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
    node2: Kubernetes v1.13.3
==> node2: Running provisioner: diskandreboot...
Halting vm node2 (0c3820b6-51da-4adb-944e-12af8663ccd8)
    node1: Running: inline script
    node1: Client:
    node1:  Version:         1.13.1
    node1:  API version:     1.26
    node1:  Package version: docker-1.13.1-91.git07f3374.el7.centos.x86_64
    node1:  Go version:      go1.10.3
    node1:  Git commit:      07f3374/1.13.1
    node1:  Built:           Wed Feb 13 17:10:12 2019
    node1:  OS/Arch:         linux/amd64
    node1: 
    node1: Server:
    node1:  Version:         1.13.1
    node1:  API version:     1.26 (minimum version 1.12)
    node1:  Package version: docker-1.13.1-91.git07f3374.el7.centos.x86_64
    node1:  Go version:      go1.10.3
    node1:  Git commit:      07f3374/1.13.1
    node1:  Built:           Wed Feb 13 17:10:12 2019
    node1:  OS/Arch:         linux/amd64
    node1:  Experimental:    false
    node1: kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:05:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
    node1: Kubernetes v1.13.3
==> node1: Running provisioner: diskandreboot...
Halting vm node1 (37795969-8969-42c3-b16d-a685ad88bad6)
    master:  
    master:   Installing : kubeadm-1.13.3-0.x86_64                                    36/38
    master:  
    master:   Installing : screen-4.1.0-0.25.20120314git3c2946.el7.x86_64             37/38
    master:  
    master:   Installing : tree-1.6.0-10.el7.x86_64                                   38/38
    master:  
    master:   Verifying  : libcgroup-0.41-20.el7.x86_64                                1/38
    master:  
    master:   Verifying  : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch    2/38
    master:  
    master:   Verifying  : libnetfilter_cttimeout-1.0.0-6.el7.x86_64                   3/38
    master:  
    master:   Verifying  : container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch      4/38
    master:  
    master:   Verifying  : cri-tools-1.12.0-0.x86_64                                   5/38
    master:  
    master:   Verifying  : kubeadm-1.13.3-0.x86_64                                     6/38
    master:  
    master:   Verifying  : libnetfilter_cthelper-1.0.0-9.el7.x86_64                    7/38
    master:  
    master:   Verifying  : 2:container-selinux-2.74-1.el7.noarch                       8/38
    master:  
    master:   Verifying  : conntrack-tools-1.4.4-4.el7.x86_64                          9/38
    master:  
    master:   Verifying  : python-setuptools-0.9.8-7.el7.noarch                       10/38
    master:  
    master:   Verifying  : 2:docker-client-1.13.1-91.git07f3374.el7.centos.x86_64     11/38
    master:  
    master:   Verifying  : subscription-manager-rhsm-certificates-1.21.10-3.el7.cen   12/38
    master:  
    master:   Verifying  : 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64        13/38
    master:  
    master:   Verifying  : 2:oci-umount-2.3.4-2.git87f9237.el7.x86_64                 14/38
==> node2: Attempting graceful shutdown of VM...
    master:  
    master:   Verifying  : checkpolicy-2.5-8.el7.x86_64                               15/38
    master:  
    master:   Verifying  : libnetfilter_queue-1.0.2-2.el7_2.x86_64                    16/38
    master:  
    master:   Verifying  : tree-1.6.0-10.el7.x86_64                                   17/38
    master:  
    master:   Verifying  : python-IPy-0.75-6.el7.noarch                               18/38
    master:  
    master:   Verifying  : 2:docker-common-1.13.1-91.git07f3374.el7.centos.x86_64     19/38
    master:  
    master:   Verifying  : kubelet-1.13.3-0.x86_64                                    20/38
    master:  
    master:   Verifying  : 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_   21/38
    master:  
    master:   Verifying  : python-pytoml-0.1.14-1.git7dea353.el7.noarch               22/38
    master:  
    master:   Verifying  : setools-libs-3.3.8-4.el7.x86_64                            23/38
    master:  
    master:   Verifying  : kubectl-1.13.3-0.x86_64                                    24/38
    master:  
    master:   Verifying  : policycoreutils-python-2.5-29.el7_6.1.x86_64               25/38
    master:  
    master:   Verifying  : libsemanage-python-2.5-14.el7.x86_64                       26/38
    master:  
    master:   Verifying  : 1:oci-register-machine-0-6.git2b44233.el7.x86_64           27/38
    master:  
    master:   Verifying  : screen-4.1.0-0.25.20120314git3c2946.el7.x86_64             28/38
    master:  
    master:   Verifying  : python-backports-1.0-8.el7.x86_64                          29/38
    master:  
    master:   Verifying  : yajl-2.0.4-4.el7.x86_64                                    30/38
    master:  
    master:   Verifying  : audit-libs-python-2.8.4-4.el7.x86_64                       31/38
    master:  
    master:   Verifying  : libyaml-0.1.4-11.el7_0.x86_64                              32/38
    master:  
    master:   Verifying  : 1:containers-common-0.1.31-8.gitb0b750d.el7.centos.x86_6   33/38
==> node1: Attempting graceful shutdown of VM...
    master:  
    master:   Verifying  : python-ipaddress-1.0.16-2.el7.noarch                       34/38
    master:  
    master:   Verifying  : 2:docker-1.13.1-91.git07f3374.el7.centos.x86_64            35/38
    master:  
    master:   Verifying  : PyYAML-3.10-11.el7.x86_64                                  36/38
    master:  
    master:   Verifying  : kubernetes-cni-0.6.0-0.x86_64                              37/38
    master:  
    master:   Verifying  : socat-1.7.3.2-2.el7.x86_64                                 38/38
    master:  
    master: 
    master: Installed:
    master:   conntrack-tools.x86_64 0:1.4.4-4.el7                                          
    master:   docker.x86_64 2:1.13.1-91.git07f3374.el7.centos                               
    master:   kubeadm.x86_64 0:1.13.3-0                                                     
    master:   kubelet.x86_64 0:1.13.3-0                                                     
    master:   screen.x86_64 0:4.1.0-0.25.20120314git3c2946.el7                              
    master:   socat.x86_64 0:1.7.3.2-2.el7                                                  
    master:   tree.x86_64 0:1.6.0-10.el7                                                    
    master: 
    master: Dependency Installed:
    master:   PyYAML.x86_64 0:3.10-11.el7                                                   
    master:   atomic-registries.x86_64 1:1.22.1-26.gitb507039.el7.centos                    
    master:   audit-libs-python.x86_64 0:2.8.4-4.el7                                        
    master:   checkpolicy.x86_64 0:2.5-8.el7                                                
    master:   container-selinux.noarch 2:2.74-1.el7                                         
    master:   container-storage-setup.noarch 0:0.11.0-2.git5eaf76c.el7                      
    master:   containers-common.x86_64 1:0.1.31-8.gitb0b750d.el7.centos                     
    master:   cri-tools.x86_64 0:1.12.0-0                                                   
    master:   docker-client.x86_64 2:1.13.1-91.git07f3374.el7.centos                        
    master:   docker-common.x86_64 2:1.13.1-91.git07f3374.el7.centos                        
    master:   kubectl.x86_64 0:1.13.3-0                                                     
    master:   kubernetes-cni.x86_64 0:0.6.0-0                                               
    master:   libcgroup.x86_64 0:0.41-20.el7                                                
    master:   libnetfilter_cthelper.x86_64 0:1.0.0-9.el7                                    
    master:   libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7                                   
    master:   libnetfilter_queue.x86_64 0:1.0.2-2.el7_2                                     
    master:   libsemanage-python.x86_64 0:2.5-14.el7                                        
    master:   libyaml.x86_64 0:0.1.4-11.el7_0                                               
    master:   oci-register-machine.x86_64 1:0-6.git2b44233.el7                              
    master:   oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6                           
    master:   oci-umount.x86_64 2:2.3.4-2.git87f9237.el7                                    
    master:   policycoreutils-python.x86_64 0:2.5-29.el7_6.1                                
    master:   python-IPy.noarch 0:0.75-6.el7                                                
    master:   python-backports.x86_64 0:1.0-8.el7                                           
    master:   python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7                    
    master:   python-ipaddress.noarch 0:1.0.16-2.el7                                        
    master:   python-pytoml.noarch 0:0.1.14-1.git7dea353.el7                                
    master:   python-setuptools.noarch 0:0.9.8-7.el7                                        
    master:   setools-libs.x86_64 0:3.3.8-4.el7                                             
    master:   subscription-manager-rhsm-certificates.x86_64 0:1.21.10-3.el7.centos          
    master:   yajl.x86_64 0:2.0.4-4.el7                                                     
    master: Complete!
    master: ++ systemctl enable kubelet
    master: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
    master: ++ systemctl start kubelet
    master: ++ systemctl enable docker
    master: Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
    master: ++ systemctl start docker
==> master: Running provisioner: shell...
    master: Running: inline script
    master: Client:
    master:  Version:         1.13.1
    master:  API version:     1.26
    master:  Package version: docker-1.13.1-91.git07f3374.el7.centos.x86_64
    master:  Go version:      go1.10.3
    master:  Git commit:      07f3374/1.13.1
    master:  Built:           Wed Feb 13 17:10:12 2019
    master:  OS/Arch:         linux/amd64
    master: 
    master: Server:
    master:  Version:         1.13.1
    master:  API version:     1.26 (minimum version 1.12)
    master:  Package version: docker-1.13.1-91.git07f3374.el7.centos.x86_64
    master:  Go version:      go1.10.3
    master:  Git commit:      07f3374/1.13.1
    master:  Built:           Wed Feb 13 17:10:12 2019
    master:  OS/Arch:         linux/amd64
    master:  Experimental:    false
    master: kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:05:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
    master: Kubernetes v1.13.3
==> master: Running provisioner: diskandreboot...
Halting vm master (3c95d4ed-547e-4fba-ad48-88234a29e313)
==> master: Attempting graceful shutdown of VM...
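    # NOTE: The custom `diskandreboot` provisioner halts each VM, attaches an
    # extra data disk (the "Adding storage controller" / "Creating disk" /
    # "Medium created" lines below are VBoxManage output), and boots the VM
    # again. The halt/restart cycle here is expected, not a failure.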
Adding storage controller
Added storage controller
Adding disk 1
Creating disk 1 for node1
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: d2d214e6-bdfa-44b3-9d7c-9c90fa3d7af5
Created disk 1 for node1
Added disk 1
Starting vm node1
==> node1: Checking if box 'generic/centos7' version '1.9.2' is up to date...
==> node1: Clearing any previously set forwarded ports...
==> node1: Clearing any previously set network interfaces...
==> node1: Preparing network interfaces based on configuration...
    node1: Adapter 1: nat
    node1: Adapter 2: hostonly
==> node1: Forwarding ports...
    node1: 22 (guest) => 2222 (host) (adapter 1)
==> node1: Running 'pre-boot' VM customizations...
==> node1: Booting VM...
==> node1: Waiting for machine to boot. This may take a few minutes...
Adding storage controller
Added storage controller
Adding disk 1
Creating disk 1 for node2
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: fb913b22-4b9b-4dc4-b57b-510543ee41de
Created disk 1 for node2
Added disk 1
Starting vm node2
==> node2: Checking if box 'generic/centos7' version '1.9.2' is up to date...
==> node2: Clearing any previously set forwarded ports...
==> node2: Clearing any previously set network interfaces...
==> node2: Preparing network interfaces based on configuration...
    node2: Adapter 1: nat
    node2: Adapter 2: hostonly
==> node2: Forwarding ports...
    node2: 22 (guest) => 2201 (host) (adapter 1)
==> node2: Running 'pre-boot' VM customizations...
==> node2: Booting VM...
==> node2: Waiting for machine to boot. This may take a few minutes...
Adding storage controller
Added storage controller
Adding disk 1
Creating disk 1 for master
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 861d2382-f01a-4e53-8584-d2510fc13812
Created disk 1 for master
Added disk 1
Starting vm master
==> master: Checking if box 'generic/centos7' version '1.9.2' is up to date...
==> master: Clearing any previously set forwarded ports...
==> master: Clearing any previously set network interfaces...
==> master: Preparing network interfaces based on configuration...
    master: Adapter 1: nat
    master: Adapter 2: hostonly
==> master: Forwarding ports...
    master: 22 (guest) => 2200 (host) (adapter 1)
==> master: Running 'pre-boot' VM customizations...
==> master: Booting VM...
==> master: Waiting for machine to boot. This may take a few minutes...
==> node1: Machine booted and ready!
==> node1: Checking for guest additions in VM...
    node1: The guest additions on this VM do not match the installed version of
    node1: VirtualBox! In most cases this is fine, but in rare cases it can
    node1: prevent things such as shared folders from working properly. If you see
    node1: shared folder errors, please make sure the guest additions within the
    node1: virtual machine match the version of VirtualBox you have installed on
    node1: your host and reload your VM.
    node1: 
    node1: Guest Additions Version: 5.1.38
    node1: VirtualBox Version: 6.0
==> node1: Setting hostname...
==> node1: Configuring and enabling network interfaces...
==> node1: Rsyncing folder: /Users/rameshkumar/k8s/k8s-vagrant-multi-node/data/centos-node1/ => /data
==> node1: Machine not provisioned because `--no-provision` is specified.
==> master: Machine booted and ready!
==> master: Checking for guest additions in VM...
    master: The guest additions on this VM do not match the installed version of
    master: VirtualBox! In most cases this is fine, but in rare cases it can
    master: prevent things such as shared folders from working properly. If you see
    master: shared folder errors, please make sure the guest additions within the
    master: virtual machine match the version of VirtualBox you have installed on
    master: your host and reload your VM.
    master: 
    master: Guest Additions Version: 5.1.38
    master: VirtualBox Version: 6.0
==> master: Setting hostname...
==> master: Configuring and enabling network interfaces...
==> master: Rsyncing folder: /Users/rameshkumar/k8s/k8s-vagrant-multi-node/data/centos-master/ => /data
==> node2: Machine booted and ready!
==> node2: Checking for guest additions in VM...
    node2: The guest additions on this VM do not match the installed version of
    node2: VirtualBox! In most cases this is fine, but in rare cases it can
    node2: prevent things such as shared folders from working properly. If you see
    node2: shared folder errors, please make sure the guest additions within the
    node2: virtual machine match the version of VirtualBox you have installed on
    node2: your host and reload your VM.
    node2: 
    node2: Guest Additions Version: 5.1.38
    node2: VirtualBox Version: 6.0
==> node2: Setting hostname...
==> master: Machine not provisioned because `--no-provision` is specified.
==> node2: Configuring and enabling network interfaces...
==> node2: Rsyncing folder: /Users/rameshkumar/k8s/k8s-vagrant-multi-node/data/centos-node2/ => /data
==> node2: Machine not provisioned because `--no-provision` is specified.
==> node1: Running provisioner: shell...
    node1: Running: inline script
    node1: ++ kubeadm reset -f
    node1: [preflight] running pre-flight checks
    node1: [reset] no etcd config found. Assuming external etcd
    node1: [reset] please manually reset etcd to prevent further issues
    node1: [reset] stopping the kubelet service
    node1: [reset] unmounting mounted directories in "/var/lib/kubelet"
    node1: [reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
    node1: [reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
    node1: [reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
    node1: 
    node1: The reset process does not reset or clean up iptables rules or IPVS tables.
    node1: If you wish to reset iptables, you must do so manually.
    node1: For example: 
    node1: iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
    node1: 
    node1: If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
    node1: to reset your system's IPVS tables.
    node1: ++ retries=5
    node1: ++ (( i=0 ))
    node1: ++ (( i<retries ))
    node1: ++ kubeadm join --ignore-preflight-errors=SystemVerification --discovery-token-unsafe-skip-ca-verification --token ldvls1.07bnzoriqs1ruyrb 192.168.26.10:6443
    node1: [preflight] Running pre-flight checks
    node1: [discovery] Trying to connect to API Server "192.168.26.10:6443"
    node1: [discovery] Created cluster-info discovery client, requesting info from "https://192.168.26.10:6443"
    node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
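    # NOTE: The repeated "connect: no route to host" messages that follow are
    # expected: `kubeadm join`'s discovery step keeps retrying until the
    # master's API server at 192.168.26.10:6443 becomes reachable, which only
    # happens once `kubeadm init` on the master has brought up the control
    # plane. The `++ retries=5` / `++ (( i<retries ))` trace above suggests
    # the provisioning script additionally wraps the join in an outer retry
    # loop; a minimal sketch (loop shape and sleep interval are assumptions,
    # not the project's exact script):
    #   retries=5
    #   for (( i=0; i<retries; i++ )); do
    #       kubeadm join ... && break
    #       sleep 10
    #   done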
==> master: Running provisioner: shell...
    master: Running: inline script
    master: ++ kubeadm reset -f
    master: [preflight] running pre-flight checks
    master: [reset] no etcd config found. Assuming external etcd
    master: [reset] please manually reset etcd to prevent further issues
    master: [reset] stopping the kubelet service
    master: [reset] unmounting mounted directories in "/var/lib/kubelet"
    master: [reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
    master: [reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
    master: [reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
    master: 
    master: The reset process does not reset or clean up iptables rules or IPVS tables.
    master: If you wish to reset iptables, you must do so manually.
    master: For example: 
    master: iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
    master: 
    master: If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
    master: to reset your system's IPVS tables.
    master: ++ retries=5
    master: ++ (( i=0 ))
    master: ++ (( i<retries ))
    master: ++ kubeadm init --kubernetes-version=1.13.3 --ignore-preflight-errors=SystemVerification --apiserver-advertise-address=192.168.26.10 --pod-network-cidr=10.244.0.0/16 --token ldvls1.07bnzoriqs1ruyrb --token-ttl 0
    master: [init] Using Kubernetes version: v1.13.3
    master: [preflight] Running pre-flight checks
    master: 	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
    master: [preflight] Pulling images required for setting up a Kubernetes cluster
    master: [preflight] This might take a minute or two, depending on the speed of your internet connection
    master: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
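    # NOTE on the `kubeadm init` flags above: --apiserver-advertise-address
    # pins the API server to the master's host-only IP (192.168.26.10),
    # --pod-network-cidr=10.244.0.0/16 matches the pod network range the CNI
    # add-on expects, and --token-ttl 0 makes the bootstrap token
    # non-expiring so worker nodes can join at any time.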
    node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
==> node2: Running provisioner: shell...
    node2: Running: inline script
    node2: ++ kubeadm reset -f
    node2: [preflight] running pre-flight checks
    node2: [reset] no etcd config found. Assuming external etcd
    node2: [reset] please manually reset etcd to prevent further issues
    node2: [reset] stopping the kubelet service
    node2: [reset] unmounting mounted directories in "/var/lib/kubelet"
    node2: [reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
    node2: [reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
    node2: [reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
    node2: 
    node2: The reset process does not reset or clean up iptables rules or IPVS tables.
    node2: If you wish to reset iptables, you must do so manually.
    node2: For example: 
    node2: iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
    node2: 
    node2: If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
    node2: to reset your system's IPVS tables.
    node2: ++ retries=5
    node2: ++ (( i=0 ))
    node2: ++ (( i<retries ))
    node2: ++ kubeadm join --ignore-preflight-errors=SystemVerification --discovery-token-unsafe-skip-ca-verification --token ldvls1.07bnzoriqs1ruyrb 192.168.26.10:6443
    node2: [preflight] Running pre-flight checks
    node2: [discovery] Trying to connect to API Server "192.168.26.10:6443"
    node2: [discovery] Created cluster-info discovery client, requesting info from "https://192.168.26.10:6443"
    node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
    node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
    node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
    node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
    node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
    node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
    node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
    node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
    node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
    node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
    node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
    node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
    master: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    master: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    master: [kubelet-start] Activating the kubelet service
    master: [certs] Using certificateDir folder "/etc/kubernetes/pki"
    master: [certs] Generating "front-proxy-ca" certificate and key
    master: [certs] Generating "front-proxy-client" certificate and key
    master: [certs] Generating "ca" certificate and key
    master: [certs] Generating "apiserver-kubelet-client" certificate and key
    node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
    master: [certs] Generating "apiserver" certificate and key
    master: [certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.26.10]
    master: [certs] Generating "etcd/ca" certificate and key
    master: [certs] Generating "etcd/server" certificate and key
    master: [certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.26.10 127.0.0.1 ::1]
    master: [certs] Generating "apiserver-etcd-client" certificate and key
    master: [certs] Generating "etcd/peer" certificate and key
    master: [certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.26.10 127.0.0.1 ::1]
    master: [certs] Generating "etcd/healthcheck-client" certificate and key
    master: [certs] Generating "sa" key and public key
    master: [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    master: [kubeconfig] Writing "admin.conf" kubeconfig file
    master: [kubeconfig] Writing "kubelet.conf" kubeconfig file
    master: [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    master: [kubeconfig] Writing "scheduler.conf" kubeconfig file
    master: [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    master: [control-plane] Creating static Pod manifest for "kube-apiserver"
    master: [control-plane] Creating static Pod manifest for "kube-controller-manager"
    master: [control-plane] Creating static Pod manifest for "kube-scheduler"
    master: [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    master: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [... node1/node2 discovery retries continue while the control plane starts ...]
    master: [apiclient] All control plane components are healthy after 19.014793 seconds
    master: [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    master: [kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
    master: [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
    master: [mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
    master: [mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    master: [bootstrap-token] Using token: ldvls1.07bnzoriqs1ruyrb
    master: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    master: [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    master: [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    master: [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    master: [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
    master: [addons] Applied essential addon: CoreDNS
    master: [addons] Applied essential addon: kube-proxy
    master: 
    master: Your Kubernetes master has initialized successfully!
    master: 
    master: To start using your cluster, you need to run the following as a regular user:
    master: 
    master:   mkdir -p $HOME/.kube
    master:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    master:   sudo chown $(id -u):$(id -g) $HOME/.kube/config
    master: 
    master: You should now deploy a pod network to the cluster.
    master: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    master:   https://kubernetes.io/docs/concepts/cluster-administration/addons/
    master: 
    master: You can now join any number of machines by running the following on each node
    master: as root:
    master: 
    master:   kubeadm join 192.168.26.10:6443 --token ldvls1.07bnzoriqs1ruyrb --discovery-token-ca-cert-hash sha256:e1f3a689f46caece2701c209db5eac272a788b7f87a551a037ce88bfd09d14d3
    master: ++ break
    master: ++ [[ 5 -eq i ]]
    master: ++ KUBELET_EXTRA_ARGS_FILE=/etc/sysconfig/kubelet
    master: ++ '[' '!' -f /etc/sysconfig/kubelet ']'
    master: ++ grep -q -- --node-ip= /etc/sysconfig/kubelet
    master: ++ sed -i 's/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS=--node-ip=192.168.26.10 /' /etc/sysconfig/kubelet
    master: ++ systemctl daemon-reload
    master: ++ systemctl restart kubelet.service
    master: ++ mkdir -p /root/.kube
    master: ++ cp -Rf /etc/kubernetes/admin.conf /root/.kube/config
    master: +++ id -u
    master: +++ id -g
    master: ++ chown 0:0 /root/.kube/config
    master: ++ '[' flannel == flannel ']'
    master: ++ curl --retry 5 --fail -s https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    master: ++ awk '/- --kube-subnet-mgr/{print "        - --iface=eth1"}1'
    master: ++ kubectl apply -f -
    master: podsecuritypolicy.extensions/psp.flannel.unprivileged created
    master: clusterrole.rbac.authorization.k8s.io/flannel created
    master: clusterrolebinding.rbac.authorization.k8s.io/flannel created
    master: serviceaccount/flannel created
    master: configmap/kube-flannel-cfg created
    master: daemonset.extensions/kube-flannel-ds-amd64 created
    master: daemonset.extensions/kube-flannel-ds-arm64 created
    master: daemonset.extensions/kube-flannel-ds-arm created
    master: daemonset.extensions/kube-flannel-ds-ppc64le created
    master: daemonset.extensions/kube-flannel-ds-s390x created
    node1: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
    node2: [discovery] Failed to request cluster info, will try again: [Get https://192.168.26.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.26.10:6443: connect: no route to host]
    [... the same discovery retry message repeats for node1 and node2 until the run is interrupted ...]
^C/opt/vagrant/embedded/gems/2.2.3/gems/concurrent-ruby-1.1.4/lib/concurrent/collection/map/mri_map_backend.rb:18:in `synchronize': can't be called from trap context (ThreadError)
	from /opt/vagrant/embedded/gems/2.2.3/gems/concurrent-ruby-1.1.4/lib/concurrent/collection/map/mri_map_backend.rb:18:in `[]='
	from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n.rb:358:in `normalize_key'
	from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n.rb:298:in `normalize_keys'
	from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n/backend/simple.rb:84:in `lookup'
	from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n/backend/base.rb:30:in `translate'
	from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n.rb:185:in `block in translate'
	from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n.rb:181:in `catch'
	from /opt/vagrant/embedded/gems/2.2.3/gems/i18n-1.1.1/lib/i18n.rb:181:in `translate'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:59:in `block in run'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:49:in `block in fire_callbacks'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:49:in `each'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:49:in `fire_callbacks'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:33:in `block (2 levels) in register'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/batch_action.rb:127:in `join'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/batch_action.rb:127:in `block in run'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/batch_action.rb:65:in `each'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/batch_action.rb:65:in `run'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:280:in `block (2 levels) in batch'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:275:in `tap'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:275:in `block in batch'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:274:in `synchronize'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:274:in `batch'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/commands/up/command.rb:97:in `execute'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/cli.rb:58:in `execute'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:291:in `cli'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/bin/vagrant:182:in `<main>'
[... the same ThreadError backtrace was printed a second time, interleaved, by a second thread ...]
make[2]: *** [start-node-1] Error 1
make[2]: *** [start-node-2] Error 1
make[1]: *** [start] Interrupt: 2
make: *** [up] Interrupt: 2

Add details for network, dashboard and K8S version choices

Is this a bug report or feature request?

  • Bug Report

Bug Report

Expected behavior:

  • K8S_DASHBOARD=true: I'd expect to be able to run kubectl proxy and use the dashboard, but I couldn't make that work. For now I simply (re)install it myself (and that works) after it has supposedly already been installed. If this option results in the K8S dashboard being installed, where is it and how do I use it?
  • KUBERNETES_VERSION - it's not clear how this should be set or what happens if it isn't. The docs say: "KUBEADM_INIT_FLAGS will be set to --kubernetes-version=$KUBERNETES_VERSION if unset", which sounds as if not providing it results in --kubernetes-version='' (I know it doesn't). It is also unclear whether the value should be v1.23 or 1.23. K8S_DASHBOARD_VERSION has an example version string, but this variable does not. I know I can dig through the scripts or logs to figure it out, but it would be better if this were documented.
  • CLUSTER_NAME - it seems default upstream behavior is that cluster name is also set in Dashboard (Settings > Global Settings > Cluster name) but in this case that setting is left empty when cluster name isn't set.
    [screenshot: Dashboard > Settings > Global Settings with the "Cluster name" field left empty]
  • NODE_IP_NW - maybe some detail on network types and impact of those choices would be helpful

Deviation from expected behavior:

  • K8S_DASHBOARD=true - it's hard to see the effects of this setting
  • KUBERNETES_VERSION=1.15 - causes the install to fail. An educated guess is v1.15, but it would be better to have it spelled out (see the sketch after this list).
  • CLUSTER_NAME= - if dashboard is installed, dashboard should pick cluster name
  • NODE_IP_NW - I wasn't sure what the installer would do; I thought it might create a bridged network if I set this range to my LAN range, or use the intnet type (rather than vboxnet), or maybe ask me to pick one.
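A minimal sketch of the KUBERNETES_VERSION guess above, assuming the value needs the v prefix (an educated guess from the failure with 1.15, not documented behavior):

# assumption: kubeadm wants a full, v-prefixed version string
make up KUBERNETES_VERSION=v1.15.0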

Environment:

  • Bionic 18.04
  • kernel 4.15.0-54
  • make versions output:
=== BEGIN Version Info ===
Repo state: 62d403b37d433db7d3eed6d8a98136837441aadb (dirty? NO)
make: /usr/bin/make
kubectl: /usr/bin/kubectl
grep: /bin/grep
cut: /usr/bin/cut
rsync: /usr/bin/rsync
openssl: /usr/bin/openssl
/dev/urandom: OK
Vagrant version:
Vagrant 2.2.5
vboxmanage version:
6.0.10r132072
=== END Version Info ===

Deleting kubectl config on clean

Is this a bug report or feature request?

  • Feature Request

Are there any similar features already existing:

What should the feature do:
I'm using this project for my e2e tests, but every time I create a cluster and delete it, the kubectl config still remains. It would be great if the config were deleted as well.
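Until clean does this automatically, a manual cleanup sketch, assuming the context, cluster, and user entries are all named after the Makefile directory (only the context naming is documented; the cluster and user names are assumptions):

# remove the leftover kubectl entries by hand (entry names are assumptions)
kubectl config delete-context k8s-vagrant-multi-node
kubectl config delete-cluster k8s-vagrant-multi-node
kubectl config unset users.k8s-vagrant-multi-node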

What would be solved through this feature:

Does this have an impact on existing features:

Fail to start VMs with multiple disks

Attempting to use multiple disks (e.g., running DISK_COUNT=3 make up) fails to start the VMs with the following error message:

A customization command failed:

["storageattach", :id, "--storagectl", "SATAController", "--port", 1, "--device", 1, "--type", "hdd", "--medium", ".vagrant/master-disk-2.vdi"]

The following error was experienced:

#<Vagrant::Errors::VBoxManageError: There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["storageattach", "8f34c8d7-b3ca-4a35-8138-43ee27c93fa9", "--storagectl", "SATAController", "--port", "1", "--device", "1", "--type", "hdd", "--medium", ".vagrant/master-disk-2.vdi"]

Stderr: VBoxManage: error: The port and/or device parameter are out of range: port=1 (must be in range [0, 29]), device=1 (must be in range [0, 0])
VBoxManage: error: Details: code NS_ERROR_INVALID_ARG (0x80070057), component StorageControllerWrap, interface IStorageController, callee nsISupports
VBoxManage: error: Context: "AttachDevice(Bstr(pszCtl).raw(), port, device, DeviceType_HardDisk, pMedium2Mount)" at line 776 of file VBoxManageStorageController.cpp
>

Please fix this customization and try again.
make: *** [start-master] Error 1
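The stderr suggests the VM's SATA controller was created with a single port, so attaching disk 2 at port 1 is out of range. A hedged manual check and fix sketch (raising the port count is an assumption, not a confirmed solution):

# find the VM name, then raise the controller's port count
VBoxManage list vms
VBoxManage storagectl <vm-name> --name SATAController --portcount 30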

Unable to install docker-ce on CentOS 8 box image

Bug Report
As Red Hat explicitly excluded Docker CE on CentOS 8 in favor of Podman, docker-ce does not get installed when using BOX_IMAGE="centos/8".
logs: https://hastebin.com/elocuceyeq

Expected behavior:
Docker gets installed successfully.

Deviation from expected behavior:

    master: Error: 
    master:  Problem: package docker-ce-3:19.03.11-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
    master:   - cannot install the best candidate for the job
    master:   - package containerd.io-1.2.10-3.2.el7.x86_64 is excluded
    master:   - package containerd.io-1.2.13-3.1.el7.x86_64 is excluded
    master:   - package containerd.io-1.2.13-3.2.el7.x86_64 is excluded

How to reproduce it (minimal and precise):

make up BOX_OS=centos VAGRANT_DEFAULT_PROVIDER=libvirt KUBERNETES_VERSION="v1.17.5" MASTER_CPUS=2 NODE_CPUS=2 MASTER_MEMORY_SIZE_GB=4 NODE_COUNT=1 NODE_MEMORY_SIZE_GB=4 DISK_COUNT=2 DISK_SIZE_GB=25 POD_NW_CIDR=192.168.123.0/24 KUBE_NETWORK="calico" BOX_IMAGE="centos/8"
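A hedged workaround sketch (not something this repo does): install the containerd.io RPM from Docker's el7 repository before docker-ce, since the el8 repository excludes it. The RPM version below is taken from the excluded packages in the error output:

# run inside the CentOS 8 guest; the version string is illustrative
dnf install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm
dnf install -y docker-ce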

Environment:
server type: gusty.ci.centos.org
see: https://wiki.centos.org/QaWiki/PubHardware

  • OS of the machine (e.g. from /etc/os-release): CentOS 8
  • Kernel of the machine (e.g. uname -a): Linux n59.gusty.ci.centos.org 4.18.0-147.8.1.el8_1.x86_64 #1 SMP Thu Apr 9 13:49:54 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • make versions output:
=== BEGIN Version Info ===
Repo state: 8ed4982bd0e596ea1331a6f95304ea1130185015 (dirty? NO)
make: /usr/bin/make
kubectl: /usr/bin/kubectl
grep: /usr/bin/grep
cut: /usr/bin/cut
rsync: /usr/bin/rsync
openssl: /usr/bin/openssl
/dev/urandom: OK
Vagrant version:
Vagrant 2.2.7
=== END Version Info ===

cni failed to set up pod

Hi, default install. I installed helm and then tried to provision the stable/nginx-ingress chart.

NODE_MEMORY_SIZE_GB=3 NODE_CPUS=2 NODE_COUNT=3 make up -j4
kubectl create serviceaccount tiller --namespace kube-system
kubectl create -f /tmp/rbac-config.yaml 
$ cat /tmp/rbac-config.yaml
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system 
helm init --service-account tiller
helm upgrade --install nginx-ingress --namespace ingress --set controller.kind=DaemonSet --set controller.daemonset.useHostPort=true stable/nginx-ingress

I'm seeing the following in the controller pods; they are permanently stuck in ContainerCreating status.

 Warning  FailedCreatePodSandBox  39s                 kubelet, node1     Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "6d408c26782737725756dcdbaf543206433cbaab253de34fabef7eb737a6955e" network for pod "nginx-ingress-controller-5zp68": NetworkPlugin cni failed to set up pod "nginx-ingress-controller-5zp68_ingress" network: open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory, failed to clean up sandbox container "6d408c26782737725756dcdbaf543206433cbaab253de34fabef7eb737a6955e" network for pod "nginx-ingress-controller-5zp68": NetworkPlugin cni failed to teardown pod "nginx-ingress-controller-5zp68_ingress" network: failed to get IP addresses for "eth0": <nil>]
  Normal   SandboxChanged          35s (x12 over 46s)  kubelet, node1     Pod sandbox changed, it will be killed and re-created.
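The error points at /proc/sys/net/ipv6/conf/eth0/accept_dad being missing, which usually means IPv6 is disabled on the node. A hedged diagnostic/workaround sketch, using the repo's NODE= convention for the node VMs (the provisioners normally set these sysctls, as visible in the install log further below):

# check and, if needed, re-enable the IPv6 sysctls on node1
NODE=1 vagrant ssh node1 -c 'sysctl net.ipv6.conf.all.disable_ipv6 net.ipv6.conf.all.accept_dad'
NODE=1 vagrant ssh node1 -c 'sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0 net.ipv6.conf.all.accept_dad=0'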

`make up` hangs sometimes

Bug Report

Expected behavior:

make up not to hang.

Deviation from expected behavior:

Running make up seems to hang sometimes on my test system (make version: GNU make 4.3). remake works fine.

How to reproduce it (minimal and precise):

Run, e.g., make up -j6
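To narrow down whether parallelism is the trigger, a couple of hedged checks (not from the report itself): run serially, or bypass make and drive Vagrant directly using the repo's NODE= convention:

# serial run, then per-VM bring-up without make's job server
make up -j1
vagrant up master
NODE=1 vagrant up node1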

Environment:

  • OS of the machine (e.g. from /etc/os-release): Manjaro Linux
  • Kernel of the machine (e.g. uname -a): 5.5.11
  • make versions output:
=== BEGIN Version Info ===
Repo state: bb8e2652802b823748f49b276f270202f31649ca (dirty? NO)
make: /usr/bin/make
kubectl: /usr/bin/kubectl
grep: /usr/bin/grep
cut: /usr/bin/cut
rsync: /usr/bin/rsync
openssl: /usr/bin/openssl
/dev/urandom: OK
Vagrant version:
Vagrant 2.2.7
=== END Version Info ===

@woohhan have you experienced make up hanging in your environment(s)? I'm trying to determine whether it is just my machine having this issue.

node kubelet will not stay running: "unable to load client CA file"

The worker node VMs are started, but something is preventing them from joining the Kubernetes cluster:

> make status
master                    running (virtualbox)
node1                     running (virtualbox)
node2                     running (virtualbox)

> kubectl get node
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    9m        v1.10.5

The status of the kubelet service on node1 shows that it has exited:

[vagrant@node1 ~]$ systemctl status kubelet
โ— kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           โ””โ”€10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Tue 2018-06-26 20:27:18 UTC; 5s ago
     Docs: http://kubernetes.io/docs/
  Process: 4951 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
 Main PID: 4951 (code=exited, status=255)

Journal entries for the last attempt of the kubelet service to run:

Jun 26 20:26:16 node1 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Jun 26 20:26:16 node1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 26 20:26:16 node1 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Jun 26 20:26:16 node1 kubelet[4922]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 26 20:26:16 node1 kubelet[4922]: Flag --allow-privileged has been deprecated, will be removed in a future version
Jun 26 20:26:16 node1 kubelet[4922]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 26 20:26:16 node1 kubelet[4922]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 26 20:26:16 node1 kubelet[4922]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information
Jun 26 20:26:16 node1 kubelet[4922]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 26 20:26:16 node1 kubelet[4922]: Flag --cadvisor-port has been deprecated, The default will change to 0 (disabled) in 1.12, and the cadvisor port will be removed entirely in 1.13
Jun 26 20:26:16 node1 kubelet[4922]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 26 20:26:16 node1 kubelet[4922]: I0626 20:26:16.718539    4922 feature_gate.go:226] feature gates: &{{} map[]}
Jun 26 20:26:16 node1 kubelet[4922]: F0626 20:26:16.718847    4922 server.go:218] unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Jun 26 20:26:16 node1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jun 26 20:26:16 node1 systemd[1]: Unit kubelet.service entered failed state.
Jun 26 20:26:16 node1 systemd[1]: kubelet.service failed.
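The fatal line is the key one: /etc/kubernetes/pki/ca.crt is normally written to a node by kubeadm join, so its absence suggests the join step never completed on node1. A hedged recovery sketch, re-running the join manually with the token and hash printed by kubeadm init on the master (and assuming the default master IP):

# <token> and <hash> come from the master's kubeadm init output
NODE=1 vagrant ssh node1 -c 'sudo kubeadm join 192.168.26.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>'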

GPU Passthrough support via kubevirt

First of all, thank you for this quite robust k8s config.
Feature Request
My Gentoo-based workstation has 2 GPU cards: one NVIDIA 1030 used for display and one NVIDIA Tesla (Architecture: Pascal, Variant: GP100-885-A1, CUDA cores: 2 x 3072). I would like to bind the NVIDIA Tesla so it is available on the worker nodes, so I can deploy TensorFlow models and use it on the cluster.

What should the feature do:
There are 2 projects which focus on gpu support in k8s nodes:
https://kubevirt.io/tag/passthrough
https://github.com/NVIDIA/kubevirt-gpu-device-plugin

I would like to use those for my approach. Also note that I use the libvirt provider by default.

What would be solved through this feature:
GPU support on kube cluster.

Does this have an impact on existing features:
I don't think so, its an extending option to other features.

Is kubectl a pre-requisite?

I ran "make up" on a fresh MacBook Pro (MacOS 10.14) and it failed when trying to use kubectl to set up the cluster (after the VMs were created):
kubectl
config set-cluster
etc, etc
I ran "brew install kubernetes-cli" (could have used "port install kubectl" if I had been using Macports), and re-ran the "make up" and all was well.

Network selection

Is this a bug report or feature request?

  • Feature Request

How to reproduce it (minimal and precise):

  • If you want containers to access some external network, it becomes quite hard to accommodate that after K8S has been set up.

Environment:

  • Bionic 18.04
  • 4.15.0-54
  • make versions output:
=== BEGIN Version Info ===
Repo state: 62d403b37d433db7d3eed6d8a98136837441aadb (dirty? NO)
make: /usr/bin/make
kubectl: /usr/bin/kubectl
grep: /bin/grep
cut: /usr/bin/cut
rsync: /usr/bin/rsync
openssl: /usr/bin/openssl
/dev/urandom: OK
Vagrant version:
Vagrant 2.2.5
vboxmanage version:
6.0.10r132072
=== END Version Info ===

Feature Request

Consider the possibility of either allowing additional NICs, or choosing the type of the 2nd network (eth1), to more easily access other networks from the VMs (and the Pod Network).
Currently MASTER_IP uses eth1 and NODE_IP_NW can't "collide" with the host network (although it wouldn't necessarily collide if it were a bridged network), so one can't have Pods on the default network.

Are there any similar features already existing:

Manual tinkering with Vagrant files.

What should the feature do:

One of the following with the option to use NODE_IP_NW on that network:

  1. Allow the selection of eth1 NIC type (intnet or bridged, for example)
  2. Add the third NIC (eth2) using bridged adapter (maybe make that default VM route)

What would be solved through this feature:

Access from and to other networks, both on the hypervisor and external. Currently, if one has existing services on intnet or on vboxnet15 and vboxnet37 and this project has to pick one vboxnet, it becomes necessary to install multiple clusters or to edit the Vagrant networks, either in this VM or in the existing VMs.

Does this have an impact on existing features:

I can't think of anything that stands out. If the Pod Network were bridged, we'd have to ask for a range of unallocated IPs (NODE_IP_NW, documentation) and maybe ping-probe the range for availability before deployment.
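For context when weighing the options above, one can list the host-only networks VirtualBox has already allocated (hedged; assumes the VirtualBox provider):

VBoxManage list hostonlyifs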

Install failed on Ubuntu OS (not a MacBook)

Hi Alex,

I have a new use case and am planning to use the k8s-vagrant-multi-node process.

Host OS: Ubuntu 18.04.2 LTS (not a MacBook)
Vagrant: Vagrant 2.0.2
For some reason, I can't install VirtualBox; when I try, it deletes Vagrant... or vice versa.

Long story short, the installation goes smoothly but fails:


/home/kylix3511/k8s/k8s-vagrant-multi-node/vagrantfiles/vagrant-provision-disk-and-reboot-plugin.rb:33:in `provision': undefined method `uuid' for #

Entire process/error message.


root@kylixlab:/home/kylix3511/k8s/k8s-vagrant-multi-node# vagrant --version
Vagrant 2.0.2
root@kylixlab:/home/kylix3511/k8s/k8s-vagrant-multi-node#
root@kylixlab:/home/kylix3511/k8s/k8s-vagrant-multi-node# vagrant plugin list
vagrant-libvirt (0.0.43, system)
root@kylixlab:/home/kylix3511/k8s/k8s-vagrant-multi-node#
root@kylixlab:/home/kylix3511/k8s/k8s-vagrant-multi-node# cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
root@kylixlab:/home/kylix3511/k8s/k8s-vagrant-multi-node# BOX_OS=ubuntu  KUBERNETES_VERSION=1.14.1  CLUSTER_NAME=local-k8s-ubuntu NODE_MEMORY_SIZE_GB=3 MASTER_MEMORY_SIZE_GB=3 NODE_CPUS=2 MASTER_CPUS=2 NODE_COUNT=2  make -j8 clean
vagrant destroy -f
NODE=1 vagrant destroy -f node1
NODE=2 vagrant destroy -f node2
rm -v -rf "/home/kylix3511/k8s/k8s-vagrant-multi-node/data/"*
rm -v -rf "/home/kylix3511/k8s/k8s-vagrant-multi-node/.vagrant/KUBETOKEN"
removed '/home/kylix3511/k8s/k8s-vagrant-multi-node/.vagrant/KUBETOKEN'
==> master: Removing domain...
==> master: Running cleanup tasks for 'diskandreboot' provisioner...
==> node2: Removing domain...
==> node1: Removing domain...
==> node2: Running cleanup tasks for 'diskandreboot' provisioner...
==> node1: Running cleanup tasks for 'diskandreboot' provisioner...
root@kylixlab:/home/kylix3511/k8s/k8s-vagrant-multi-node# BOX_OS=ubuntu  KUBERNETES_VERSION=1.14.1  CLUSTER_NAME=local-k8s-ubuntu NODE_MEMORY_SIZE_GB=3 MASTER_MEMORY_SIZE_GB=3 NODE_CPUS=2 MASTER_CPUS=2 NODE_COUNT=2  make -j8 up
make[1]: Entering directory '/home/kylix3511/k8s/k8s-vagrant-multi-node'
Checking for updates to 'generic/ubuntu1804'
Latest installed version: 1.9.8
Version constraints: > 1.9.8
Provider: libvirt
Box 'generic/ubuntu1804' (v1.9.8) is running the latest version.
make[2]: Entering directory '/home/kylix3511/k8s/k8s-vagrant-multi-node'
vagrant up
NODE=1 vagrant up
NODE=2 vagrant up
Bringing machine 'node2' up with 'libvirt' provider...
==> node2: Checking if box 'generic/ubuntu1804' is up to date...
Bringing machine 'node1' up with 'libvirt' provider...
Bringing machine 'master' up with 'libvirt' provider...
==> node1: Checking if box 'generic/ubuntu1804' is up to date...
==> master: Checking if box 'generic/ubuntu1804' is up to date...
==> node2: Creating image (snapshot of base box volume).
==> node2: Creating domain with the following settings...
==> node2:  -- Name:              k8s-vagrant-multi-node_node2
==> node2:  -- Domain type:       kvm
==> node2:  -- Cpus:              2
==> node2:
==> node2:  -- Feature:           acpi
==> node2:  -- Feature:           apic
==> node2:  -- Feature:           pae
==> node2:  -- Memory:            2048M
==> node2:  -- Management MAC:
==> node2:  -- Loader:
==> node2:  -- Base box:          generic/ubuntu1804
==> node2:  -- Storage pool:      default
==> node2:  -- Image:             /var/lib/libvirt/images/k8s-vagrant-multi-node_node2.img (32G)
==> node2:  -- Volume Cache:      default
==> node2:  -- Kernel:
==> node2:  -- Initrd:
==> node2:  -- Graphics Type:     vnc
==> node2:  -- Graphics Port:     -1
==> node2:  -- Graphics IP:       127.0.0.1
==> node2:  -- Graphics Password: Not defined
==> node2:  -- Video Type:        cirrus
==> node2:  -- Video VRAM:        256
==> node2:  -- Sound Type:
==> node2:  -- Keymap:            en-us
==> node2:  -- TPM Path:
==> node2:  -- INPUT:             type=mouse, bus=ps2
==> node2: Creating shared folders metadata...
==> node2: Starting domain.
==> master: Creating image (snapshot of base box volume).
==> node1: Creating image (snapshot of base box volume).
==> node1: Creating domain with the following settings...
==> node1:  -- Name:              k8s-vagrant-multi-node_node1
==> node1:  -- Domain type:       kvm
==> node1:  -- Cpus:              2
==> node1:
==> node1:  -- Feature:           acpi
==> node1:  -- Feature:           apic
==> node1:  -- Feature:           pae
==> node1:  -- Memory:            2048M
==> node1:  -- Management MAC:
==> node1:  -- Loader:
==> node1:  -- Base box:          generic/ubuntu1804
==> node1:  -- Storage pool:      default
==> node1:  -- Image:             /var/lib/libvirt/images/k8s-vagrant-multi-node_node1.img (32G)
==> node1:  -- Volume Cache:      default
==> node1:  -- Kernel:
==> node1:  -- Initrd:
==> node1:  -- Graphics Type:     vnc
==> node1:  -- Graphics Port:     -1
==> node1:  -- Graphics IP:       127.0.0.1
==> node1:  -- Graphics Password: Not defined
==> node1:  -- Video Type:        cirrus
==> node1:  -- Video VRAM:        256
==> node1:  -- Sound Type:
==> node1:  -- Keymap:            en-us
==> node1:  -- TPM Path:
==> node2: Waiting for domain to get an IP address...
==> node1:  -- INPUT:             type=mouse, bus=ps2
==> master: Creating domain with the following settings...
==> master:  -- Name:              k8s-vagrant-multi-node_master
==> master:  -- Domain type:       kvm
==> master:  -- Cpus:              2
==> master:
==> master:  -- Feature:           acpi
==> master:  -- Feature:           apic
==> master:  -- Feature:           pae
==> master:  -- Memory:            2048M
==> master:  -- Management MAC:
==> master:  -- Loader:
==> master:  -- Base box:          generic/ubuntu1804
==> master:  -- Storage pool:      default
==> master:  -- Image:             /var/lib/libvirt/images/k8s-vagrant-multi-node_master.img (32G)
==> master:  -- Volume Cache:      default
==> master:  -- Kernel:
==> master:  -- Initrd:
==> master:  -- Graphics Type:     vnc
==> master:  -- Graphics Port:     -1
==> master:  -- Graphics IP:       127.0.0.1
==> master:  -- Graphics Password: Not defined
==> master:  -- Video Type:        cirrus
==> master:  -- Video VRAM:        256
==> master:  -- Sound Type:
==> master:  -- Keymap:            en-us
==> master:  -- TPM Path:
==> master:  -- INPUT:             type=mouse, bus=ps2
==> master: Creating shared folders metadata...
==> node1: Creating shared folders metadata...
==> master: Starting domain.
==> node1: Starting domain.
==> master: Waiting for domain to get an IP address...
==> node1: Waiting for domain to get an IP address...
==> master: Waiting for SSH to become available...
==> node2: Waiting for SSH to become available...
==> node1: Waiting for SSH to become available...
    node2:
    node2: Vagrant insecure key detected. Vagrant will automatically replace
    node2: this with a newly generated keypair for better security.
    master:
    master: Vagrant insecure key detected. Vagrant will automatically replace
    master: this with a newly generated keypair for better security.
    node2:
    node2: Inserting generated public key within guest...
    master:
    master: Inserting generated public key within guest...
    node2: Removing insecure key from the guest if it's present...
    master: Removing insecure key from the guest if it's present...
    node2: Key inserted! Disconnecting and reconnecting using new SSH key...
    master: Key inserted! Disconnecting and reconnecting using new SSH key...
==> node2: Setting hostname...
==> master: Setting hostname...
==> node2: Configuring and enabling network interfaces...
==> master: Configuring and enabling network interfaces...
    node1:
    node1: Vagrant insecure key detected. Vagrant will automatically replace
    node1: this with a newly generated keypair for better security.
==> node2: Rsyncing folder: /home/kylix3511/k8s/k8s-vagrant-multi-node/data/ubuntu-node2/ => /data
==> master: Rsyncing folder: /home/kylix3511/k8s/k8s-vagrant-multi-node/data/ubuntu-master/ => /data
    node1:
    node1: Inserting generated public key within guest...
    node1: Removing insecure key from the guest if it's present...
    node1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> node2: Running provisioner: shell...
==> master: Running provisioner: shell...
    node2: Running: inline script
    master: Running: inline script
==> node1: Setting hostname...
    node2: net.ipv6.conf.all.disable_ipv6 = 0
    node2: net.ipv6.conf.default.disable_ipv6 = 0
    node2: net.ipv6.conf.lo.disable_ipv6 = 0
    node2: net.ipv6.conf.all.accept_dad = 0
    node2: net.ipv6.conf.default.accept_dad = 0
    node2: net.bridge.bridge-nf-call-iptables = 1
    master: net.ipv6.conf.all.disable_ipv6 = 0
    master: net.ipv6.conf.default.disable_ipv6 = 0
    master: net.ipv6.conf.lo.disable_ipv6 = 0
    master: net.ipv6.conf.all.accept_dad = 0
    master: net.ipv6.conf.default.accept_dad = 0
    master: net.bridge.bridge-nf-call-iptables = 1
    node2: Created symlink /etc/systemd/system/default.target.wants/ip-set-mtu.service -> /etc/systemd/system/ip-set-mtu.service.
    master: Created symlink /etc/systemd/system/default.target.wants/ip-set-mtu.service -> /etc/systemd/system/ip-set-mtu.service.
==> node2: Running provisioner: shell...
==> node1: Configuring and enabling network interfaces...
==> master: Running provisioner: shell...
    node2: Running: inline script
    master: Running: inline script
    node2: ++ retries=5
    node2: ++ (( i=0 ))
    node2: ++ (( i<retries ))
    node2: ++ apt-get update
    master: ++ retries=5
    master: ++ (( i=0 ))
    master: ++ (( i<retries ))
    master: ++ apt-get update
    node2: Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
    master: Hit:1 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    master: Get:2 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
    node2: Hit:2 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    node2: Get:3 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
    master: Get:3 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
    master: Get:4 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
==> node1: Rsyncing folder: /home/kylix3511/k8s/k8s-vagrant-multi-node/data/ubuntu-node1/ => /data
    node2: Get:4 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
    master: Get:5 http://us.archive.ubuntu.com/ubuntu bionic-updates/main i386 Packages [487 kB]
    node2: Get:5 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [574 kB]
    master: Get:6 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [574 kB]
    node2: Get:6 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [296 kB]
    master: Get:7 http://us.archive.ubuntu.com/ubuntu bionic-updates/main Translation-en [211 kB]
    node2: Get:7 http://us.archive.ubuntu.com/ubuntu bionic-updates/main i386 Packages [487 kB]
    master: Get:8 http://us.archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [6,996 B]
    master: Get:9 http://us.archive.ubuntu.com/ubuntu bionic-updates/restricted i386 Packages [6,960 B]
    master: Get:10 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe i386 Packages [744 kB]
    node2: Get:8 http://us.archive.ubuntu.com/ubuntu bionic-updates/main Translation-en [211 kB]
    node2: Get:9 http://us.archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [6,996 B]
    node2: Get:10 http://us.archive.ubuntu.com/ubuntu bionic-updates/restricted i386 Packages [6,960 B]
    node2: Get:11 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [755 kB]
    master: Get:11 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [755 kB]
    master: Get:12 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe Translation-en [201 kB]
    master: Get:13 http://us.archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [6,388 B]
    master: Get:14 http://us.archive.ubuntu.com/ubuntu bionic-updates/multiverse i386 Packages [6,544 B]
    master: Get:15 http://us.archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [1,024 B]
    master: Get:16 http://us.archive.ubuntu.com/ubuntu bionic-backports/main i386 Packages [1,024 B]
    node2: Get:12 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe i386 Packages [744 kB]
    master: Get:17 http://us.archive.ubuntu.com/ubuntu bionic-backports/main Translation-en [448 B]
    master: Get:18 http://us.archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [3,468 B]
    master: Get:19 http://us.archive.ubuntu.com/ubuntu bionic-backports/universe i386 Packages [3,460 B]
    node2: Get:13 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe Translation-en [201 kB]
    node2: Get:14 http://us.archive.ubuntu.com/ubuntu bionic-updates/multiverse i386 Packages [6,544 B]
    node2: Get:15 http://us.archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [6,388 B]
    node2: Get:16 http://us.archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [1,024 B]
    node2: Get:17 http://us.archive.ubuntu.com/ubuntu bionic-backports/main i386 Packages [1,024 B]
    node2: Get:18 http://us.archive.ubuntu.com/ubuntu bionic-backports/main Translation-en [448 B]
    node2: Get:19 http://us.archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [3,468 B]
    node2: Get:20 http://us.archive.ubuntu.com/ubuntu bionic-backports/universe i386 Packages [3,460 B]
    master: Get:20 http://security.ubuntu.com/ubuntu bionic-security/main i386 Packages [216 kB]
    node2: Get:21 http://security.ubuntu.com/ubuntu bionic-security/main i386 Packages [216 kB]
    master: Get:21 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [296 kB]
    node2: Get:22 http://security.ubuntu.com/ubuntu bionic-security/main Translation-en [106 kB]
    node2: Get:23 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [4,296 B]
    node2: Get:24 http://security.ubuntu.com/ubuntu bionic-security/restricted i386 Packages [4,280 B]
    node2: Get:25 http://security.ubuntu.com/ubuntu bionic-security/restricted Translation-en [2,192 B]
    node2: Get:26 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [131 kB]
    master: Get:22 http://security.ubuntu.com/ubuntu bionic-security/main Translation-en [106 kB]
    master: Get:23 http://security.ubuntu.com/ubuntu bionic-security/restricted i386 Packages [4,280 B]
    master: Get:24 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [4,296 B]
    master: Get:25 http://security.ubuntu.com/ubuntu bionic-security/restricted Translation-en [2,192 B]
    node2: Get:27 http://security.ubuntu.com/ubuntu bionic-security/universe i386 Packages [127 kB]
    master: Get:26 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [131 kB]
    master: Get:27 http://security.ubuntu.com/ubuntu bionic-security/universe i386 Packages [127 kB]
    master: Get:28 http://security.ubuntu.com/ubuntu bionic-security/universe Translation-en [74.1 kB]
    master: Get:29 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [3,748 B]
    master: Get:30 http://security.ubuntu.com/ubuntu bionic-security/multiverse i386 Packages [3,900 B]
    master: Get:31 http://security.ubuntu.com/ubuntu bionic-security/multiverse Translation-en [1,952 B]
    node2: Get:28 http://security.ubuntu.com/ubuntu bionic-security/universe Translation-en [74.1 kB]
    node2: Get:29 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [3,748 B]
    node2: Get:30 http://security.ubuntu.com/ubuntu bionic-security/multiverse i386 Packages [3,900 B]
    node2: Get:31 http://security.ubuntu.com/ubuntu bionic-security/multiverse Translation-en [1,952 B]
==> node1: Running provisioner: shell...
    node1: Running: inline script
    node1: net.ipv6.conf.all.disable_ipv6 = 0
    node1: net.ipv6.conf.default.disable_ipv6 = 0
    node1: net.ipv6.conf.lo.disable_ipv6 = 0
    node1: net.ipv6.conf.all.accept_dad = 0
    node1: net.ipv6.conf.default.accept_dad = 0
    node1: net.bridge.bridge-nf-call-iptables = 1
    node1: Created symlink /etc/systemd/system/default.target.wants/ip-set-mtu.service -> /etc/systemd/system/ip-set-mtu.service.
==> node1: Running provisioner: shell...
    node1: Running: inline script
    node1: ++ retries=5
    node1: ++ (( i=0 ))
    node1: ++ (( i<retries ))
    node1: ++ apt-get update
    node1: Hit:1 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    node1: Get:2 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
    node1: Get:3 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
    node1: Get:4 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
    node1: Get:5 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [574 kB]
    node1: Get:6 http://us.archive.ubuntu.com/ubuntu bionic-updates/main i386 Packages [487 kB]
    node1: Get:7 http://us.archive.ubuntu.com/ubuntu bionic-updates/main Translation-en [211 kB]
    node1: Get:8 http://us.archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [6,996 B]
    node1: Get:9 http://us.archive.ubuntu.com/ubuntu bionic-updates/restricted i386 Packages [6,960 B]
    node1: Get:10 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe i386 Packages [744 kB]
    node1: Get:11 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [755 kB]
    node1: Get:12 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe Translation-en [201 kB]
    node1: Get:13 http://us.archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [6,388 B]
    node1: Get:14 http://us.archive.ubuntu.com/ubuntu bionic-updates/multiverse i386 Packages [6,544 B]
    node1: Get:15 http://us.archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [1,024 B]
    node1: Get:16 http://us.archive.ubuntu.com/ubuntu bionic-backports/main i386 Packages [1,024 B]
    node1: Get:17 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [296 kB]
    node1: Get:18 http://us.archive.ubuntu.com/ubuntu bionic-backports/main Translation-en [448 B]
    node1: Get:19 http://us.archive.ubuntu.com/ubuntu bionic-backports/universe i386 Packages [3,460 B]
    node1: Get:20 http://us.archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [3,468 B]
    node1: Get:21 http://security.ubuntu.com/ubuntu bionic-security/main i386 Packages [216 kB]
    node1: Get:22 http://security.ubuntu.com/ubuntu bionic-security/main Translation-en [106 kB]
    node1: Get:23 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [4,296 B]
    node1: Get:24 http://security.ubuntu.com/ubuntu bionic-security/restricted i386 Packages [4,280 B]
    node1: Get:25 http://security.ubuntu.com/ubuntu bionic-security/restricted Translation-en [2,192 B]
    node1: Get:26 http://security.ubuntu.com/ubuntu bionic-security/universe i386 Packages [127 kB]
    node1: Get:27 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [131 kB]
    node1: Get:28 http://security.ubuntu.com/ubuntu bionic-security/universe Translation-en [74.1 kB]
    node1: Get:29 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [3,748 B]
    node1: Get:30 http://security.ubuntu.com/ubuntu bionic-security/multiverse i386 Packages [3,900 B]
    node1: Get:31 http://security.ubuntu.com/ubuntu bionic-security/multiverse Translation-en [1,952 B]
    node2: Fetched 4,233 kB in 4s (1,176 kB/s)
    node2: Reading package lists...
    master: Fetched 4,233 kB in 3s (1,278 kB/s)
    master: Reading package lists...
    node2: ++ apt-get -y install apt-transport-https curl software-properties-common ca-certificates
    node2: Reading package lists...
    node2: Building dependency tree...
    node2: Reading state information...
    master: ++ apt-get -y install apt-transport-https curl software-properties-common ca-certificates
    master: Reading package lists...
    master: Building dependency tree...
    node2: ca-certificates is already the newest version (20180409).
    node2: curl is already the newest version (7.58.0-2ubuntu3.6).
    node2: software-properties-common is already the newest version (0.96.24.32.7).
    node2: The following NEW packages will be installed:
    node2:   apt-transport-https
    master:
    master: Reading state information...
    node2: 0 upgraded, 1 newly installed, 0 to remove and 54 not upgraded.
    node2: Need to get 1,692 B of archives.
    node2: After this operation, 153 kB of additional disk space will be used.
    node2: Get:1 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 apt-transport-https all 1.6.10 [1,692 B]
    master: ca-certificates is already the newest version (20180409).
    master: curl is already the newest version (7.58.0-2ubuntu3.6).
    master: software-properties-common is already the newest version (0.96.24.32.7).
    master: The following NEW packages will be installed:
    master:   apt-transport-https
    node2: dpkg-preconfigure: unable to re-open stdin: No such file or directory
    node2: Fetched 1,692 B in 1s (2,622 B/s)
    node2: Selecting previously unselected package apt-transport-https.
    node2: (Reading database ...
    node2: (Reading database ... 5%
    node2: (Reading database ... 10%
    node2: (Reading database ... 15%
    node2: (Reading database ... 20%
    node2: (Reading database ... 25%
    node2: (Reading database ... 30%
    node2: (Reading database ... 35%
    node2: (Reading database ... 40%
    node2: (Reading database ... 45%
    node2: (Reading database ... 50%
    node2: (Reading database ... 55%
    node2: (Reading database ... 60%
    node2: (Reading database ... 65%
    node2: (Reading database ... 70%
    node2: (Reading database ... 75%
    node2: (Reading database ... 80%
    master: 0 upgraded, 1 newly installed, 0 to remove and 54 not upgraded.
    master: Need to get 1,692 B of archives.
    master: After this operation, 153 kB of additional disk space will be used.
    master: Get:1 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 apt-transport-https all 1.6.10 [1,692 B]
    node2: (Reading database ... 85%
    node2: (Reading database ... 90%
    node2: (Reading database ... 95%
    node2: (Reading database ... 100%
    node2: (Reading database ...
    node2: 105316 files and directories currently installed.)
    node2: Preparing to unpack .../apt-transport-https_1.6.10_all.deb ...
    node2: Unpacking apt-transport-https (1.6.10) ...
    master: dpkg-preconfigure: unable to re-open stdin: No such file or directory
    master: Fetched 1,692 B in 1s (2,466 B/s)
    node2: Setting up apt-transport-https (1.6.10) ...
    master: Selecting previously unselected package apt-transport-https.
    master: (Reading database ...
    master: (Reading database ... 5%
    master: (Reading database ... 10%
    master: (Reading database ... 25%
    master: (Reading database ... 30%
    master: (Reading database ... 50%
    master: (Reading database ... 55%
    master: (Reading database ... 60%
    master: (Reading database ... 65%
    master: (Reading database ... 70%
    master: (Reading database ... 75%
    master: (Reading database ... 80%
    master: (Reading database ... 85%
    master: (Reading database ... 90%
    master: (Reading database ... 95%
    master: (Reading database ... 105316 files and directories currently installed.)
    master: Preparing to unpack .../apt-transport-https_1.6.10_all.deb ...
    master: Unpacking apt-transport-https (1.6.10) ...
    master: Setting up apt-transport-https (1.6.10) ...
    node2: ++ break
    node2: ++ [[ 5 -eq i ]]
    node2: ++ curl -fsSL https://download.docker.com/linux/ubuntu/gpg
    node2: ++ sudo apt-key add -
    node2: Warning: apt-key output should not be parsed (stdout is not a terminal)
    node2: OK
    node2: +++ lsb_release -cs
    node2: ++ add-apt-repository 'deb [arch=amd64] https://download.docker.com/linux/ubuntu    bionic    stable'
    node1: Fetched 4,233 kB in 3s (1,369 kB/s)
    node1: Reading package lists...
    node2: Get:1 https://download.docker.com/linux/ubuntu bionic InRelease [64.4 kB]
    node2: Hit:2 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    node2: Get:3 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages [5,673 B]
    node2: Hit:4 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease
    node2: Hit:5 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease
    node2: Hit:6 http://security.ubuntu.com/ubuntu bionic-security InRelease
    master: ++ break
    master: ++ [[ 5 -eq i ]]
    master: ++ sudo apt-key add -
    master: ++ curl -fsSL https://download.docker.com/linux/ubuntu/gpg
    master: Warning: apt-key output should not be parsed (stdout is not a terminal)
    master: OK
    master: +++ lsb_release -cs
    master: ++ add-apt-repository 'deb [arch=amd64] https://download.docker.com/linux/ubuntu    bionic    stable'
    node1: ++ apt-get -y install apt-transport-https curl software-properties-common ca-certificates
    node1: Reading package lists...
    node1: Building dependency tree...
    master: Get:1 https://download.docker.com/linux/ubuntu bionic InRelease [64.4 kB]
    node1: Reading state information...
    master: Get:2 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages [5,673 B]
    master: Hit:3 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    master: Hit:4 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease
    master: Hit:5 http://security.ubuntu.com/ubuntu bionic-security InRelease
    node1: ca-certificates is already the newest version (20180409).
    node1: curl is already the newest version (7.58.0-2ubuntu3.6).
    node1: software-properties-common is already the newest version (0.96.24.32.7).
    node1: The following NEW packages will be installed:
    node1:   apt-transport-https
    master: Hit:6 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease
    node1: 0 upgraded, 1 newly installed, 0 to remove and 54 not upgraded.
    node1: Need to get 1,692 B of archives.
    node1: After this operation, 153 kB of additional disk space will be used.
    node1: Get:1 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 apt-transport-https all 1.6.10 [1,692 B]
    node1: dpkg-preconfigure: unable to re-open stdin: No such file or directory
    node1: Fetched 1,692 B in 0s (3,957 B/s)
    node1: Selecting previously unselected package apt-transport-https.
    node1: (Reading database ...
    node1: (Reading database ... 5%
    node1: (Reading database ... 10%
    node1: (Reading database ... 15%
    node1: (Reading database ... 20%
    node1: (Reading database ... 25%
    node1: (Reading database ... 30%
    node1: (Reading database ... 35%
    node1: (Reading database ... 40%
    node1: (Reading database ... 45%
    node1: (Reading database ... 50%
    node1: (Reading database ... 55%
    node1: (Reading database ... 60%
    node1: (Reading database ... 65%
    node1: (Reading database ... 70%
    node1: (Reading database ... 75%
    node1: (Reading database ... 80%
    node1: (Reading database ... 85%
    node1: (Reading database ... 90%
    node1: (Reading database ... 95%
    node1: (Reading database ... 100%
    node1: (Reading database ...
    node1: 105316 files and directories currently installed.)
    node1: Preparing to unpack .../apt-transport-https_1.6.10_all.deb ...
    node1: Unpacking apt-transport-https (1.6.10) ...
    node1: Setting up apt-transport-https (1.6.10) ...
    node2: Fetched 70.1 kB in 1s (60.3 kB/s)
    node2: Reading package lists...
    node1: ++ break
    node1: ++ [[ 5 -eq i ]]
    node1: ++ curl -fsSL https://download.docker.com/linux/ubuntu/gpg
    node1: ++ sudo apt-key add -
    node1: Warning: apt-key output should not be parsed (stdout is not a terminal)
    node1: OK
    node1: +++ lsb_release -cs
    node1: ++ add-apt-repository 'deb [arch=amd64] https://download.docker.com/linux/ubuntu    bionic    stable'
    node2: ++ curl curl --retry 5 --fail -s https://packages.cloud.google.com/apt/doc/apt-key.gpg
    node2: ++ apt-key add -
    node2: Warning: apt-key output should not be parsed (stdout is not a terminal)
    node1: Get:1 https://download.docker.com/linux/ubuntu bionic InRelease [64.4 kB]
    node1: Get:2 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages [5,673 B]
    node1: Hit:3 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    node1: Hit:4 http://security.ubuntu.com/ubuntu bionic-security InRelease
    node1: Hit:5 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease
    node1: Hit:6 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease
    master: Fetched 70.1 kB in 1s (71.9 kB/s)
    master: Reading package lists...
    master: ++ curl curl --retry 5 --fail -s https://packages.cloud.google.com/apt/doc/apt-key.gpg
    master: ++ apt-key add -
    master: Warning: apt-key output should not be parsed (stdout is not a terminal)
    node1: Fetched 70.1 kB in 1s (80.7 kB/s)
    node1: Reading package lists...
    node1: ++ apt-key add -
    node1: ++ curl curl --retry 5 --fail -s https://packages.cloud.google.com/apt/doc/apt-key.gpg
    node1: Warning: apt-key output should not be parsed (stdout is not a terminal)
    node2: OK
    node2: ++ cat
    node2: ++ '[' -n 1.14.1 ']'
    node2: ++ KUBERNETES_PACKAGES='kubelet=1.14.1-00 kubeadm=1.14.1-00 kubectl=1.14.1-00'
    node2: ++ retries=5
    node2: ++ (( i=0 ))
    node2: ++ (( i<retries ))
    node2: ++ apt-get update
    node2: Hit:1 https://download.docker.com/linux/ubuntu bionic InRelease
    node2: Hit:2 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    node2: Hit:3 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease
    node2: Hit:4 http://security.ubuntu.com/ubuntu bionic-security InRelease
    node2: Hit:5 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease
    node2: Get:6 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
    node2: Get:7 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [25.2 kB]
    master: OK
    master: ++ cat
    master: ++ '[' -n 1.14.1 ']'
    master: ++ KUBERNETES_PACKAGES='kubelet=1.14.1-00 kubeadm=1.14.1-00 kubectl=1.14.1-00'
    master: ++ retries=5
    master: ++ (( i=0 ))
    master: ++ (( i<retries ))
    master: ++ apt-get update
    master: Hit:1 https://download.docker.com/linux/ubuntu bionic InRelease
    master: Hit:2 http://security.ubuntu.com/ubuntu bionic-security InRelease
    master: Hit:3 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    master: Hit:5 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease
    master: Hit:6 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease
    master: Get:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
    master: Get:7 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [25.2 kB]
    node2: Fetched 34.2 kB in 2s (21.6 kB/s)
    node2: Reading package lists...
    node1: OK
    node1: ++ cat
    node1: ++ '[' -n 1.14.1 ']'
    node1: ++ KUBERNETES_PACKAGES='kubelet=1.14.1-00 kubeadm=1.14.1-00 kubectl=1.14.1-00'
    node1: ++ retries=5
    node1: ++ (( i=0 ))
    node1: ++ (( i<retries ))
    node1: ++ apt-get update
    node1: Hit:1 https://download.docker.com/linux/ubuntu bionic InRelease
    node1: Hit:3 http://us.archive.ubuntu.com/ubuntu bionic InRelease
    node1: Hit:4 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease
    node2: ++ apt-get -y install screen telnet conntrack socat docker-ce=5:18.09.1~3-0~ubuntu-bionic kubelet=1.14.1-00 kubeadm=1.14.1-00 kubectl=1.14.1-00
    node1: Hit:5 http://security.ubuntu.com/ubuntu bionic-security InRelease
    node2: Reading package lists...
    node1: Hit:6 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease
    node1: Get:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
    node2: Building dependency tree...
    node2:
    node2: Reading state information...
    node1: Get:7 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [25.2 kB]
    node2: telnet is already the newest version (0.17-41).
    node2: screen is already the newest version (4.6.2-1ubuntu1).
    node2: The following additional packages will be installed:
    node2:   aufs-tools cgroupfs-mount containerd.io cri-tools docker-ce-cli
    node2:   kubernetes-cni libltdl7 pigz
    node2: The following NEW packages will be installed:
    node2:   aufs-tools cgroupfs-mount conntrack containerd.io cri-tools docker-ce
    node2:   docker-ce-cli kubeadm kubectl kubelet kubernetes-cni libltdl7 pigz socat
    node2: 0 upgraded, 14 newly installed, 0 to remove and 54 not upgraded.
    node2: Need to get 101 MB of archives.
    node2: After this operation, 534 MB of additional disk space will be used.
    node2: Get:1 https://download.docker.com/linux/ubuntu bionic/stable amd64 containerd.io amd64 1.2.5-1 [19.9 MB]
    node2: Get:7 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 pigz amd64 2.4-1 [57.4 kB]
    node2: Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.12.0-00 [5,343 kB]
    master: Fetched 34.2 kB in 1s (28.7 kB/s)
    master: Reading package lists...
    node2: Get:8 https://download.docker.com/linux/ubuntu bionic/stable amd64 docker-ce-cli amd64 5:18.09.4~3-0~ubuntu-bionic [13.2 MB]
    node2: Get:9 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 aufs-tools amd64 1:4.9+20170918-1ubuntu1 [104 kB]
    node2: Get:10 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 cgroupfs-mount all 1.4 [6,320 B]
    node2: Get:11 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 conntrack amd64 1:1.4.4+snapshot20161117-6ubuntu2 [30.6 kB]
    node2: Get:12 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 socat amd64 1.7.3.2-2ubuntu2 [342 kB]
    node2: Get:13 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libltdl7 amd64 2.4.6-2 [38.8 kB]
    node2: Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.7.5-00 [6,473 kB]
    node2: Get:14 https://download.docker.com/linux/ubuntu bionic/stable amd64 docker-ce amd64 5:18.09.1~3-0~ubuntu-bionic [17.4 MB]
    node2: Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.14.1-00 [21.5 MB]
    master: ++ apt-get -y install screen telnet conntrack socat docker-ce=5:18.09.1~3-0~ubuntu-bionic kubelet=1.14.1-00 kubeadm=1.14.1-00 kubectl=1.14.1-00
    master: Reading package lists...
    master: Building dependency tree...
    master:
    master: Reading state information...
    node2: Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.14.1-00 [8,806 kB]
    master: telnet is already the newest version (0.17-41).
    master: screen is already the newest version (4.6.2-1ubuntu1).
    master: The following additional packages will be installed:
    master:   aufs-tools cgroupfs-mount containerd.io cri-tools docker-ce-cli
    master:   kubernetes-cni libltdl7 pigz
    master: The following NEW packages will be installed:
    node1: Fetched 34.2 kB in 1s (27.8 kB/s)
    node1: Reading package lists...
    master:   aufs-tools cgroupfs-mount conntrack containerd.io cri-tools docker-ce
    master:   docker-ce-cli kubeadm kubectl kubelet kubernetes-cni libltdl7 pigz socat
    master: 0 upgraded, 14 newly installed, 0 to remove and 54 not upgraded.
    master: Need to get 101 MB of archives.
    master: After this operation, 534 MB of additional disk space will be used.
    master: Get:1 https://download.docker.com/linux/ubuntu bionic/stable amd64 containerd.io amd64 1.2.5-1 [19.9 MB]
    node2: Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.14.1-00 [8,150 kB]
    master: Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.12.0-00 [5,343 kB]
    master: Get:7 https://download.docker.com/linux/ubuntu bionic/stable amd64 docker-ce-cli amd64 5:18.09.4~3-0~ubuntu-bionic [13.2 MB]
    master: Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.7.5-00 [6,473 kB]
    master: Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.14.1-00 [21.5 MB]
    master: Get:8 https://download.docker.com/linux/ubuntu bionic/stable amd64 docker-ce amd64 5:18.09.1~3-0~ubuntu-bionic [17.4 MB]
    node2: dpkg-preconfigure: unable to re-open stdin: No such file or directory
    master: Get:9 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 pigz amd64 2.4-1 [57.4 kB]
    node2: Fetched 101 MB in 5s (21.5 MB/s)
    node2: Selecting previously unselected package pigz.
    node2: (Reading database ...
    node2: (Reading database ... 5%
    node2: (Reading database ... 55%
    node2: (Reading database ... 60%
    node2: (Reading database ... 65%
    node2: (Reading database ... 70%
    node2: (Reading database ... 75%
    node2: (Reading database ... 80%
    node2: (Reading database ... 85%
    node2: (Reading database ... 90%
    node2: (Reading database ... 95%
    node2: (Reading database ... 105320 files and directories currently installed.)
    node2: Preparing to unpack .../00-pigz_2.4-1_amd64.deb ...
    node2: Unpacking pigz (2.4-1) ...
    node2: Selecting previously unselected package aufs-tools.
    master: Get:10 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 aufs-tools amd64 1:4.9+20170918-1ubuntu1 [104 kB]
    node2: Preparing to unpack .../01-aufs-tools_1%3a4.9+20170918-1ubuntu1_amd64.deb ...
    node2: Unpacking aufs-tools (1:4.9+20170918-1ubuntu1) ...
    master: Get:11 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 cgroupfs-mount all 1.4 [6,320 B]
    master: Get:12 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 conntrack amd64 1:1.4.4+snapshot20161117-6ubuntu2 [30.6 kB]
    master: Get:13 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 socat amd64 1.7.3.2-2ubuntu2 [342 kB]
    master: Get:14 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libltdl7 amd64 2.4.6-2 [38.8 kB]
    node2: Selecting previously unselected package cgroupfs-mount.
    node2: Preparing to unpack .../02-cgroupfs-mount_1.4_all.deb ...
    node2: Unpacking cgroupfs-mount (1.4) ...
    node2: Selecting previously unselected package conntrack.
    node2: Preparing to unpack .../03-conntrack_1%3a1.4.4+snapshot20161117-6ubuntu2_amd64.deb ...
    master: Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.14.1-00 [8,806 kB]
    node2: Unpacking conntrack (1:1.4.4+snapshot20161117-6ubuntu2) ...
    node2: Selecting previously unselected package containerd.io.
    master: Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.14.1-00 [8,150 kB]
    node2: Preparing to unpack .../04-containerd.io_1.2.5-1_amd64.deb ...
    node2: Unpacking containerd.io (1.2.5-1) ...
    node1: ++ apt-get -y install screen telnet conntrack socat docker-ce=5:18.09.1~3-0~ubuntu-bionic kubelet=1.14.1-00 kubeadm=1.14.1-00 kubectl=1.14.1-00
    node1: Reading package lists...
    node1: Building dependency tree...
    node1:
    node1: Reading state information...
    node1: telnet is already the newest version (0.17-41).
    node1: screen is already the newest version (4.6.2-1ubuntu1).
    node1: The following additional packages will be installed:
    node1:   aufs-tools cgroupfs-mount containerd.io cri-tools docker-ce-cli
    node1:   kubernetes-cni libltdl7 pigz
    node1: The following NEW packages will be installed:
    node1:   aufs-tools cgroupfs-mount conntrack containerd.io cri-tools docker-ce
    node1:   docker-ce-cli kubeadm kubectl kubelet kubernetes-cni libltdl7 pigz socat
    node1: 0 upgraded, 14 newly installed, 0 to remove and 54 not upgraded.
    node1: Need to get 101 MB of archives.
    node1: After this operation, 534 MB of additional disk space will be used.
    node1: Get:1 https://download.docker.com/linux/ubuntu bionic/stable amd64 containerd.io amd64 1.2.5-1 [19.9 MB]
    master: dpkg-preconfigure: unable to re-open stdin: No such file or directory
    master: Fetched 101 MB in 3s (32.5 MB/s)
    master: Selecting previously unselected package pigz.
    master: (Reading database ...
    master: (Reading database ... 5%
    master: (Reading database ... 55%
    master: (Reading database ... 60%
    master: (Reading database ... 65%
    master: (Reading database ... 70%
    master: (Reading database ... 75%
    node1: Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.12.0-00 [5,343 kB]
    master: (Reading database ... 80%
    master: (Reading database ... 85%
    master: (Reading database ... 90%
    master: (Reading database ... 95%
    master: (Reading database ... 105320 files and directories currently installed.)
    master: Preparing to unpack .../00-pigz_2.4-1_amd64.deb ...
    master: Unpacking pigz (2.4-1) ...
    node1: Get:7 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 pigz amd64 2.4-1 [57.4 kB]
    node1: Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.7.5-00 [6,473 kB]
    node1: Get:8 https://download.docker.com/linux/ubuntu bionic/stable amd64 docker-ce-cli amd64 5:18.09.4~3-0~ubuntu-bionic [13.2 MB]
    master: Selecting previously unselected package aufs-tools.
    master: Preparing to unpack .../01-aufs-tools_1%3a4.9+20170918-1ubuntu1_amd64.deb ...
    node1: Get:9 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 aufs-tools amd64 1:4.9+20170918-1ubuntu1 [104 kB]
    master: Unpacking aufs-tools (1:4.9+20170918-1ubuntu1) ...
    node1: Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.14.1-00 [21.5 MB]
    node1: Get:10 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 cgroupfs-mount all 1.4 [6,320 B]
    node1: Get:11 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 conntrack amd64 1:1.4.4+snapshot20161117-6ubuntu2 [30.6 kB]
    node1: Get:12 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 socat amd64 1.7.3.2-2ubuntu2 [342 kB]
    node1: Get:13 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libltdl7 amd64 2.4.6-2 [38.8 kB]
    node1: Get:14 https://download.docker.com/linux/ubuntu bionic/stable amd64 docker-ce amd64 5:18.09.1~3-0~ubuntu-bionic [17.4 MB]
    master: Selecting previously unselected package cgroupfs-mount.
    master: Preparing to unpack .../02-cgroupfs-mount_1.4_all.deb ...
    node1: Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.14.1-00 [8,806 kB]
    master: Unpacking cgroupfs-mount (1.4) ...
    node1: Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.14.1-00 [8,150 kB]
    master: Selecting previously unselected package conntrack.
    master: Preparing to unpack .../03-conntrack_1%3a1.4.4+snapshot20161117-6ubuntu2_amd64.deb ...
    master: Unpacking conntrack (1:1.4.4+snapshot20161117-6ubuntu2) ...
    master: Selecting previously unselected package containerd.io.
    master: Preparing to unpack .../04-containerd.io_1.2.5-1_amd64.deb ...
    master: Unpacking containerd.io (1.2.5-1) ...
    node1: dpkg-preconfigure: unable to re-open stdin: No such file or directory
    node1: Fetched 101 MB in 2s (49.1 MB/s)
    node1: Selecting previously unselected package pigz.
    node1: (Reading database ...
    node1: (Reading database ... 5%
    node1: (Reading database ... 55%
    node1: (Reading database ... 60%
    node1: (Reading database ... 65%
    node1: (Reading database ... 70%
    node1: (Reading database ... 75%
    node1: (Reading database ... 80%
    node1: (Reading database ... 85%
    node1: (Reading database ... 90%
    node1: (Reading database ... 95%
    node1: (Reading database ... 105320 files and directories currently installed.)
    node1: Preparing to unpack .../00-pigz_2.4-1_amd64.deb ...
    node1: Unpacking pigz (2.4-1) ...
    node1: Selecting previously unselected package aufs-tools.
    node1: Preparing to unpack .../01-aufs-tools_1%3a4.9+20170918-1ubuntu1_amd64.deb ...
    node1: Unpacking aufs-tools (1:4.9+20170918-1ubuntu1) ...
    node1: Selecting previously unselected package cgroupfs-mount.
    node1: Preparing to unpack .../02-cgroupfs-mount_1.4_all.deb ...
    node1: Unpacking cgroupfs-mount (1.4) ...
    node1: Selecting previously unselected package conntrack.
    node1: Preparing to unpack .../03-conntrack_1%3a1.4.4+snapshot20161117-6ubuntu2_amd64.deb ...
    node1: Unpacking conntrack (1:1.4.4+snapshot20161117-6ubuntu2) ...
    node1: Selecting previously unselected package containerd.io.
    node1: Preparing to unpack .../04-containerd.io_1.2.5-1_amd64.deb ...
    node1: Unpacking containerd.io (1.2.5-1) ...
    node2: Selecting previously unselected package cri-tools.
    node2: Preparing to unpack .../05-cri-tools_1.12.0-00_amd64.deb ...
    node2: Unpacking cri-tools (1.12.0-00) ...
    master: Selecting previously unselected package cri-tools.
    master: Preparing to unpack .../05-cri-tools_1.12.0-00_amd64.deb ...
    master: Unpacking cri-tools (1.12.0-00) ...
    node2: Selecting previously unselected package docker-ce-cli.
    node2: Preparing to unpack .../06-docker-ce-cli_5%3a18.09.4~3-0~ubuntu-bionic_amd64.deb ...
    node2: Unpacking docker-ce-cli (5:18.09.4~3-0~ubuntu-bionic) ...
    master: Selecting previously unselected package docker-ce-cli.
    master: Preparing to unpack .../06-docker-ce-cli_5%3a18.09.4~3-0~ubuntu-bionic_amd64.deb ...
    master: Unpacking docker-ce-cli (5:18.09.4~3-0~ubuntu-bionic) ...
    node1: Selecting previously unselected package cri-tools.
    node1: Preparing to unpack .../05-cri-tools_1.12.0-00_amd64.deb ...
    node1: Unpacking cri-tools (1.12.0-00) ...
    node1: Selecting previously unselected package docker-ce-cli.
    node1: Preparing to unpack .../06-docker-ce-cli_5%3a18.09.4~3-0~ubuntu-bionic_amd64.deb ...
    node1: Unpacking docker-ce-cli (5:18.09.4~3-0~ubuntu-bionic) ...
    master: Selecting previously unselected package docker-ce.
    node2: Selecting previously unselected package docker-ce.
    master: Preparing to unpack .../07-docker-ce_5%3a18.09.1~3-0~ubuntu-bionic_amd64.deb ...
    node2: Preparing to unpack .../07-docker-ce_5%3a18.09.1~3-0~ubuntu-bionic_amd64.deb ...
    node2: Unpacking docker-ce (5:18.09.1~3-0~ubuntu-bionic) ...
    master: Unpacking docker-ce (5:18.09.1~3-0~ubuntu-bionic) ...
    node1: Selecting previously unselected package docker-ce.
    node1: Preparing to unpack .../07-docker-ce_5%3a18.09.1~3-0~ubuntu-bionic_amd64.deb ...
    node1: Unpacking docker-ce (5:18.09.1~3-0~ubuntu-bionic) ...
    node2: Selecting previously unselected package kubernetes-cni.
    node2: Preparing to unpack .../08-kubernetes-cni_0.7.5-00_amd64.deb ...
    node2: Unpacking kubernetes-cni (0.7.5-00) ...
    node2: Selecting previously unselected package socat.
    node2: Preparing to unpack .../09-socat_1.7.3.2-2ubuntu2_amd64.deb ...
    node2: Unpacking socat (1.7.3.2-2ubuntu2) ...
    node2: Selecting previously unselected package kubelet.
    node2: Preparing to unpack .../10-kubelet_1.14.1-00_amd64.deb ...
    node2: Unpacking kubelet (1.14.1-00) ...
    node1: Selecting previously unselected package kubernetes-cni.
    node1: Preparing to unpack .../08-kubernetes-cni_0.7.5-00_amd64.deb ...
    node1: Unpacking kubernetes-cni (0.7.5-00) ...
    master: Selecting previously unselected package kubernetes-cni.
    master: Preparing to unpack .../08-kubernetes-cni_0.7.5-00_amd64.deb ...
    master: Unpacking kubernetes-cni (0.7.5-00) ...
    node1: Selecting previously unselected package socat.
    node1: Preparing to unpack .../09-socat_1.7.3.2-2ubuntu2_amd64.deb ...
    node1: Unpacking socat (1.7.3.2-2ubuntu2) ...
    node1: Selecting previously unselected package kubelet.
    node1: Preparing to unpack .../10-kubelet_1.14.1-00_amd64.deb ...
    node1: Unpacking kubelet (1.14.1-00) ...
    master: Selecting previously unselected package socat.
    master: Preparing to unpack .../09-socat_1.7.3.2-2ubuntu2_amd64.deb ...
    master: Unpacking socat (1.7.3.2-2ubuntu2) ...
    master: Selecting previously unselected package kubelet.
    master: Preparing to unpack .../10-kubelet_1.14.1-00_amd64.deb ...
    master: Unpacking kubelet (1.14.1-00) ...
    node2: Selecting previously unselected package kubectl.
    node2: Preparing to unpack .../11-kubectl_1.14.1-00_amd64.deb ...
    node2: Unpacking kubectl (1.14.1-00) ...
    node2: Selecting previously unselected package kubeadm.
    node2: Preparing to unpack .../12-kubeadm_1.14.1-00_amd64.deb ...
    node2: Unpacking kubeadm (1.14.1-00) ...
    node1: Selecting previously unselected package kubectl.
    node1: Preparing to unpack .../11-kubectl_1.14.1-00_amd64.deb ...
    node1: Unpacking kubectl (1.14.1-00) ...
    node2: Selecting previously unselected package libltdl7:amd64.
    node2: Preparing to unpack .../13-libltdl7_2.4.6-2_amd64.deb ...
    node2: Unpacking libltdl7:amd64 (2.4.6-2) ...
    node2: Setting up aufs-tools (1:4.9+20170918-1ubuntu1) ...
    node2: Setting up conntrack (1:1.4.4+snapshot20161117-6ubuntu2) ...
    node2: Setting up kubernetes-cni (0.7.5-00) ...
    node2: Setting up containerd.io (1.2.5-1) ...
    node1: Selecting previously unselected package kubeadm.
    node1: Preparing to unpack .../12-kubeadm_1.14.1-00_amd64.deb ...
    node1: Unpacking kubeadm (1.14.1-00) ...
    node2: Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service -> /lib/systemd/system/containerd.service.
    node2: Setting up cri-tools (1.12.0-00) ...
    node2: Processing triggers for ureadahead (0.100.0-20) ...
    master: Selecting previously unselected package kubectl.
    node2: Setting up socat (1.7.3.2-2ubuntu2) ...
    master: Preparing to unpack .../11-kubectl_1.14.1-00_amd64.deb ...
    master: Unpacking kubectl (1.14.1-00) ...
    node2: Setting up cgroupfs-mount (1.4) ...
    node2: Setting up kubelet (1.14.1-00) ...
    node2: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service -> /lib/systemd/system/kubelet.service.
    node2: Processing triggers for libc-bin (2.27-3ubuntu1) ...
    node2: Processing triggers for systemd (237-3ubuntu10.13) ...
    node2: Setting up libltdl7:amd64 (2.4.6-2) ...
    node2: Setting up kubectl (1.14.1-00) ...
    node2: Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
    node1: Selecting previously unselected package libltdl7:amd64.
    node1: Preparing to unpack .../13-libltdl7_2.4.6-2_amd64.deb ...
    node1: Unpacking libltdl7:amd64 (2.4.6-2) ...
    node1: Setting up aufs-tools (1:4.9+20170918-1ubuntu1) ...
    node1: Setting up conntrack (1:1.4.4+snapshot20161117-6ubuntu2) ...
    node1: Setting up kubernetes-cni (0.7.5-00) ...
    node1: Setting up containerd.io (1.2.5-1) ...
    node1: Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service -> /lib/systemd/system/containerd.service.
    node1: Setting up cri-tools (1.12.0-00) ...
    node1: Processing triggers for ureadahead (0.100.0-20) ...
    node1: Setting up socat (1.7.3.2-2ubuntu2) ...
    node1: Setting up cgroupfs-mount (1.4) ...
    node1: Setting up kubelet (1.14.1-00) ...
    master: Selecting previously unselected package kubeadm.
    master: Preparing to unpack .../12-kubeadm_1.14.1-00_amd64.deb ...
    node1: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service -> /lib/systemd/system/kubelet.service.
    master: Unpacking kubeadm (1.14.1-00) ...
    node1: Processing triggers for libc-bin (2.27-3ubuntu1) ...
    node1: Processing triggers for systemd (237-3ubuntu10.13) ...
    node1: Setting up libltdl7:amd64 (2.4.6-2) ...
    node1: Setting up kubectl (1.14.1-00) ...
    node1: Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
    node2: Setting up docker-ce-cli (5:18.09.4~3-0~ubuntu-bionic) ...
    node2: Setting up kubeadm (1.14.1-00) ...
    node2: Setting up pigz (2.4-1) ...
    node2: Setting up docker-ce (5:18.09.1~3-0~ubuntu-bionic) ...
    node2: update-alternatives: using /usr/bin/dockerd-ce to provide /usr/bin/dockerd (dockerd) in auto mode
    node2: Created symlink /etc/systemd/system/multi-user.target.wants/docker.service -> /lib/systemd/system/docker.service.
    node2: Created symlink /etc/systemd/system/sockets.target.wants/docker.socket -> /lib/systemd/system/docker.socket.
    master: Selecting previously unselected package libltdl7:amd64.
    master: Preparing to unpack .../13-libltdl7_2.4.6-2_amd64.deb ...
    master: Unpacking libltdl7:amd64 (2.4.6-2) ...
    master: Setting up aufs-tools (1:4.9+20170918-1ubuntu1) ...
    master: Setting up conntrack (1:1.4.4+snapshot20161117-6ubuntu2) ...
    master: Setting up kubernetes-cni (0.7.5-00) ...
    master: Setting up containerd.io (1.2.5-1) ...
    master: Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service -> /lib/systemd/system/containerd.service.
    node1: Setting up docker-ce-cli (5:18.09.4~3-0~ubuntu-bionic) ...
    node1: Setting up kubeadm (1.14.1-00) ...
    node1: Setting up pigz (2.4-1) ...
    node1: Setting up docker-ce (5:18.09.1~3-0~ubuntu-bionic) ...
    master: Setting up cri-tools (1.12.0-00) ...
    node1: update-alternatives: using /usr/bin/dockerd-ce to provide /usr/bin/dockerd (dockerd) in auto mode
    master: Processing triggers for ureadahead (0.100.0-20) ...
    node1: Created symlink /etc/systemd/system/multi-user.target.wants/docker.service -> /lib/systemd/system/docker.service.
    master: Setting up socat (1.7.3.2-2ubuntu2) ...
    master: Setting up cgroupfs-mount (1.4) ...
    node1: Created symlink /etc/systemd/system/sockets.target.wants/docker.socket -> /lib/systemd/system/docker.socket.
    master: Setting up kubelet (1.14.1-00) ...
    master: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service -> /lib/systemd/system/kubelet.service.
    master: Processing triggers for libc-bin (2.27-3ubuntu1) ...
    master: Processing triggers for systemd (237-3ubuntu10.13) ...
    master: Setting up libltdl7:amd64 (2.4.6-2) ...
    master: Setting up kubectl (1.14.1-00) ...
    node2: Processing triggers for ureadahead (0.100.0-20) ...
    master: Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
    node1: Processing triggers for ureadahead (0.100.0-20) ...
    node1: Processing triggers for libc-bin (2.27-3ubuntu1) ...
    node1: Processing triggers for systemd (237-3ubuntu10.13) ...
    node2: Processing triggers for libc-bin (2.27-3ubuntu1) ...
    node2: Processing triggers for systemd (237-3ubuntu10.13) ...
    master: Setting up docker-ce-cli (5:18.09.4~3-0~ubuntu-bionic) ...
    master: Setting up kubeadm (1.14.1-00) ...
    master: Setting up pigz (2.4-1) ...
    master: Setting up docker-ce (5:18.09.1~3-0~ubuntu-bionic) ...
    master: update-alternatives: using /usr/bin/dockerd-ce to provide /usr/bin/dockerd (dockerd) in auto mode
    master: Created symlink /etc/systemd/system/multi-user.target.wants/docker.service -> /lib/systemd/system/docker.service.
    master: Created symlink /etc/systemd/system/sockets.target.wants/docker.socket -> /lib/systemd/system/docker.socket.
    node1: ++ break
    node1: ++ [[ 5 -eq i ]]
    node1: ++ apt-mark hold kubelet kubeadm kubectl
    node1: kubelet set on hold.
    node1: kubeadm set on hold.
    node1: kubectl set on hold.
    node1: ++ echo 'tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=614460k,mode=755'
==> node1: Running provisioner: shell...
    node1: Running: inline script
    node1: Client:
    node1:  Version:           18.09.4
    node1:  API version:       1.39
    node1:  Go version:        go1.10.8
    node1:  Git commit:        d14af54266
    node1:  Built:             Wed Mar 27 18:35:44 2019
    node1:  OS/Arch:           linux/amd64
    node1:  Experimental:      false
    node1:
    node1: Server: Docker Engine - Community
    node1:  Engine:
    node1:   Version:          18.09.1
    node1:   API version:      1.39 (minimum version 1.12)
    node1:   Go version:       go1.10.6
    node1:   Git commit:       4c52b90
    node1:   Built:            Wed Jan  9 19:02:44 2019
    node1:   OS/Arch:          linux/amd64
    node1:   Experimental:     false
    node1: kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:08:49Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
    node1: Kubernetes v1.14.1
==> node1: Running provisioner: diskandreboot...
    master: Processing triggers for ureadahead (0.100.0-20) ...
    node2: ++ break
    node2: ++ [[ 5 -eq i ]]
    node2: ++ apt-mark hold kubelet kubeadm kubectl
    master: Processing triggers for libc-bin (2.27-3ubuntu1) ...
    master: Processing triggers for systemd (237-3ubuntu10.13) ...
    node2: kubelet set on hold.
    node2: kubeadm set on hold.
    node2: kubectl set on hold.
    node2: ++ echo 'tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=614460k,mode=755'
==> node2: Running provisioner: shell...
==> node1: Removing domain...
==> node1: Running cleanup tasks for 'diskandreboot' provisioner...
#<Thread:0x000055d94bf59360@/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/batch_action.rb:71 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	48: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/batch_action.rb:82:in `block (2 levels) in run'
	47: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/machine.rb:188:in `action'
	46: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/machine.rb:188:in `call'
	45: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/environment.rb:592:in `lock'
	44: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/machine.rb:202:in `block in action'
	43: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/machine.rb:227:in `action_raw'
	42: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/runner.rb:66:in `run'
	41: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/util/busy.rb:19:in `busy'
	40: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/runner.rb:66:in `block in run'
	39: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builder.rb:116:in `call'
	38: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
	37: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
	36: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
	35: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builtin/box_check_outdated.rb:79:in `call'
	34: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
	33: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builtin/call.rb:53:in `call'
	32: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/runner.rb:66:in `run'
	31: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/util/busy.rb:19:in `busy'
	30: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/runner.rb:66:in `block in run'
	29: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builder.rb:116:in `call'
	28: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
	27: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
	26: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
	25: from /usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/set_name_of_domain.rb:35:in `call'
	24: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
	23: from /usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/handle_storage_pool.rb:52:in `call'
	22: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
	21: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builtin/handle_box.rb:56:in `call'
	20: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
	19: from /usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/handle_box_image.rb:113:in `call'
	18: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
	17: from /usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/create_domain_volume.rb:82:in `call'
	16: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
	15: from /usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/create_domain.rb:317:in `call'
	14: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
	13: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builtin/provision.rb:103:in `call'
	12: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builtin/provision.rb:103:in `each'
	11: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builtin/provision.rb:126:in `block in call'
	10: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builtin/provision.rb:126:in `call'
	 9: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/environment.rb:504:in `hook'
	 8: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/runner.rb:66:in `run'
	 7: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/util/busy.rb:19:in `busy'
	 6: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/runner.rb:66:in `block in run'
	 5: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builder.rb:116:in `call'
	 4: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
	 3: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
	 2: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:95:in `call'
	 1: from /usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builtin/provision.rb:138:in `run_provisioner'
/home/kylix3511/k8s/k8s-vagrant-multi-node/vagrantfiles/vagrant-provision-disk-and-reboot-plugin.rb:33:in `provision': undefined method `uuid' for #<VagrantPlugins::ProviderLibvirt::Driver:0x000055d94c1f2c48> (NoMethodError)
Makefile:136: recipe for target 'start-node-1' failed
make[2]: *** [start-node-1] Error 1
make[2]: *** Waiting for unfinished jobs....
    node2: Running: inline script
    master: ++ break
    master: ++ [[ 5 -eq i ]]
    master: ++ apt-mark hold kubelet kubeadm kubectl
    master: kubelet set on hold.
    master: kubeadm set on hold.
    master: kubectl set on hold.
    node2: Client:
    node2:  Version:           18.09.4
    node2:  API version:       1.39
    node2:  Go version:        go1.10.8
    node2:  Git commit:        d14af54266
    node2:  Built:
    node2:      Wed Mar 27 18:35:44 2019
    node2:  OS/Arch:           linux/amd64
    node2:  Experimental:      false
    node2: Server: Docker Engine - Community
    node2:  Engine:
    node2:   Version:          18.09.1
    node2:   API version:      1.39 (minimum version 1.12)
    node2:   Go version:       go1.10.6
    node2:   Git commit:       4c52b90
    master: ++ echo 'tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=614460k,mode=755'
    node2:   Built:            Wed Jan  9 19:02:44 2019
    node2:   OS/Arch:          linux/amd64
    node2:   Experimental:     false
    node2: kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:08:49Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
    node2: Kubernetes v1.14.1
==> node2: Running provisioner: diskandreboot...
==> master: Running provisioner: shell...
==> node2: Removing domain...
==> node2: Running cleanup tasks for 'diskandreboot' provisioner...
#<Thread:0x000055e2098e4130@/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/batch_action.rb:71 run> terminated with exception (report_on_exception is true):
	[same 48-frame traceback as above, printed twice]
/home/kylix3511/k8s/k8s-vagrant-multi-node/vagrantfiles/vagrant-provision-disk-and-reboot-plugin.rb:33:in `provision': undefined method `uuid' for #<VagrantPlugins::ProviderLibvirt::Driver:0x000055e2097b1b00> (NoMethodError)
Makefile:136: recipe for target 'start-node-2' failed
make[2]: *** [start-node-2] Error 1
    master: Running: inline script
    master: Client:
    master:  Version:           18.09.4
    master:  API version:       1.39
    master:  Go version:        go1.10.8
    master:  Git commit:        d14af54266
    master:  Built:             Wed Mar 27 18:35:44 2019
    master:  OS/Arch:           linux/amd64
    master:  Experimental:      false
    master: Server: Docker Engine - Community
    master:  Engine:
    master:   Version:          18.09.1
    master:   API version:      1.39 (minimum version 1.12)
    master:   Go version:       go1.10.6
    master:   Git commit:       4c52b90
    master:   Built:            Wed Jan  9 19:02:44 2019
    master:   OS/Arch:          linux/amd64
    master:   Experimental:     false
    master: kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:08:49Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
    master: Kubernetes v1.14.1
==> master: Running provisioner: diskandreboot...
==> master: Removing domain...
==> master: Running cleanup tasks for 'diskandreboot' provisioner...
#<Thread:0x00005580fd7f3270@/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/batch_action.rb:71 run> terminated with exception (report_on_exception is true):
	[same 48-frame traceback as above, printed twice]
/home/kylix3511/k8s/k8s-vagrant-multi-node/vagrantfiles/vagrant-provision-disk-and-reboot-plugin.rb:33:in `provision': undefined method `uuid' for #<VagrantPlugins::ProviderLibvirt::Driver:0x00005580fd6c4278> (NoMethodError)
Makefile:133: recipe for target 'start-master' failed
make[2]: *** [start-master] Error 1
make[2]: Leaving directory '/home/kylix3511/k8s/k8s-vagrant-multi-node'
Makefile:67: recipe for target 'start' failed
make[1]: *** [start] Error 2
make[1]: Leaving directory '/home/kylix3511/k8s/k8s-vagrant-multi-node'
Makefile:64: recipe for target 'up' failed
make: *** [up] Error 2
root@kylixlab:/home/kylix3511/k8s/k8s-vagrant-multi-node#
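
The failure is identical on all three machines: line 33 of the custom `diskandreboot` provisioner (`vagrantfiles/vagrant-provision-disk-and-reboot-plugin.rb`) calls `uuid` on the provider driver, and `VagrantPlugins::ProviderLibvirt::Driver` defines no such method, so the provisioner raises `NoMethodError` before the extra disk can be attached and the domain is torn down again. A `uuid` accessor is a VirtualBox-driver detail, not part of any provider-neutral interface. Note also that the gem paths in the traceback show Vagrant 2.0.2 and vagrant-libvirt 0.0.43; both are dated, so upgrading Vagrant and the plugin (`vagrant plugin update vagrant-libvirt`) is worth trying before patching anything.

Below is a minimal sketch of one possible guard. It assumes the plugin reaches the driver via `machine.provider.driver`; the surrounding method structure is a hypothetical reconstruction, and only the failing `uuid` call itself is confirmed by the traceback:

```ruby
# Hypothetical excerpt of vagrant-provision-disk-and-reboot-plugin.rb.
# Only the `driver.uuid` call is confirmed by the traceback above;
# everything else here is an illustrative reconstruction.
class DiskAndRebootProvisioner < Vagrant.plugin('2', :provisioner)
  def provision
    driver = @machine.provider.driver

    # The VirtualBox driver exposes `uuid`; the libvirt driver does not.
    # Prefer the provider-agnostic machine ID and fall back to the
    # driver's `uuid` only where it actually exists.
    machine_id =
      if driver.respond_to?(:uuid)
        driver.uuid   # VirtualBox
      else
        @machine.id   # libvirt and other providers store their ID here
      end

    # ... attach the extra disk and schedule the reboot using machine_id ...
  end
end
```

Keying off `@machine.id` rather than a driver-specific accessor keeps the provisioner provider-agnostic, which matters here precisely because the project supports more than one Vagrant provider.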


