terraform-provider-kubeadm's Introduction

Terraform kubeadm plugin

A Terraform resource definition and provisioner that lets you install Kubernetes on a cluster.

The underlying resources the provisioner runs on can be AWS instances, libvirt machines, LXD containers or any other resource that supports SSH-like connections. The kubeadm provisioner runs, over that SSH connection, all the commands necessary for installing Kubernetes on those resources, according to the configuration specified in the resource "kubeadm" block.

Example

Here is an example that will set up Kubernetes on a cluster created with the Terraform libvirt provider:

resource "kubeadm" "main" {
  api {
    external = "loadbalancer.external.com"   # external address for accessing the API server
  }
  
  cni {
    plugin = "flannel"   # could be 'weave' as well...
  }
  
  network {
    dns_domain = "my_cluster.local"  
    services = "10.25.0.0/16"
  }
  
  # install some extras: helm, the dashboard...
  helm      { install = "true" }
  dashboard { install = "true" }
}

# from the libvirt provider
resource "libvirt_domain" "master" {
  name = "master"
  memory = 1024
  
  # this provisioner will start a Kubernetes master in this machine,
  # with the help of "kubeadm" 
  provisioner "kubeadm" {
    # there is no "join", so this will be the first node in the cluster: the seeder
    config = "${kubeadm.main.config}"

    # when creating multiple masters, the first one (the _seeder_) must join="",
    # and the rest will join it afterwards...
    join      = "${count.index == 0 ? "" : libvirt_domain.master.network_interface.0.addresses.0}"
    role      = "master"

    install {
      # this will try to install "kubeadm" automatically in this machine
      auto = true
    }
  }

  # provisioner for removing the node from the cluster
  provisioner "kubeadm" {
    when   = "destroy"
    config = "${kubeadm.main.config}"
    drain  = true
  }
}

# from the libvirt provider
resource "libvirt_domain" "minion" {
  count      = 3
  name       = "minion${count.index}"
  
  # this provisioner will start a Kubernetes worker in this machine,
  # with the help of "kubeadm"
  provisioner "kubeadm" {
    config = "${kubeadm.main.config}"

    # this will make this minion "join" the cluster started by the "master"
    join = "${libvirt_domain.master.network_interface.0.addresses.0}"
    install {
      # this will try to install "kubeadm" automatically in this machine
      auto = true
    }
  }

  # provisioner for removing the node from the cluster
  provisioner "kubeadm" {
    when   = "destroy"
    config = "${kubeadm.main.config}"
    drain  = true
  }
}

Note well that:

  • all the provisioners must specify the config = ${kubeadm.XXX.config},
  • any other nodes that join the seeder must specify the join attribute pointing to the <IP/name> they must join. You can use the optional role parameter for specifying whether it is joining as a master or as a worker.

Now you can see the plan, apply it, and then destroy the infrastructure:

$ terraform plan
$ terraform apply
$ terraform destroy

You can find examples of the provider/provisioner in other environments like OpenStack, LXD, etc. in the examples directory.

Features

  • Easy deployment of Kubernetes clusters on any platform supported by Terraform: just add our provisioner "kubeadm" to the machines you want to be part of the cluster.
    • All operations are performed through the SSH connection created by Terraform, so you can create a k8s cluster on completely isolated machines.
  • Multi-master deployments. Just add a Load Balancer that points to your masters and you will have an HA cluster!
  • Easy scale-up/scale-down of the cluster by just changing the count of your masters or workers (see the sketch after this list).
  • Use the kubeadm attributes in other parts of your Terraform script. This makes it easy to do things like:
    • enabling SSL termination by using the certificates generated for kubeadm in the code that creates your Load Balancer.
    • creating machine templates (for example, cloud-init code) that can be used for creating machines dynamically, without Terraform being involved (like autoscaling groups in AWS).
  • Automatic rolling upgrade of the cluster by just changing the base image of your machines. Terraform will take care of replacing old nodes with upgraded ones, and this provider will take care of draining the nodes.
  • Automatic deployment of some addons, like CNI drivers, the k8s Dashboard, Helm, etc.
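
As a minimal sketch of the scale-up/scale-down point above (reusing the libvirt example from this README; the variable name is illustrative), the number of workers can be driven by a single variable, and terraform apply will add or remove nodes when it changes:

variable "worker_count" {
  description = "number of worker nodes in the cluster"
  default     = 3
}

# from the libvirt provider
resource "libvirt_domain" "minion" {
  # scale the cluster up/down by changing var.worker_count and re-running `terraform apply`
  count = "${var.worker_count}"
  name  = "minion${count.index}"

  provisioner "kubeadm" {
    config = "${kubeadm.main.config}"
    join   = "${libvirt_domain.master.network_interface.0.addresses.0}"
  }
}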

(check the TODO for an updated list of features).

Status

This provider/provisioner is being actively developed, but I would still consider it ALPHA: there can be many rough edges, and some things may change without prior notice. To see what is left or planned, see the issues list and the roadmap.

Requirements

Quick start

$ mkdir -p $HOME/.terraform.d/plugins
$ # with go>=1.12
$ go build -v -o $HOME/.terraform.d/plugins/terraform-provider-kubeadm \
    github.com/inercia/terraform-provider-kubeadm/cmd/terraform-provider-kubeadm
$ go build -v -o $HOME/.terraform.d/plugins/terraform-provisioner-kubeadm \
    github.com/inercia/terraform-provider-kubeadm/cmd/terraform-provisioner-kubeadm

Documentation

Running the tests

You can run the unit tests with:

$ make test

There are end-to-end tests as well, which can be launched with:

$ make tests-e2e

Author(s)

License

  • Apache 2.0, See LICENSE file

terraform-provider-kubeadm's People

Contributors

brandonros, dblencowe, inercia, zachomedia

terraform-provider-kubeadm's Issues

Evaluate if we should switch from `kubectl` commands to API calls

We are currently running all the kubectl commands remotely: we upload a valid kubeconfig to the remote host and then run kubectl with that file. This makes things easier in some scenarios, like when the API server is not directly reachable from the machine where Terraform is being run.

However, this is also cumbersome and limits the possibilities for the future. For example, for applying some manifest, we must 1) generate a manifest file 2) upload it to the remote host 3) upload the kubeconfig 4) run kubectl apply in the remote machine 5) delete all these files. This would be much more straightforward if we could just use the Kubernetes client-go and access the API server from the local machine.

And we can think of more complex scenarios in the future, like the creation of machine descriptions for Cluster API and so on...

So we should evaluate whether it is really worth keeping the remote kubectl commands, or whether we should just switch to API access...

mv: cannot move to '/etc/cni/net.d/99-loopback.conf': Permission denied

Hi there!

I'm currently unable to spin up a cluster on AWS: when provisioning the master node on a t2.medium instance running Ubuntu, I get the following error:

Error: Command "sudo --non-interactive -E mkdir -p \"/etc/cni/net.d\" && mv -f \"/tmp/tmpfile-ef03c9.tmp\" \"/etc/cni/net.d/99-loopback.conf\"" exited with non-zero exit status: 1

Terraform output:

module.cluster.null_resource.masters (kubeadm): - kubeadm found
module.cluster.null_resource.masters (kubeadm): - kubectl found
module.cluster.null_resource.masters (kubeadm): Uploading to "/etc/cni/net.d/99-loopback.conf"
module.cluster.null_resource.masters (kubeadm): mv: cannot move '/tmp/tmpfile-ef03c9.tmp' to '/etc/cni/net.d/99-loopback.conf': Permission denied

Variables file:

variable "keypair_name" {
  type = string
}

variable "root_domain" {
  type = string
}

variable "hosted_zone_id" {
  type = string
}

variable "stack_name" {
  description = "identifier to make all your resources unique and avoid clashes with other users of this terraform project"
}

variable "aws_region" {
  default     = "eu-west-1"
  description = "Name of the region to be used"
}

variable "aws_az" {
  type        = "string"
  description = "AWS Availability Zone"
  default     = "eu-west-1a"
}

variable "ami_distro" {
  default     = "ubuntu"
  description = "AMI distro"
}

variable "kubeconfig" {
  default     = "kubeconfig.local"
  description = "A local copy of the admin kubeconfig created after the cluster initialization"
}

variable "cni" {
  default     = "flannel"
  description = "CNI driver"
}

variable "vpc_cidr" {
  type        = "string"
  default     = "10.1.0.0/16"
  description = "Subnet CIDR"
}

variable "public_subnet" {
  type        = "string"
  description = "CIDR blocks for each public subnet of vpc"
  default     = "10.1.1.0/24"
}

variable "private_subnet" {
  type        = "string"
  description = "Private subnet of vpc"
  default     = "10.1.4.0/24"
}

variable "master_size" {
  default     = "t2.medium"
  description = "Size of the master nodes"
}

variable "worker_size" {
  default     = "t2.medium"
  description = "Size of the worker nodes"
}

variable "worker_count" {
  default     = 2
  description = "Number of worker nodes"
}

variable "tags" {
  type        = "map"
  default     = {}
  description = "Extra tags used for the AWS resources created"
}

variable "authorized_keys" {
  type        = "list"
  default     = []
  description = "ssh keys to inject into all the nodes. First key will be used for creating a keypair."
}

Any input would be appreciated. Thanks!

Run `kubectl` in the remote machine instead of locally

We are currently running kubectl commands locally with ssh.DoLocalKubectlApply. However, some clusters do not expose the API server at an IP:port reachable from the local host where Terraform is being run.

We should instead:

  1. upload the kubeconfig to a temporary directory
  2. run any manifest necessary
  3. run kubectl in the remote machine.

We should also use the External Control Plane FQDN as the API server address...

Support user provided certificates

We are currently generating all the certificates in the resource CreateCerts function. Maybe we could:

  • check if the user is providing some certificate, i.e., d.GetOk("certs.ca_crt")
  • if the user is providing that certificate, save it to the temporary directory, so the certificate-generation function can detect it.
  • we will re-load it anyway later on, when calling certsConfig.FromDisk(certsDir)
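
For illustration only, a hypothetical configuration for this could look like the snippet below; the certs block with ca_crt/ca_key mirrors the one used in the certificate-rotation issue further down, but nothing here is implemented yet:

resource "kubeadm" "main" {
  # hypothetical: user-provided CA instead of auto-generated certificates
  certs {
    ca_crt = "${file("ca.crt")}"
    ca_key = "${file("ca.key")}"
  }
}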

no required module provides package

Trying to build the provider following the instructions:

> go version
go version go1.16 darwin/amd64
> mkdir -p $HOME/.terraform.d/plugins
> go build -v -o $HOME/.terraform.d/plugins/terraform-provider-kubeadm \
    github.com/inercia/terraform-provider-kubeadm/cmd/terraform-provider-kubeadm
no required module provides package github.com/inercia/terraform-provider-kubeadm/cmd/terraform-provisioner-kubeadm: working directory is not part of a module

Use kubeadm for rotating certificates

We should be able to detect if certificates have expired and then use kubeadm for rotating them.

We should investigate if we can do that, as nodes are not accessible once they have been provisioned. Maybe we will need some triggers block in a null_resource, like this:

resource "null_resource" "renew_certs" {
  count = "${var.master_count}"

  # Changes are triggered by any change in the CA
  triggers = {
    cluster_instance_ids = "${tls_self_signed_cert.ca.id}"
  }

  # Connect to one of the masters
  connection {
    host = "${element(aws_instance.master.*.public_ip, count.index)}"
  }

  provisioner "kubeadm" {
    certs {
      # we should detect the certificates have changed, and then run the right `kubeadm` command...
      ca_crt = "${tls_self_signed_cert.ca.cert_pem}"
      ca_key = "${tls_private_key.ca.private_key_pem}"
    }
  }
}

From the docs:

  • You can renew your certificates manually at any time with the kubeadm alpha certs renew command.
  • This command performs the renewal using CA (or front-proxy-CA) certificate and key stored in /etc/kubernetes/pki.
  • Warning: If you are running an HA cluster, this command needs to be executed on all the control-plane nodes.

Open questions:

  • does this update all the certs in the cluster, or just in the nodes where we run the kubeadm command?

An instance is not created in OpenStack

Hi friends, I installed OpenStack. All modules were installed without problems. Volumes, networks, subnets, ports, images and flavors are all created without errors. But the instance is not created; upon creation, it immediately gives an error:

 fault | {u'message ': u'Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance 1451041c-da04-478c-93dd-0acb60dd149a. ', U'code': 500, u'details ': u' File "/usr/lib/python2.7/site-packages /nova/conductor/manager.py ", line 627, in build_instances \ n raise exception.MaxRetriesExceeded (reason = msg) \ n ', u'created': u'2019-08-19T12: 54: 37Z '}

I would appreciate any help in solving this problem. Thanks!!!

I checked the neutron section in nova.conf with the definition of user_domain_name = Default and project_domain_name = Default; everything is fine there.

In the logs /var/log/nova/nova-compute.log there is this:

2019-08-19 15:54:32.151 1437 INFO nova.compute.claims [req-5138fd5d-0f86-4a7e-ad64-06e739d6c0ae 85e9ede8610641dcb75e67b76e0a8833 bf6641220c7747fa9810449e45b7a232 - 387dbcd0ba384713ad51666944113ab8 387dbcd0ba384713ad51666944113ab8] [instance: 1451041c-da04-478c-93dd-0acb60dd149a] Attempting claim on node compute.test.local: memory 300 MB, disk 3 GB, vcpus 1 CPU
2019-08-19 15:54:32.153 1437 INFO nova.compute.claims [req-5138fd5d-0f86-4a7e-ad64-06e739d6c0ae 85e9ede8610641dcb75e67b76e0a8833 bf6641220c7747fa9810449e45b7a232 - 387dbcd0ba384713ad51666944113ab8 387dbcd0ba384713ad51666944113ab8] [instance: 1451041c-da04-478c-93dd-0acb60dd149a] Total memory: 4095 MB, used: 512.00 MB
2019-08-19 15:54:32.153 1437 INFO nova.compute.claims [req-5138fd5d-0f86-4a7e-ad64-06e739d6c0ae 85e9ede8610641dcb75e67b76e0a8833 bf6641220c7747fa9810449e45b7a232 - 387dbcd0ba384713ad51666944113ab8 387dbcd0ba384713ad51666944113ab8] [instance: 1451041c-da04-478c-93dd-0acb60dd149a] memory limit not specified, defaulting to unlimited
2019-08-19 15:54:32.153 1437 INFO nova.compute.claims [req-5138fd5d-0f86-4a7e-ad64-06e739d6c0ae 85e9ede8610641dcb75e67b76e0a8833 bf6641220c7747fa9810449e45b7a232 - 387dbcd0ba384713ad51666944113ab8 387dbcd0ba384713ad51666944113ab8] [instance: 1451041c-da04-478c-93dd-0acb60dd149a] Total disk: 16 GB, used: 0.00 GB
2019-08-19 15:54:32.154 1437 INFO nova.compute.claims [req-5138fd5d-0f86-4a7e-ad64-06e739d6c0ae 85e9ede8610641dcb75e67b76e0a8833 bf6641220c7747fa9810449e45b7a232 - 387dbcd0ba384713ad51666944113ab8 387dbcd0ba384713ad51666944113ab8] [instance: 1451041c-da04-478c-93dd-0acb60dd149a] disk limit not specified, defaulting to unlimited
2019-08-19 15:54:32.154 1437 INFO nova.compute.claims [req-5138fd5d-0f86-4a7e-ad64-06e739d6c0ae 85e9ede8610641dcb75e67b76e0a8833 bf6641220c7747fa9810449e45b7a232 - 387dbcd0ba384713ad51666944113ab8 387dbcd0ba384713ad51666944113ab8] [instance: 1451041c-da04-478c-93dd-0acb60dd149a] Total vcpu: 4 VCPU, used: 0.00 VCPU
2019-08-19 15:54:32.154 1437 INFO nova.compute.claims [req-5138fd5d-0f86-4a7e-ad64-06e739d6c0ae 85e9ede8610641dcb75e67b76e0a8833 bf6641220c7747fa9810449e45b7a232 - 387dbcd0ba384713ad51666944113ab8 387dbcd0ba384713ad51666944113ab8] [instance: 1451041c-da04-478c-93dd-0acb60dd149a] vcpu limit not specified, defaulting to unlimited
2019-08-19 15:54:32.156 1437 INFO nova.compute.claims [req-5138fd5d-0f86-4a7e-ad64-06e739d6c0ae 85e9ede8610641dcb75e67b76e0a8833 bf6641220c7747fa9810449e45b7a232 - 387dbcd0ba384713ad51666944113ab8 387dbcd0ba384713ad51666944113ab8] [instance: 1451041c-da04-478c-93dd-0acb60dd149a] Claim successful on node compute.test.local
2019-08-19 15:54:32.437 1437 ERROR nova.compute.manager [req-5138fd5d-0f86-4a7e-ad64-06e739d6c0ae 85e9ede8610641dcb75e67b76e0a8833 bf6641220c7747fa9810449e45b7a232 - 387dbcd0ba384713ad51666944113ab8 387dbcd0ba384713ad51666944113ab8] Instance failed network setup after 1 attempt(s): PortNotUsable: Port 62347407-0d4e-438c-adf3-e0fd1011a6cd not usable for instance 1451041c-da04-478c-93dd-0acb60dd149a.
2019-08-19 15:54:32.437 1437 ERROR nova.compute.manager Traceback (most recent call last):
2019-08-19 15:54:32.437 1437 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1521, in _allocate_network_async
2019-08-19 15:54:32.437 1437 ERROR nova.compute.manager     resource_provider_mapping=resource_provider_mapping)
2019-08-19 15:54:32.437 1437 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1064, in allocate_for_instance
2019-08-19 15:54:32.437 1437 ERROR nova.compute.manager     context, instance, neutron, requested_networks, attach=attach))
2019-08-19 15:54:32.437 1437 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 749, in _validate_requested_port_ids
2019-08-19 15:54:32.437 1437 ERROR nova.compute.manager     instance=instance.uuid)
2019-08-19 15:54:32.437 1437 ERROR nova.compute.manager PortNotUsable: Port 62347407-0d4e-438c-adf3-e0fd1011a6cd not usable for instance 1451041c-da04-478c-93dd-0acb60dd149a.
2019-08-19 15:54:32.437 1437 ERROR nova.compute.manager 
2019-08-19 15:54:32.609 1437 INFO nova.virt.libvirt.driver [req-5138fd5d-0f86-4a7e-ad64-06e739d6c0ae 85e9ede8610641dcb75e67b76e0a8833 bf6641220c7747fa9810449e45b7a232 - 387dbcd0ba384713ad51666944113ab8 387dbcd0ba384713ad51666944113ab8] [instance: 1451041c-da04-478c-93dd-0acb60dd149a] Creating image
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [req-5138fd5d-0f86-4a7e-ad64-06e739d6c0ae 85e9ede8610641dcb75e67b76e0a8833 bf6641220c7747fa9810449e45b7a232 - 387dbcd0ba384713ad51666944113ab8 387dbcd0ba384713ad51666944113ab8] [instance: 1451041c-da04-478c-93dd-0acb60dd149a] Instance failed to spawn: PortNotUsable: Port 62347407-0d4e-438c-adf3-e0fd1011a6cd not usable for instance 1451041c-da04-478c-93dd-0acb60dd149a.
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a] Traceback (most recent call last):
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2495, in _build_resources
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]     yield resources
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2256, in _build_and_run_instance
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]     block_device_info=block_device_info)
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3167, in spawn
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]     mdevs=mdevs)
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5498, in _get_guest_xml
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]     network_info_str = str(network_info)
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]   File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 570, in __str__
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]     return self._sync_wrapper(fn, *args, **kwargs)
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]   File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 553, in _sync_wrapper
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]     self.wait()
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]   File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 585, in wait
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]     self[:] = self._gt.wait()
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 180, in wait
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]     return self._exit_event.wait()
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 132, in wait
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]     current.throw(*self._exc)
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 219, in main
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]     result = function(*args, **kwargs)
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]   File "/usr/lib/python2.7/site-packages/nova/utils.py", line 800, in context_wrapper
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]     return func(*args, **kwargs)
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1538, in _allocate_network_async
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]     six.reraise(*exc_info)
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1521, in _allocate_network_async
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]     resource_provider_mapping=resource_provider_mapping)
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1064, in allocate_for_instance
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]     context, instance, neutron, requested_networks, attach=attach))
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 749, in _validate_requested_port_ids
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a]     instance=instance.uuid)
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a] PortNotUsable: Port 62347407-0d4e-438c-adf3-e0fd1011a6cd not usable for instance 1451041c-da04-478c-93dd-0acb60dd149a.
2019-08-19 15:54:35.251 1437 ERROR nova.compute.manager [instance: 1451041c-da04-478c-93dd-0acb60dd149a] 
2019-08-19 15:54:35.255 1437 INFO nova.compute.manager [req-5138fd5d-0f86-4a7e-ad64-06e739d6c0ae 85e9ede8610641dcb75e67b76e0a8833 bf6641220c7747fa9810449e45b7a232 - 387dbcd0ba384713ad51666944113ab8 387dbcd0ba384713ad51666944113ab8] [instance: 1451041c-da04-478c-93dd-0acb60dd149a] Terminating instance
2019-08-19 15:54:35.264 1437 INFO nova.virt.libvirt.driver [-] [instance: 1451041c-da04-478c-93dd-0acb60dd149a] Instance destroyed successfully.
2019-08-19 15:54:35.265 1437 INFO nova.virt.libvirt.driver [req-5138fd5d-0f86-4a7e-ad64-06e739d6c0ae 85e9ede8610641dcb75e67b76e0a8833 bf6641220c7747fa9810449e45b7a232 - 387dbcd0ba384713ad51666944113ab8 387dbcd0ba384713ad51666944113ab8] [instance: 1451041c-da04-478c-93dd-0acb60dd149a] Deleting instance files /var/lib/nova/instances/1451041c-da04-478c-93dd-0acb60dd149a_del
2019-08-19 15:54:35.265 1437 INFO nova.virt.libvirt.driver [req-5138fd5d-0f86-4a7e-ad64-06e739d6c0ae 85e9ede8610641dcb75e67b76e0a8833 bf6641220c7747fa9810449e45b7a232 - 387dbcd0ba384713ad51666944113ab8 387dbcd0ba384713ad51666944113ab8] [instance: 1451041c-da04-478c-93dd-0acb60dd149a] Deletion of /var/lib/nova/instances/1451041c-da04-478c-93dd-0acb60dd149a_del complete
2019-08-19 15:54:35.385 1437 INFO nova.compute.manager [req-5138fd5d-0f86-4a7e-ad64-06e739d6c0ae 85e9ede8610641dcb75e67b76e0a8833 bf6641220c7747fa9810449e45b7a232 - 387dbcd0ba384713ad51666944113ab8 387dbcd0ba384713ad51666944113ab8] [instance: 1451041c-da04-478c-93dd-0acb60dd149a] Took 0.12 seconds to destroy the instance on the hypervisor.
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api [-] Unable to clear device ID for port '62347407-0d4e-438c-adf3-e0fd1011a6cd': BadRequest: Expecting to find domain in user. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-c1d9a12b-4f86-45d7-a879-a4f6cf22c3a3)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api Traceback (most recent call last):
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 686, in _unbind_ports
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     port_client.update_port(port_id, port_req_body)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 127, in wrapper
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     ret = obj(*args, **kwargs)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 808, in update_port
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     revision_number=revision_number)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 127, in wrapper
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     ret = obj(*args, **kwargs)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 2389, in _update_resource
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     return self.put(path, **kwargs)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 127, in wrapper
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     ret = obj(*args, **kwargs)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 363, in put
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     headers=headers, params=params)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 127, in wrapper
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     ret = obj(*args, **kwargs)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 331, in retry_request
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     headers=headers, params=params)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 127, in wrapper
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     ret = obj(*args, **kwargs)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 282, in do_request
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     headers=headers)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 340, in do_request
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     return self.request(url, method, **kwargs)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 328, in request
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     resp = super(SessionClient, self).request(*args, **kwargs)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 237, in request
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     return self.session.request(url, method, **kwargs)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 704, in request
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     auth_headers = self.get_auth_headers(auth)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 1097, in get_auth_headers
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     return auth.get_headers(self, **kwargs)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/keystoneauth1/plugin.py", line 95, in get_headers
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     token = self.get_token(session)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 88, in get_token
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     return self.get_access(session).auth_token
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 134, in get_access
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     self.auth_ref = self.get_auth_ref(session)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/generic/base.py", line 208, in get_auth_ref
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     return self._plugin.get_auth_ref(session, **kwargs)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/v3/base.py", line 178, in get_auth_ref
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     authenticated=False, log=False, **rkwargs)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 1045, in post
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     return self.request(url, 'POST', **kwargs)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 890, in request
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api     raise exceptions.from_response(resp, method, url)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api BadRequest: Expecting to find domain in user. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-c1d9a12b-4f86-45d7-a879-a4f6cf22c3a3)
2019-08-19 15:54:36.424 1437 ERROR nova.network.neutronv2.api 

Output of pip freeze:

alembic==1.0.7
amqp==2.4.1
aniso8601==0.82
appdirs==1.4.0
asn1crypto==0.24.0
automaton==1.16.0
Babel==2.6.0
backports.ssl-match-hostname==3.5.0.1
bcrypt==3.1.6
Beaker==1.5.4
beautifulsoup4==4.6.0
boto==2.45.0
Bottleneck==0.7.0
cachetools==3.1.0
castellan==1.2.2
cffi==1.11.2
chardet==3.0.4
cinder==14.0.1
click==6.7
cliff==2.14.1
cmd2==0.8.8
configobj==4.7.2
configshell-fb==1.1.23
contextlib2==0.5.5
cryptography==2.5
cursive==0.2.2
cycler==0.10.0
debtcollector==1.21.0
decorator==3.4.0
defusedxml==0.5.0
dnspython==1.15.0
dogpile.cache==0.6.8
enum34==1.0.4
etcd3gw==0.2.4
ethtool==0.8
eventlet==0.24.1
fasteners==0.14.1
Flask==1.0.2
Flask-RESTful==0.3.6
funcsigs==1.0.2
functools32==3.2.3.post2
future==0.17.0
futures==3.1.1
futurist==1.8.1
gevent==1.1.2
glance==18.0.0
glance-store==0.28.0
google-api-python-client==1.6.3
greenlet==0.4.12
httplib2==0.9.2
idna==2.5
iniparse==0.4
ipaddress==1.0.18
iso8601==0.1.11
itsdangerous==0.24
Jinja2==2.10
jmespath==0.9.0
jsonpatch==1.21
jsonpointer==1.10
jsonschema==2.6.0
jwcrypto==0.4.2
kazoo==2.2.1
keyring==5.7.1
keystone==15.0.0
keystoneauth1==3.13.1
keystonemiddleware==6.0.0
kitchen==1.1.1
kmod==0.1
kombu==4.2.2
ldappool==2.4.0
libvirt-python==4.5.0
logutils==0.3.3
lxml==3.2.1
M2Crypto==0.21.1
Mako==0.8.1
MarkupSafe==1.1.0
matplotlib==2.0.0
microversion-parse==0.2.1
monotonic==1.5
msgpack==0.6.1
munch==2.2.0
netaddr==0.7.19
netifaces==0.10.4
networkx==2.2
neutron==14.0.2
neutron-lib==1.25.0
nose==1.3.7
nova==19.0.1
numexpr==2.6.1
numpy==1.14.5
oauth2client==4.0.0
oauthlib==2.0.1
olefile==0.46
openstacksdk==0.27.0
os-brick==2.8.2
os-client-config==1.32.0
os-ken==0.3.1
os-resource-classes==0.3.0
os-service-types==1.6.0
os-traits==0.11.0
os-vif==1.15.1
os-win==4.2.0
os-xenapi==0.3.4
osc-lib==1.12.1
oslo.cache==1.33.3
oslo.concurrency==3.29.1
oslo.config==6.8.1
oslo.context==2.22.1
oslo.db==4.45.0
oslo.i18n==3.23.1
oslo.log==3.42.3
oslo.messaging==9.5.0
oslo.middleware==3.37.1
oslo.policy==2.1.1
oslo.privsep==1.32.1
oslo.reports==1.29.2
oslo.rootwrap==5.15.2
oslo.serialization==2.28.2
oslo.service==1.38.0
oslo.upgradecheck==0.2.1
oslo.utils==3.40.3
oslo.versionedobjects==1.35.1
oslo.vmware==2.32.2
osprofiler==2.6.0
ovs==2.11.0
ovsdbapp==0.15.0
oz==0.15.0
pandas==0.19.1
paramiko==2.4.2
passlib==1.7.1
Paste==1.7.5.1
PasteDeploy==1.5.2
pbr==5.1.2
pecan==1.3.2
perf==0.1
Pillow==5.4.1
ply==3.4
prettytable==0.7.2
psutil==5.5.1
pyasn1==0.3.7
pyasn1-modules==0.1.5
pycadf==2.9.0
pycparser==2.14
pycrypto==2.6.1
pycurl==7.19.0
pydot==1.4.1
pygobject==3.22.0
pygpgme==0.3
pyinotify==0.9.4
PyJWT==1.6.1
pyliblzma==0.5.3
PyMySQL==0.9.2
PyNaCl==1.3.0
pyngus==2.3.0
pyOpenSSL==19.0.0
pyparsing==2.3.1
pyperclip==1.6.4
pyroute2==0.5.3
pysaml2==4.6.5
pysendfile==2.0.0
PySocks==1.6.8
python-barbicanclient==4.8.1
python-cinderclient==4.2.1
python-dateutil==2.8.0
python-designateclient==2.11.0
python-editor==0.4
python-gflags==2.0
python-glanceclient==2.16.0
python-keystoneclient==3.19.0
python-ldap==3.1.0
python-linux-procfs==0.4.9
python-memcached==1.58
python-neutronclient==6.12.0
python-novaclient==13.0.1
python-openstackclient==3.18.0
python-qpid-proton==0.28.0
python-swiftclient==3.7.0
pytz==2016.10
pyudev==0.15
pyxattr==0.5.1
PyYAML==3.10
pyzmq==14.7.0
redis==3.1.0
repoze.lru==0.4
requests==2.21.0
requestsexceptions==1.4.0
retrying==1.2.3
rfc3986==1.3.0
Routes==2.4.1
rsa==3.4.1
rtslib-fb==2.1.63
schedutils==0.4
scipy==0.18.0
scrypt==0.8.0
setproctitle==1.1.9
simplegeneric==0.8
simplejson==3.10.0
singledispatch==3.4.0.3
six==1.12.0
slip==0.4.0
slip.dbus==0.4.0
SQLAlchemy==1.2.17
sqlalchemy-migrate==0.11.0
sqlparse==0.1.18
statsd==3.2.1
stevedore==1.30.1
subprocess32==3.2.6
suds-jurko==0.7.dev0
tables==3.3.0
targetcli-fb===2.1.fb46
taskflow==3.5.0
Tempita==0.5.1
tenacity==5.0.2
tinyrpc==0.6.dev0
tooz==1.64.2
unicodecsv==0.14.1
uritemplate==3.0.0
urlgrabber==3.10
urllib3==1.24.1
urwid==1.1.1
vine==1.2.0
voluptuous==0.11.5
waitress==0.8.9
warlock==1.0.1
wcwidth==0.1.7
weakrefmethod==1.0.2
WebOb==1.8.5
websockify==0.8.0
WebTest==2.0.23
Werkzeug==0.14.1
wrapt==1.11.1
WSME==0.9.3
yappi==1.0
yum-metadata-parser==1.1.4
zake==0.2.2

flannel configuration options

We are currently hardcoding the flannel configuration, but it would be nice to have some configurability in the options, so we could choose the backend or the subnets.
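
As an illustration of what such configurability might look like (these arguments do not exist today; the names are made up), extra knobs could live inside the existing cni block:

resource "kubeadm" "main" {
  cni {
    plugin = "flannel"

    # hypothetical options, not currently implemented:
    flannel_backend = "vxlan"          # or "host-gw", "udp"...
    flannel_subnet  = "10.244.0.0/16"  # pod subnet handed to flannel
  }
}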

No longer in use?

Seeing as even kubeadm 1.16 support has not been merged, I gather this is not actually being used by the author anymore?

Wait for readiness of nodes, deployments, etc...

We can wait for nodes to be ready with kubectl wait --for=condition=ready nodes --timeout=5m, and we can also wait for deployments. We should evaluate whether we should wait for some of these things in order to increase reliability.
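
Until something like this is built into the provisioner, one possible workaround is a null_resource that blocks until all nodes report Ready. This is only a sketch: it assumes kubectl and a valid kubeconfig are available on the machine running Terraform, reuses the libvirt example from the README, and var.kubeconfig is illustrative:

resource "null_resource" "wait_for_nodes" {
  # re-run the check whenever the set of worker nodes changes
  triggers = {
    nodes = "${join(",", libvirt_domain.minion.*.id)}"
  }

  provisioner "local-exec" {
    command = "kubectl --kubeconfig='${var.kubeconfig}' wait --for=condition=ready nodes --all --timeout=5m"
  }
}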

Drain nodes when destroying resources

We would need to destroy cluster nodes in an ordered way. That would mean we would have to call the provisioner on destroy, like this:

resource "libvirt_domain" "minion" {
  count      = 3
  name       = "minion${count.index}"
  ...
  # provisioner for adding the node to the cluster
  provisioner "kubeadm" {
    config = "${data.kubeadm.main.config}"
    join = "${libvirt_domain.master.network_interface.0.addresses.0}"
  }

  # provisioner for removing the node from the cluster
  provisioner "kubeadm" {
    when = "destroy"
    drain = true
  }
}

The destruction provisioner would have to:

  • detect if etcd is running on this node and, if so, remove the instance from the etcd cluster.
  • run a kubectl drain on the node.
  • clean up certificates and config files.
  • do a kubeadm reset (just to be sure).

Config options for cloud providers

We should add some options to the resource "kubeadm" for configuring the cloud provider (see the sketch after this list)...

  • config options for some cloud providers (e.g., OpenStack) (#22)
  • config options for other providers that require more configuration actions (e.g., AWS)
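
A hedged sketch of how this might look, building on the cloud block already used in the OpenStack issue further down (the config argument is purely illustrative and not implemented):

resource "kubeadm" "main" {
  cloud {
    provider = "openstack"

    # hypothetical: provider-specific configuration passed through to kubeadm/kubelet
    config = "${file("cloud.conf")}"
  }
}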

Config options for OIDC

We should be able to configure the API server for accepting OpenID Connect Tokens, by adding flags like these:

--oidc-issuer-url=https://keycloak.example.com/auth/realms/YOUR_REALM
--oidc-client-id=kubernetes
--oidc-groups-claim=groups

to the API server.

As an example, check the config used by kubelogin.
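
A hypothetical sketch of how this could surface in the resource "kubeadm" schema (none of these arguments exist today; the names simply mirror the API server flags above):

resource "kubeadm" "main" {
  api {
    external = "loadbalancer.external.com"

    # hypothetical OIDC arguments, mapped 1:1 to the API server flags above:
    oidc_issuer_url   = "https://keycloak.example.com/auth/realms/YOUR_REALM"
    oidc_client_id    = "kubernetes"
    oidc_groups_claim = "groups"
  }
}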

Issue with Openstack

Running into an issue on OpenStack.
kubeadm.tf >

resource "kubeadm" "main" {
  config_path = var.kubeconfig

  api {
    external = openstack_lb_loadbalancer_v2.k8s-lb.vip_address
  }

  network {
    dns_domain = var.domain_name
    services   = var.serviceCIDR
    pods       = var.podCIDR
  }

  cloud {
    provider = "openstack"
  }

  runtime {
    engine = "docker"
  }

  cni {
    plugin = "flannel"
  }

  helm {
    install = true
  }

  dashboard {
    install = false
  }

}

resource "null_resource" "masters" {
  count = var.master_count

  connection {
    type         = "ssh"
    user         = var.default_user
    agent        = true
    host         = element(openstack_networking_port_v2.k8s_master_port.*.all_fixed_ips.0, count.index)
    private_key  = var.privkey
    bastion_host = var.bastion_dependencies["loadbalancer-external-ip"]
  }

  provisioner "kubeadm" {
    config = "${kubeadm.main.config}"

    ignore_checks = [
      "KubeletVersion",
    ]

    nodename = element(openstack_compute_instance_v2.k8s_master.*.network.name, count.index)
    role     = "master"
    join     = "${count.index == 0 ? "" : openstack_networking_port_v2.k8s_master_port.0.all_fixed_ips.0}"

    install {
      auto = true
    }
  }

  provisioner "kubeadm" {
    when   = "destroy"
    config = "${kubeadm.main.config}"
    drain  = true
  }
}

result of terraform plan && terraform apply >

module.k8s.kubeadm.main: Creating...
module.k8s.kubeadm.main: Creation complete after 1s [id=e46a744c6466176ebc207c51f1dc0de9]
module.k8s.null_resource.masters[1]: Creating...
module.k8s.null_resource.masters[2]: Creating...
module.k8s.null_resource.masters[0]: Creating...

Error: 3 problems:

- Unsupported attribute: This value does not have any attributes.
- Unsupported attribute: This value does not have any attributes.
- Unsupported attribute: This value does not have any attributes.



Error: 3 problems:

- Unsupported attribute: This value does not have any attributes.
- Unsupported attribute: This value does not have any attributes.
- Unsupported attribute: This value does not have any attributes.



Error: 3 problems:

- Unsupported attribute: This value does not have any attributes.
- Unsupported attribute: This value does not have any attributes.
- Unsupported attribute: This value does not have any attributes.

Docker in Docker currently broken 2020-03-29

  1. I had to tweak the cluster.tf file because Terraform 0.12 doesn't like your use of labels { ... }; it needs to be labels = { ... }, I'm guessing.

  2. I had to get rid of your definition of host in the docker provider

  3. It hits a file conflict during install:

Checking for file conflicts: [..........error]
Detected 2 file conflicts:

File /usr/bin/kubelet
  from install of
     kubernetes-kubelet-1.17.0-lp151.2.1.x86_64 (kubic)
  conflicts with file from install of
     kubernetes-kubelet-common-1.17.4-2.1.x86_64 (extra-repo0)

File /usr/share/man/man1/kubelet.1.gz
  from install of
     kubernetes-kubelet-1.17.0-lp151.2.1.x86_64 (kubic)
  conflicts with file from install of
     kubernetes-kubelet-common-1.17.4-2.1.x86_64 (extra-repo0)

File conflicts happen when two packages attempt to install files with the same name but different contents. If you continue, conflicting files will be replaced losing the previous content.
Continue? [yes/no] (no): no

Problem occurred during or after installation or removal of packages:
Installation has been aborted as directed.
History:
 - ABORT request: 

Please see the above error message for a hint.
The command '/bin/sh -c zypper ar --refresh --enable --no-gpgcheck         https://download.opensuse.org/tumbleweed/repo/oss extra-repo0     && zypper ar --refresh --enable --no-gpgcheck         https://download.opensuse.org/repositories/devel:/kubic/openSUSE_Leap_15.1 kubic     && zypper ref -r extra-repo0     && zypper ref -r kubic     && zypper in -y --no-recommends         ca-certificates curl gpg2 lsb-release         systemd systemd-sysvinit libsystemd0         conntrack-tools iptables iproute2         ethtool socat util-linux ebtables udev kmod         bash rsync         docker         openssh         kubernetes-kubeadm         kubernetes-kubelet         kubernetes-client         kmod         cni         cni-plugins     && zypper clean -a     && rm -f /lib/systemd/system/multi-user.target.wants/*     && rm -f /etc/systemd/system/*.wants/*     && rm -f /lib/systemd/system/local-fs.target.wants/*     && rm -f /lib/systemd/system/sockets.target.wants/*udev*     && rm -f /lib/systemd/system/sockets.target.wants/*initctl*     && rm -f /lib/systemd/system/basic.target.wants/*     && echo "ReadKMsg=no" >> /etc/systemd/journald.conf     && systemctl enable docker.service     && systemctl enable sshd.service' returned a non-zero code: 8
make: *** [image] Error 8


Nodes NotReady - Using version v1.16.*

So right now, if you try to run this provider using version v1.16.*, it will install the latest kubeadm and kubelet packages (1.17), and they are not compatible with Kubernetes API version 1.16.*...

I'm checking the scripts, and the version set for the provider is not used to install the packages (it will always grab the latest one). I'm trying to prepare a fix that will use the API version to install matching packages (kubelet and kubeadm).

This is the error I'm getting in the kubelet: kubernetes/kubernetes#86094

Provision cluster in existing VPS

Is there any provider which can provision nodes on an existing VPS by making the host OS run kubeadm (i.e., without any virtualization on top of it)?

CoreOS support

Hi @inercia ,
I'm working on a project to create a k8s cluster on bare-metal servers. Your project could help me a lot to deploy a cluster, but I have some doubts...

  1. Is there support for CoreOS?
  2. I will be using SSH connections for provisioning; that is possible, right?

Thank you in advance
