
k0sctl's People

Contributors

0skillallluck, abubyr, andrelaszlo, bephinix, christian-rau, daniel-naegele, danj-replicated, dependabot[bot], emosbaugh, erdii, hrenard, irumaru, jabbrwcky, janartodesk, jasmingacic, jnummelin, juanluisvaladas, kaplan-michael, kke, mic92, msa0311, mviitane, oogy, ricardomaraschini, rtsp, s0j, trawler, twz123, walf443, ydkn


k0sctl's Issues

Fix Typo in Readme

There is a typo in our readme in the k0sctl config file example:
"instalFlags" should be "installFlags"
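For reference, here is how the corrected field would appear in a host entry (a hedged sketch; the address and the flag value are only illustrative):

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
spec:
  hosts:
  - ssh:
      address: 10.0.0.1      # illustrative address
      user: root
      port: 22
    role: worker
    installFlags:            # correct spelling, not "instalFlags"
    - --debug
```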

Get Kubeconfig

Ability for k0sctl to fetch kubeconfig from the controller node.

Implement backup

Implement cluster backup for easy recovery if things go wrong. These backups can also be used for creating similar clusters elsewhere.

Add smoke tests

A simple apply should do.

I guess that to get a full exercise it should have two controllers and at least one worker, but fewer will probably suffice for now.

#33 would speed it up a lot, but then it needs to be run with uploadBinary: true, and the downloads will not be exercised.
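A minimal smoke-test config along these lines could look like the following (a sketch; addresses and user are placeholders, and uploadBinary is set per host):

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: smoke-test
spec:
  hosts:
  - ssh:
      address: 10.0.0.1     # placeholder address
      user: root
    role: controller
    uploadBinary: true
  - ssh:
      address: 10.0.0.2
      user: root
    role: controller
    uploadBinary: true
  - ssh:
      address: 10.0.0.3
      user: root
    role: worker
    uploadBinary: true
```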

CoreOS support

Hey folks, I was wondering if you plan to support CoreOS (Fedora CoreOS & RHEL CoreOS).
Any idea what it would take to add this?

As of now, CoreOS is detected as Fedora, so k0sctl tries to install packages using yum, which is not present on CoreOS, and the installation fails.

Thanks

Server's IP not added in the certificate

I've created a single cluster with the following configuration (only the IP addresses are different from the ones provided in the default config).

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 163.172.190.5
      user: root
      port: 22
      keyPath: /Users/luc/.ssh/id_rsa
    role: server
  - ssh:
      address: 163.172.133.74
      user: root
      port: 22
      keyPath: /Users/luc/.ssh/id_rsa
    role: worker
  k0s:
    version: 0.10.0

Everything went fine:

(Screenshot taken 2021-02-06 at 19:32:53)

But when getting the kubeconfig, it seems the IP provided cannot be used to query the API Server.

k0sctl kubeconfig -c k0sctl.yaml > kubeconfig
export KUBECONFIG=$PWD/kubeconfig

kubectl get po -A
Unable to connect to the server: x509: certificate is valid for 127.0.0.1, 10.71.94.49, 127.0.0.1, 10.96.0.1, not 163.172.190.5

Should the master's external IP be added to the SANs so it is included in the certificate and the API server can be reached from the outside? Any hints on what I'm missing?
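k0s's own config allows listing extra SANs and an external address under spec.api; embedding that under k0sctl's spec.k0s.config should get the public IP into the certificate. A hedged sketch (field names are from the k0s v1beta1 config as I understand it, so double-check against your k0s version):

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
spec:
  hosts:
  # ... hosts as above ...
  k0s:
    version: 0.10.0
    config:
      spec:
        api:
          externalAddress: 163.172.190.5   # public controller IP
          sans:
          - 163.172.190.5
```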

Error "unknown field 'Direct'" on Apple Silicon

On Apple Silicon

❯ GO111MODULE=on go get github.com/k0sproject/k0sctl@latest
...
# github.com/k0sproject/k0sctl/integration/segment
../../../go/pkg/mod/github.com/k0sproject/[email protected]/integration/segment/segment.go:29:2: unknown field 'Direct' in struct literal of type "github.com/segmentio/analytics-go".Context

I've noticed that Apple Silicon support is very new (#114) and this issue looks related to the change in #85

❯ go version
go version go1.16.2 darwin/arm64

Add a mac homebrew formula

The analytics key will need to be hardcoded unless the formula is made into a brew "cask", which is quite uncommon for open source projects.

Aborting k0s download should remove partial files

  • Aborting a download to the local cache cannot be recovered from except by manually removing the file; otherwise the broken partial file will be uploaded to the hosts.
  • Aborting a remote download will likely make "k0s version" fail, and a broken binary will remain on the host, possibly interfering with the next gather-facts run.

Documentation

Documentation - README

  • Prerequisites
  • Install guide
  • Config reference
  • Command reference

Backup k0s.yaml before overwriting

Generated configs should maybe have something like:

# generated-by-k0sctl $TIMESTAMP
apiVersion: ...

Before writing out a k0s.yaml, k0sctl would do:

grep -q "generated-by-k0sctl" /etc/k0s/k0s.yaml || \
mv /etc/k0s/k0s.yaml /etc/k0s/k0s.yaml.old
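The guard could be sketched as a small script; this version uses ./k0s.yaml in place of /etc/k0s/k0s.yaml so it can be tried anywhere (the header format and path are assumptions taken from the proposal above):

```shell
# Sketch of the proposed guard. Uses ./k0s.yaml instead of
# /etc/k0s/k0s.yaml so it can run without root (path is illustrative).
CFG=./k0s.yaml
if [ -f "$CFG" ] && ! grep -q 'generated-by-k0sctl' "$CFG"; then
  # keep a copy of a hand-edited config before overwriting it
  mv "$CFG" "$CFG.old"
fi
printf '# generated-by-k0sctl %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" > "$CFG"
printf 'apiVersion: k0s.k0sproject.io/v1beta1\n' >> "$CFG"
```

Running it twice overwrites only its own generated file; a hand-written k0s.yaml would be preserved as k0s.yaml.old.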

Release pipeline

We need a release pipeline: from the creation of a tag to having k0sctl binaries available for download in GitHub releases.

Multi-node cluster

Hey guys, I'm trying to deploy a multinode cluster for HA with the following config:

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  k0s:
    version: 0.11.0
  hosts:
  - ssh:
      address: 172.17.10.103
      user: platform
      port: 22
    role: controller
  - ssh:
      address: 172.17.10.104
      user: platform
      port: 22
    role: controller
  - ssh:
      address: 172.17.10.105
      user: platform
      port: 22
    role: controller
  - ssh:
      address: 172.17.10.106
      user: platform
      port: 22
    role: worker
  - ssh:
      address: 172.17.10.107
      user: platform
      port: 22
    role: worker
  - ssh:
      address: 172.17.10.109
      user: platform
      port: 22
    role: worker

The command k0sctl apply --config k0sconfig.yml results in:

k0sctl apply --config k0sctl.yaml

⠀⣿⣿⡇⠀⠀⢀⣴⣾⣿⠟⠁⢸⣿⣿⣿⣿⣿⣿⣿⡿⠛⠁⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀█████████ █████████ ███
⠀⣿⣿⡇⣠⣶⣿⡿⠋⠀⠀⠀⢸⣿⡇⠀⠀⠀⣠⠀⠀⢀⣠⡆⢸⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀███          ███    ███
⠀⣿⣿⣿⣿⣟⠋⠀⠀⠀⠀⠀⢸⣿⡇⠀⢰⣾⣿⠀⠀⣿⣿⡇⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀███          ███    ███
⠀⣿⣿⡏⠻⣿⣷⣤⡀⠀⠀⠀⠸⠛⠁⠀⠸⠋⠁⠀⠀⣿⣿⡇⠈⠉⠉⠉⠉⠉⠉⠉⠉⢹⣿⣿⠀███          ███    ███
⠀⣿⣿⡇⠀⠀⠙⢿⣿⣦⣀⠀⠀⠀⣠⣶⣶⣶⣶⣶⣶⣿⣿⡇⢰⣶⣶⣶⣶⣶⣶⣶⣶⣾⣿⣿⠀█████████    ███    ██████████

k0sctl v0.5.0 Copyright 2021, k0sctl authors.
Anonymized telemetry of usage will be sent to the authors.
By continuing to use k0sctl you agree to these terms:
https://k0sproject.io/licenses/eula
INFO ==> Running phase: Connect to hosts
INFO [ssh] 172.17.10.109:22: connected
INFO [ssh] 172.17.10.106:22: connected
INFO [ssh] 172.17.10.105:22: connected
INFO [ssh] 172.17.10.107:22: connected
INFO [ssh] 172.17.10.104:22: connected
INFO [ssh] 172.17.10.103:22: connected
INFO ==> Running phase: Detect host operating systems
INFO [ssh] 172.17.10.109:22: is running Ubuntu 20.04.2 LTS
INFO [ssh] 172.17.10.103:22: is running Ubuntu 20.04.2 LTS
INFO [ssh] 172.17.10.105:22: is running Ubuntu 20.04.2 LTS
INFO [ssh] 172.17.10.104:22: is running Ubuntu 20.04.2 LTS
INFO [ssh] 172.17.10.107:22: is running Ubuntu 20.04.2 LTS
INFO [ssh] 172.17.10.106:22: is running Ubuntu 20.04.2 LTS
INFO ==> Running phase: Prepare hosts
INFO ==> Running phase: Gather host facts
INFO [ssh] 172.17.10.103:22: discovered ens160 as private interface
INFO [ssh] 172.17.10.105:22: discovered ens160 as private interface
INFO [ssh] 172.17.10.104:22: discovered ens160 as private interface
INFO ==> Running phase: Validate hosts
INFO ==> Running phase: Gather k0s facts
INFO [ssh] 172.17.10.103:22: found existing configuration
INFO [ssh] 172.17.10.104:22: found existing configuration
INFO [ssh] 172.17.10.105:22: found existing configuration
INFO ==> Running phase: Validate facts
INFO ==> Running phase: Configure k0s
INFO [ssh] 172.17.10.103:22: validating configuration
INFO [ssh] 172.17.10.103:22: configuration was changed
INFO [ssh] 172.17.10.104:22: validating configuration
INFO [ssh] 172.17.10.105:22: validating configuration
INFO ==> Running phase: Initialize the k0s cluster
INFO [ssh] 172.17.10.103:22: installing k0s controller
INFO [ssh] 172.17.10.103:22: waiting for the k0s service to start
INFO [ssh] 172.17.10.103:22: waiting for kubernetes api to respond
INFO ==> Running phase: Install controllers
INFO [ssh] 172.17.10.103:22: generating token
INFO [ssh] 172.17.10.104:22: writing join token
INFO [ssh] 172.17.10.104:22: installing k0s controller
INFO [ssh] 172.17.10.104:22: starting service
INFO [ssh] 172.17.10.104:22: waiting for the k0s service to start
INFO [ssh] 172.17.10.104:22: waiting for kubernetes api to respond
INFO [ssh] 172.17.10.103:22: generating token
INFO [ssh] 172.17.10.105:22: writing join token
INFO [ssh] 172.17.10.105:22: installing k0s controller
INFO [ssh] 172.17.10.105:22: starting service
INFO [ssh] 172.17.10.105:22: waiting for the k0s service to start
INFO [ssh] 172.17.10.105:22: waiting for kubernetes api to respond
INFO ==> Running phase: Install workers
INFO [ssh] 172.17.10.103:22: generating token
INFO [ssh] 172.17.10.107:22: writing join token
INFO [ssh] 172.17.10.109:22: writing join token
INFO [ssh] 172.17.10.106:22: writing join token
INFO [ssh] 172.17.10.107:22: installing k0s worker
INFO [ssh] 172.17.10.106:22: installing k0s worker
INFO [ssh] 172.17.10.109:22: installing k0s worker
INFO [ssh] 172.17.10.109:22: starting service
INFO [ssh] 172.17.10.107:22: starting service
INFO [ssh] 172.17.10.106:22: starting service
INFO [ssh] 172.17.10.109:22: waiting for node to become ready
INFO [ssh] 172.17.10.107:22: waiting for node to become ready
INFO [ssh] 172.17.10.106:22: waiting for node to become ready
INFO ==> Running phase: Disconnect from hosts
INFO ==> Finished in 1m52s
INFO k0s cluster version 0.11.0 is now installed
INFO Tip: To access the cluster you can now fetch the admin kubeconfig using:
INFO      k0sctl kubeconfig

But if I go to the controller 1 node and run tail /var/log/syslog, I get the following output (it writes a lot):

Mar  4 10:11:27 platform-devlab-node01 k0s[1928]: time="2021-03-04 10:11:27" level=info msg="I0304 10:11:27.218833    1975 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/k0s/konnectivity-server/konnectivity-server.sock  <nil> 0 <nil>}] <nil> <nil>}" component=kube-apiserver
Mar  4 10:11:27 platform-devlab-node01 k0s[1928]: time="2021-03-04 10:11:27" level=info msg="I0304 10:11:27.219077    1975 clientconn.go:948] ClientConn switching balancer to \"pick_first\"" component=kube-apiserver
Mar  4 10:11:27 platform-devlab-node01 k0s[1928]: time="2021-03-04 10:11:27" level=info msg="I0304 10:11:27.219349    1975 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick" component=kube-apiserver
Mar  4 10:11:27 platform-devlab-node01 k0s[1928]: time="2021-03-04 10:11:27" level=info msg="E0304 10:11:27.221480    1984 server.go:276] \"Failed to get a backend\" err=\"No backend available\"" component=konnectivity

If I go to controllers 2 and 3, I get the following output:

Mar  4 10:14:41 platform-devlab-node03 k0s[1951]: time="2021-03-04 10:14:41" level=info msg="E0304 10:14:41.422303    2001 server.go:569] \"could not get frontent client\" err=\"can't find connID 8 in the frontends[83794678-9c68-429a-a810-bb781003f76e]\"" component=konnectivity
Mar  4 10:14:45 platform-devlab-node03 k0s[1951]: time="2021-03-04 10:14:45" level=info msg="current config matches existing, not gonna do anything" component=coredns
Mar  4 10:14:45 platform-devlab-node03 k0s[1951]: time="2021-03-04 10:14:45" level=info msg="current config matches existing, not gonna do anything" component=calico
Mar  4 10:14:45 platform-devlab-node03 k0s[1951]: time="2021-03-04 10:14:45" level=info msg="current config matches existing, not gonna do anything" component=kubeproxy


Mar  4 10:14:47 platform-devlab-node03 k0s[1951]: time="2021-03-04 10:14:47" level=info msg="E0304 10:14:47.683364    2001 server.go:245] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Mar  4 10:14:47 platform-devlab-node03 k0s[1951]: time="2021-03-04 10:14:47" level=info msg="E0304 10:14:47.683987    2001 server.go:569] \"could not get frontent client\" err=\"can't find connID 6 in the frontends[d0c61a27-8735-4adc-9318-62f3327ae7b5]\"" component=konnectivity
Mar  4 10:14:48 platform-devlab-node03 k0s[1951]: time="2021-03-04 10:14:48" level=info msg="E0304 10:14:48.718225    2012 scheduler.go:379] scheduler cache AssumePod failed: pod 5f6dc6ec-7ff0-466f-9950-9aaf70e92f49 is in the cache, so can't be assumed" component=kube-scheduler
Mar  4 10:14:48 platform-devlab-node03 k0s[1951]: time="2021-03-04 10:14:48" level=info msg="E0304 10:14:48.718312    2012 factory.go:337] \"Error scheduling pod; retrying\" err=\"pod 5f6dc6ec-7ff0-466f-9950-9aaf70e92f49 is in the cache, so can't be assumed\" pod=\"kube-system/metrics-server-6fbcd86f7b-kdfx4\"" component=kube-scheduler
Mar  4 10:14:49 platform-devlab-node03 k0s[1951]: time="2021-03-04 10:14:49" level=info msg="I0304 10:14:49.449082    1991 clientconn.go:106] parsed scheme: \"\"" component=kube-apiserver
Mar  4 10:14:49 platform-devlab-node03 k0s[1951]: time="2021-03-04 10:14:49" level=info msg="I0304 10:14:49.449225    1991 clientconn.go:106] scheme \"\" not registered, fallback to default scheme" component=kube-apiserver
Mar  4 10:14:49 platform-devlab-node03 k0s[1951]: time="2021-03-04 10:14:49" level=info msg="I0304 10:14:49.449423    1991 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/k0s/konnectivity-server/konnectivity-server.sock  <nil> 0 <nil>}] <nil> <nil>}" component=kube-apiserver
Mar  4 10:14:49 platform-devlab-node03 k0s[1951]: time="2021-03-04 10:14:49" level=info msg="I0304 10:14:49.449520    1991 clientconn.go:948] ClientConn switching balancer to \"pick_first\"" component=kube-apiserver
Mar  4 10:14:49 platform-devlab-node03 k0s[1951]: time="2021-03-04 10:14:49" level=info msg="I0304 10:14:49.449850    1991 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick" component=kube-apiserver

kubectl get all -A
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
kube-system   pod/calico-kube-controllers-5f6546844f-ssf5m   1/1     Running   0          12m
kube-system   pod/calico-node-5mwsx                          1/1     Running   0          12m
kube-system   pod/calico-node-d6xgr                          1/1     Running   0          12m
kube-system   pod/calico-node-rphdx                          1/1     Running   0          12m
kube-system   pod/coredns-5c98d7d4d8-sbzb8                   1/1     Running   0          13m
kube-system   pod/konnectivity-agent-7q58b                   1/1     Running   0          12m
kube-system   pod/konnectivity-agent-c8wpd                   1/1     Running   0          12m
kube-system   pod/konnectivity-agent-hnvzb                   1/1     Running   0          12m
kube-system   pod/kube-proxy-476p7                           1/1     Running   0          12m
kube-system   pod/kube-proxy-6t7pr                           1/1     Running   0          12m
kube-system   pod/kube-proxy-kr9gc                           1/1     Running   0          12m
kube-system   pod/metrics-server-6fbcd86f7b-kdfx4            1/1     Running   0          12m

NAMESPACE     NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP                  14m
kube-system   service/kube-dns         ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   13m
kube-system   service/metrics-server   ClusterIP   10.103.9.201   <none>        443/TCP                  12m

NAMESPACE     NAME                                DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node          3         3         3       3            3           kubernetes.io/os=linux   13m
kube-system   daemonset.apps/konnectivity-agent   3         3         3       3            3           kubernetes.io/os=linux   14m
kube-system   daemonset.apps/kube-proxy           3         3         3       3            3           kubernetes.io/os=linux   13m

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           13m
kube-system   deployment.apps/coredns                   1/1     1            1           13m
kube-system   deployment.apps/metrics-server            1/1     1            1           12m

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-5f6546844f   1         1         1       13m
kube-system   replicaset.apps/coredns-5c98d7d4d8                   1         1         1       13m
kube-system   replicaset.apps/metrics-server-6fbcd86f7b            1         1         1       12m

K0sctl should set defaults for k0s.config apiVersion and kind

It could use "k0s default-config" to do that.

This just feels weird:

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
spec:
  k0s:
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: Cluster

You should be able to leave those out and just start with images, spec, or whatever. (It would be excellent if everything relevant in k0s lived under spec, so you could just put the contents of that under k0sctl's spec.k0s.config.)
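With such defaulting in place, a config could start directly from k0s's spec. A sketch of the proposed (not current) syntax, using spec.network.provider as an arbitrary example field:

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
spec:
  k0s:
    config:
      # apiVersion and kind would be defaulted by k0sctl
      spec:
        network:
          provider: calico
```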
