
k3s-ansible-traefik-rancher's People

Contributors

christhepcgeek, timothystewart6


k3s-ansible-traefik-rancher's Issues

Script Errors: Task Apply Traefik

Trying out your script... I seem to get errors on one of my masters. I've reset and reinstalled many times, but I keep hitting the same error on 192.168.1.200 (master 1).

Hosts

[master]
192.168.1.200
192.168.1.201
192.168.1.202

[node]
192.168.1.203
192.168.1.204
192.168.1.205

[k3s_cluster:children]
master
node


TASK [traefik : apply traefik config] ************************************************************************************************************************************************************************************
Sunday 24 July 2022 17:15:29 -0400 (0:00:10.278) 0:03:52.426 ***********
fatal: [192.168.1.200]: FAILED! => {"changed": true, "cmd": ["kubectl", "apply", "-f", "/tmp/traefik/traefik-config.yaml"], "delta": "0:00:00.867203", "end": "2022-07-24 17:15:30.747372", "msg": "non-zero return code", "rc": 1, "start": "2022-07-24 17:15:29.880169", "stderr": "time="2022-07-24T17:15:30-04:00" level=warning msg="Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode to modify kube config permissions"\nerror: error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied", "stderr_lines": ["time="2022-07-24T17:15:30-04:00" level=warning msg="Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode to modify kube config permissions"", "error: error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied"], "stdout": "", "stdout_lines": []}

PLAY RECAP ***************************************************************************************************************************************************************************************************************
192.168.1.200 : ok=42 changed=23 unreachable=0 failed=1 skipped=13 rescued=0 ignored=0
192.168.1.201 : ok=26 changed=9 unreachable=0 failed=0 skipped=19 rescued=0 ignored=0
192.168.1.202 : ok=26 changed=9 unreachable=0 failed=0 skipped=19 rescued=0 ignored=0
192.168.1.203 : ok=10 changed=3 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
192.168.1.204 : ok=10 changed=3 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
192.168.1.205 : ok=10 changed=3 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
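The stderr shows that kubectl on 192.168.1.200 cannot read /etc/rancher/k3s/k3s.yaml. A common workaround, following the hint in the error message itself (a sketch, not necessarily this playbook's intended fix), is to have k3s write the kubeconfig with relaxed permissions via its server-side config file:

```yaml
# /etc/rancher/k3s/config.yaml on the server
# Equivalent to starting k3s with --write-kubeconfig-mode "0644".
write-kubeconfig-mode: "0644"
```

Alternatively, copy the kubeconfig to the ansible user's own ~/.kube/config (with that user as owner) and point KUBECONFIG at it; either way the "apply traefik config" task should then be able to load a readable config.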

Slight issue with timing after cert-manager installed

Hey Chris -- on a Raspberry Pi cluster install I ended up increasing the sleep time so cert-manager could fully deploy before Longhorn.

  - name: wait 15s for cert-manager to fully deploy
    wait_for:
-     timeout: 15
+     timeout: 60

I think tuning the install timing may be challenging, given Go and the other requirements, but it may be a suggestion worth considering.

If you want some help implementing/testing, I'd be happy to do so. Either way, just noting that my cert-manager needed a bit more time. I set it to 60; it might only need 15... :)
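As an alternative to a longer fixed sleep, the wait could key off cert-manager's actual readiness. A minimal sketch (assuming the deployment is named cert-manager-webhook in the cert-manager namespace; this is not the repo's existing task):

```yaml
- name: Wait for cert-manager webhook to become Available
  command: >
    kubectl wait --namespace cert-manager
    --for=condition=Available deployment/cert-manager-webhook
    --timeout=300s
  changed_when: false
```

kubectl wait returns as soon as the condition is met, so fast machines don't pay the full timeout while slow RPi clusters get up to five minutes.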

Help needed - 1 Master node always fails

I'm running 6 VMs total in Proxmox (3 masters, 3 workers) and I keep getting a failure on a master.
I've killed and retried, set up brand-new blank VMs (Ubuntu 24.04), and can't figure out what I'm doing wrong.

Here is what displays in my terminal (error at the end):

server@k3s-admin:~/k3s-ansible-traefik-rancher$ ./deploy.sh
[WARNING]: Could not match supplied host pattern, ignoring: proxmox

PLAY [Prepare Proxmox cluster] *******************************************************************************************************************************************************************************************************
skipping: no hosts matched

PLAY [Prepare k3s nodes] *************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************************
ok: [192.168.4.103]
ok: [192.168.4.112]
ok: [192.168.4.111]
ok: [192.168.4.102]
ok: [192.168.4.101]
ok: [192.168.4.113]

TASK [lxc : Check for rc.local file] *************************************************************************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]
skipping: [192.168.4.111]
skipping: [192.168.4.112]
skipping: [192.168.4.113]

TASK [lxc : Create rc.local if needed] ***********************************************************************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]
skipping: [192.168.4.111]
skipping: [192.168.4.112]
skipping: [192.168.4.113]

TASK [lxc : Write rc.local file] *****************************************************************************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]
skipping: [192.168.4.111]
skipping: [192.168.4.112]
skipping: [192.168.4.113]

TASK [prereq : Set same timezone on every Server] ************************************************************************************************************************************************************************************
ok: [192.168.4.103]
ok: [192.168.4.102]
ok: [192.168.4.101]
ok: [192.168.4.111]
ok: [192.168.4.112]
ok: [192.168.4.113]

TASK [prereq : Set SELinux to disabled state] ****************************************************************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]
skipping: [192.168.4.111]
skipping: [192.168.4.112]
skipping: [192.168.4.113]

TASK [prereq : Enable IPv4 forwarding] ***********************************************************************************************************************************************************************************************
ok: [192.168.4.103]
ok: [192.168.4.102]
ok: [192.168.4.112]
ok: [192.168.4.111]
ok: [192.168.4.101]
ok: [192.168.4.113]

TASK [prereq : Enable IPv6 forwarding] ***********************************************************************************************************************************************************************************************
ok: [192.168.4.101]
ok: [192.168.4.102]
ok: [192.168.4.112]
ok: [192.168.4.103]
ok: [192.168.4.111]
ok: [192.168.4.113]

TASK [prereq : Enable IPv6 router advertisements] ************************************************************************************************************************************************************************************
ok: [192.168.4.101]
ok: [192.168.4.102]
ok: [192.168.4.103]
ok: [192.168.4.111]
ok: [192.168.4.112]
ok: [192.168.4.113]

TASK [prereq : Add br_netfilter to /etc/modules-load.d/] *****************************************************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]
skipping: [192.168.4.111]
skipping: [192.168.4.112]
skipping: [192.168.4.113]

TASK [prereq : Load br_netfilter] ****************************************************************************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]
skipping: [192.168.4.111]
skipping: [192.168.4.112]
skipping: [192.168.4.113]

TASK [prereq : Set bridge-nf-call-iptables (just to be sure)] ************************************************************************************************************************************************************************
skipping: [192.168.4.101] => (item=net.bridge.bridge-nf-call-iptables) 
skipping: [192.168.4.101] => (item=net.bridge.bridge-nf-call-ip6tables) 
skipping: [192.168.4.102] => (item=net.bridge.bridge-nf-call-iptables) 
skipping: [192.168.4.102] => (item=net.bridge.bridge-nf-call-ip6tables) 
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103] => (item=net.bridge.bridge-nf-call-iptables) 
skipping: [192.168.4.103] => (item=net.bridge.bridge-nf-call-ip6tables) 
skipping: [192.168.4.103]
skipping: [192.168.4.111] => (item=net.bridge.bridge-nf-call-iptables) 
skipping: [192.168.4.111] => (item=net.bridge.bridge-nf-call-ip6tables) 
skipping: [192.168.4.111]
skipping: [192.168.4.112] => (item=net.bridge.bridge-nf-call-iptables) 
skipping: [192.168.4.112] => (item=net.bridge.bridge-nf-call-ip6tables) 
skipping: [192.168.4.113] => (item=net.bridge.bridge-nf-call-iptables) 
skipping: [192.168.4.112]
skipping: [192.168.4.113] => (item=net.bridge.bridge-nf-call-ip6tables) 
skipping: [192.168.4.113]

TASK [prereq : Add /usr/local/bin to sudo secure_path] *******************************************************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]
skipping: [192.168.4.111]
skipping: [192.168.4.112]
skipping: [192.168.4.113]

TASK [download : Download k3s binary x64] ********************************************************************************************************************************************************************************************
changed: [192.168.4.102]
changed: [192.168.4.111]
changed: [192.168.4.101]
changed: [192.168.4.103]
changed: [192.168.4.112]
changed: [192.168.4.113]

TASK [download : Download k3s binary arm64] ******************************************************************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]
skipping: [192.168.4.111]
skipping: [192.168.4.112]
skipping: [192.168.4.113]

TASK [download : Download k3s binary armhf] ******************************************************************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]
skipping: [192.168.4.111]
skipping: [192.168.4.112]
skipping: [192.168.4.113]

TASK [raspberrypi : Test for raspberry pi /proc/cpuinfo] *****************************************************************************************************************************************************************************
ok: [192.168.4.103]
ok: [192.168.4.102]
ok: [192.168.4.112]
ok: [192.168.4.101]
ok: [192.168.4.113]
ok: [192.168.4.111]

TASK [raspberrypi : Test for raspberry pi /proc/device-tree/model] *******************************************************************************************************************************************************************
ok: [192.168.4.101]
ok: [192.168.4.102]
ok: [192.168.4.112]
ok: [192.168.4.111]
ok: [192.168.4.113]
ok: [192.168.4.103]

TASK [raspberrypi : Set raspberry_pi fact to true] ***********************************************************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]
skipping: [192.168.4.111]
skipping: [192.168.4.112]
skipping: [192.168.4.113]

TASK [raspberrypi : Set detected_distribution to Raspbian] ***************************************************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]
skipping: [192.168.4.112]
skipping: [192.168.4.111]
skipping: [192.168.4.113]

TASK [raspberrypi : Set detected_distribution to Raspbian (ARM64 on Debian Buster)] **************************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]
skipping: [192.168.4.111]
skipping: [192.168.4.112]
skipping: [192.168.4.113]

TASK [raspberrypi : Set detected_distribution_major_version] *************************************************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]
skipping: [192.168.4.111]
skipping: [192.168.4.112]
skipping: [192.168.4.113]

TASK [raspberrypi : Set detected_distribution to Raspbian (ARM64 on Debian Bullseye)] ************************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]
skipping: [192.168.4.111]
skipping: [192.168.4.112]
skipping: [192.168.4.113]

TASK [raspberrypi : Execute OS related tasks on the Raspberry Pi - setup] ************************************************************************************************************************************************************
skipping: [192.168.4.101] => (item=/home/server/k3s-ansible-traefik-rancher/roles/raspberrypi/tasks/setup/Ubuntu.yml) 
skipping: [192.168.4.102] => (item=/home/server/k3s-ansible-traefik-rancher/roles/raspberrypi/tasks/setup/Ubuntu.yml) 
skipping: [192.168.4.101]
skipping: [192.168.4.103] => (item=/home/server/k3s-ansible-traefik-rancher/roles/raspberrypi/tasks/setup/Ubuntu.yml) 
skipping: [192.168.4.102]
skipping: [192.168.4.103]
skipping: [192.168.4.111] => (item=/home/server/k3s-ansible-traefik-rancher/roles/raspberrypi/tasks/setup/Ubuntu.yml) 
skipping: [192.168.4.111]
skipping: [192.168.4.112] => (item=/home/server/k3s-ansible-traefik-rancher/roles/raspberrypi/tasks/setup/Ubuntu.yml) 
skipping: [192.168.4.112]
skipping: [192.168.4.113] => (item=/home/server/k3s-ansible-traefik-rancher/roles/raspberrypi/tasks/setup/Ubuntu.yml) 
skipping: [192.168.4.113]

TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *********************************************************************************************************************************************************************
skipping: [192.168.4.101] => (item=rancher) 
skipping: [192.168.4.101] => (item=rancher/k3s) 
skipping: [192.168.4.101]
skipping: [192.168.4.102] => (item=rancher) 
skipping: [192.168.4.102] => (item=rancher/k3s) 
skipping: [192.168.4.103] => (item=rancher) 
skipping: [192.168.4.103] => (item=rancher/k3s) 
skipping: [192.168.4.102]
skipping: [192.168.4.111] => (item=rancher) 
skipping: [192.168.4.111] => (item=rancher/k3s) 
skipping: [192.168.4.103]
skipping: [192.168.4.112] => (item=rancher) 
skipping: [192.168.4.111]
skipping: [192.168.4.112] => (item=rancher/k3s) 
skipping: [192.168.4.112]
skipping: [192.168.4.113] => (item=rancher) 
skipping: [192.168.4.113] => (item=rancher/k3s) 
skipping: [192.168.4.113]

TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***********************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]
skipping: [192.168.4.111]
skipping: [192.168.4.112]
skipping: [192.168.4.113]

PLAY [Setup k3s servers] *************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************************
ok: [192.168.4.103]
ok: [192.168.4.101]
ok: [192.168.4.102]

TASK [k3s_server : Stop k3s-init] ****************************************************************************************************************************************************************************************************
ok: [192.168.4.103]
ok: [192.168.4.102]
ok: [192.168.4.101]

TASK [k3s_server : Clean previous runs of k3s-init] **********************************************************************************************************************************************************************************
ok: [192.168.4.102]
ok: [192.168.4.103]
ok: [192.168.4.101]

TASK [k3s_server : Deploy K3s http_proxy conf] ***************************************************************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]

TASK [k3s_server : Deploy vip manifest] **********************************************************************************************************************************************************************************************
included: /home/server/k3s-ansible-traefik-rancher/roles/k3s_server/tasks/vip.yml for 192.168.4.101, 192.168.4.102, 192.168.4.103

TASK [k3s_server : Create manifests directory on first master] ***********************************************************************************************************************************************************************
skipping: [192.168.4.102]
skipping: [192.168.4.103]
changed: [192.168.4.101]

TASK [k3s_server : Download vip rbac manifest to first master] ***********************************************************************************************************************************************************************
skipping: [192.168.4.102]
skipping: [192.168.4.103]
changed: [192.168.4.101]

TASK [k3s_server : Copy vip manifest to first master] ********************************************************************************************************************************************************************************
skipping: [192.168.4.102]
skipping: [192.168.4.103]
changed: [192.168.4.101]

TASK [k3s_server : Deploy metallb manifest] ******************************************************************************************************************************************************************************************
included: /home/server/k3s-ansible-traefik-rancher/roles/k3s_server/tasks/metallb.yml for 192.168.4.101, 192.168.4.102, 192.168.4.103

TASK [k3s_server : Create manifests directory on first master] ***********************************************************************************************************************************************************************
skipping: [192.168.4.102]
skipping: [192.168.4.103]
ok: [192.168.4.101]

TASK [k3s_server : Download to first master: manifest for metallb-native] ************************************************************************************************************************************************************
skipping: [192.168.4.102]
skipping: [192.168.4.103]
changed: [192.168.4.101]

TASK [k3s_server : Set image versions in manifest for metallb-native] ****************************************************************************************************************************************************************
skipping: [192.168.4.102] => (item=metallb/speaker:v0.13.12 => metallb/speaker:v0.13.12) 
skipping: [192.168.4.102]
skipping: [192.168.4.103] => (item=metallb/speaker:v0.13.12 => metallb/speaker:v0.13.12) 
skipping: [192.168.4.103]
ok: [192.168.4.101] => (item=metallb/speaker:v0.13.12 => metallb/speaker:v0.13.12)

TASK [k3s_server : Init cluster inside the transient k3s-init service] ***************************************************************************************************************************************************************
changed: [192.168.4.101]
changed: [192.168.4.102]
changed: [192.168.4.103]

TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *************************************************************************************************************************************
FAILED - RETRYING: [192.168.4.102]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [192.168.4.101]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [192.168.4.103]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [192.168.4.103]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [192.168.4.102]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [192.168.4.103]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [192.168.4.102]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [192.168.4.103]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [192.168.4.102]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [192.168.4.103]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
FAILED - RETRYING: [192.168.4.102]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
FAILED - RETRYING: [192.168.4.103]: Verify that all nodes actually joined (check k3s-init.service if this fails) (15 retries left).
FAILED - RETRYING: [192.168.4.101]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [192.168.4.102]: Verify that all nodes actually joined (check k3s-init.service if this fails) (15 retries left).
FAILED - RETRYING: [192.168.4.103]: Verify that all nodes actually joined (check k3s-init.service if this fails) (14 retries left).
FAILED - RETRYING: [192.168.4.102]: Verify that all nodes actually joined (check k3s-init.service if this fails) (14 retries left).
FAILED - RETRYING: [192.168.4.101]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [192.168.4.103]: Verify that all nodes actually joined (check k3s-init.service if this fails) (13 retries left).
FAILED - RETRYING: [192.168.4.102]: Verify that all nodes actually joined (check k3s-init.service if this fails) (13 retries left).
FAILED - RETRYING: [192.168.4.101]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [192.168.4.103]: Verify that all nodes actually joined (check k3s-init.service if this fails) (12 retries left).
FAILED - RETRYING: [192.168.4.102]: Verify that all nodes actually joined (check k3s-init.service if this fails) (12 retries left).
FAILED - RETRYING: [192.168.4.101]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
FAILED - RETRYING: [192.168.4.103]: Verify that all nodes actually joined (check k3s-init.service if this fails) (11 retries left).
FAILED - RETRYING: [192.168.4.102]: Verify that all nodes actually joined (check k3s-init.service if this fails) (11 retries left).
FAILED - RETRYING: [192.168.4.101]: Verify that all nodes actually joined (check k3s-init.service if this fails) (15 retries left).
FAILED - RETRYING: [192.168.4.103]: Verify that all nodes actually joined (check k3s-init.service if this fails) (10 retries left).
FAILED - RETRYING: [192.168.4.102]: Verify that all nodes actually joined (check k3s-init.service if this fails) (10 retries left).
FAILED - RETRYING: [192.168.4.101]: Verify that all nodes actually joined (check k3s-init.service if this fails) (14 retries left).
FAILED - RETRYING: [192.168.4.103]: Verify that all nodes actually joined (check k3s-init.service if this fails) (9 retries left).
FAILED - RETRYING: [192.168.4.102]: Verify that all nodes actually joined (check k3s-init.service if this fails) (9 retries left).
FAILED - RETRYING: [192.168.4.101]: Verify that all nodes actually joined (check k3s-init.service if this fails) (13 retries left).
FAILED - RETRYING: [192.168.4.103]: Verify that all nodes actually joined (check k3s-init.service if this fails) (8 retries left).
ok: [192.168.4.101]
ok: [192.168.4.102]
ok: [192.168.4.103]

TASK [k3s_server : Save logs of k3s-init.service] ************************************************************************************************************************************************************************************
skipping: [192.168.4.101]
skipping: [192.168.4.102]
skipping: [192.168.4.103]

TASK [k3s_server : Kill the temporary service used for initialization] ***************************************************************************************************************************************************************
changed: [192.168.4.103]
changed: [192.168.4.101]
changed: [192.168.4.102]

TASK [k3s_server : Copy K3s service file] ********************************************************************************************************************************************************************************************
changed: [192.168.4.101]
changed: [192.168.4.102]
changed: [192.168.4.103]

TASK [k3s_server : Enable and check K3s service] *************************************************************************************************************************************************************************************
changed: [192.168.4.102]
changed: [192.168.4.101]
changed: [192.168.4.103]

TASK [k3s_server : Wait for node-token] **********************************************************************************************************************************************************************************************
ok: [192.168.4.102]
ok: [192.168.4.103]
ok: [192.168.4.101]

TASK [k3s_server : Register node-token file access mode] *****************************************************************************************************************************************************************************
ok: [192.168.4.103]
ok: [192.168.4.101]
ok: [192.168.4.102]

TASK [k3s_server : Change file access node-token] ************************************************************************************************************************************************************************************
changed: [192.168.4.101]
changed: [192.168.4.103]
changed: [192.168.4.102]

TASK [k3s_server : Read node-token from master] **************************************************************************************************************************************************************************************
ok: [192.168.4.103]
ok: [192.168.4.102]
ok: [192.168.4.101]

TASK [k3s_server : Store Master node-token] ******************************************************************************************************************************************************************************************
ok: [192.168.4.101]
ok: [192.168.4.102]
ok: [192.168.4.103]

TASK [k3s_server : Restore node-token file access] ***********************************************************************************************************************************************************************************
changed: [192.168.4.102]
changed: [192.168.4.101]
changed: [192.168.4.103]

TASK [k3s_server : Create directory .kube] *******************************************************************************************************************************************************************************************
ok: [192.168.4.102]
ok: [192.168.4.101]
ok: [192.168.4.103]

TASK [k3s_server : Copy config file to user home directory] **************************************************************************************************************************************************************************
changed: [192.168.4.101]
changed: [192.168.4.102]
changed: [192.168.4.103]

TASK [k3s_server : Configure kubectl cluster to https://192.168.4.50:6443] ***********************************************************************************************************************************************************
changed: [192.168.4.102]
changed: [192.168.4.103]
changed: [192.168.4.101]

TASK [k3s_server : Create kubectl symlink] *******************************************************************************************************************************************************************************************
ok: [192.168.4.102]
ok: [192.168.4.101]
ok: [192.168.4.103]

TASK [k3s_server : Create crictl symlink] ********************************************************************************************************************************************************************************************
ok: [192.168.4.101]
ok: [192.168.4.102]
ok: [192.168.4.103]

TASK [k3s_server : Get contents of manifests folder] *********************************************************************************************************************************************************************************
ok: [192.168.4.101]
ok: [192.168.4.102]
ok: [192.168.4.103]

TASK [k3s_server : Get sub dirs of manifests folder] *********************************************************************************************************************************************************************************
ok: [192.168.4.101]
ok: [192.168.4.102]
ok: [192.168.4.103]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***************************************************************************************************
changed: [192.168.4.101] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [192.168.4.103] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [192.168.4.102] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [192.168.4.103] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [192.168.4.102] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [192.168.4.101] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [192.168.4.102] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [192.168.4.101] => (item=/var/lib/rancher/k3s/server/manifests/metallb-crds.yaml)
changed: [192.168.4.103] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [192.168.4.102] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [192.168.4.103] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [192.168.4.101] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [192.168.4.102] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [192.168.4.103] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [192.168.4.101] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [192.168.4.101] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [192.168.4.101] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [192.168.4.101] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Setup k3s agents] **************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************************
ok: [192.168.4.113]
ok: [192.168.4.112]
ok: [192.168.4.111]

TASK [k3s_agent : Deploy K3s http_proxy conf] ****************************************************************************************************************************************************************************************
skipping: [192.168.4.111]
skipping: [192.168.4.112]
skipping: [192.168.4.113]

TASK [k3s_agent : Copy K3s service file] *********************************************************************************************************************************************************************************************
changed: [192.168.4.111]
changed: [192.168.4.112]
changed: [192.168.4.113]

TASK [k3s_agent : Enable and check K3s service] **************************************************************************************************************************************************************************************
changed: [192.168.4.113]
changed: [192.168.4.112]
changed: [192.168.4.111]

PLAY [Configure k3s cluster] *********************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************************
ok: [192.168.4.101]
ok: [192.168.4.102]
ok: [192.168.4.103]

TASK [k3s_server_post : Deploy metallb pool] *****************************************************************************************************************************************************************************************
included: /home/server/k3s-ansible-traefik-rancher/roles/k3s_server_post/tasks/metallb.yml for 192.168.4.101, 192.168.4.102, 192.168.4.103

TASK [k3s_server_post : Create manifests directory for temp configuration] ***********************************************************************************************************************************************************
changed: [192.168.4.101] => (item=192.168.4.101)
ok: [192.168.4.101] => (item=192.168.4.102)
ok: [192.168.4.101] => (item=192.168.4.103)

TASK [k3s_server_post : Copy metallb CRs manifest to first master] *******************************************************************************************************************************************************************
changed: [192.168.4.101] => (item=192.168.4.101)
ok: [192.168.4.101] => (item=192.168.4.102)
ok: [192.168.4.101] => (item=192.168.4.103)

TASK [k3s_server_post : Test metallb-system namespace] *******************************************************************************************************************************************************************************
ok: [192.168.4.101] => (item=192.168.4.101)
ok: [192.168.4.101] => (item=192.168.4.102)
ok: [192.168.4.101] => (item=192.168.4.103)

TASK [k3s_server_post : Wait for MetalLB resources] **********************************************************************************************************************************************************************************
ok: [192.168.4.101] => (item=controller)
failed: [192.168.4.101] (item=webhook service) => {"ansible_loop_var": "item", "changed": false, "cmd": ["k3s", "kubectl", "wait", "pod", "--namespace=metallb-system", "--selector=component=controller", "--for=jsonpath={.status.phase}=Running", "--timeout=120s"], "delta": "0:00:02.877238", "end": "2024-05-13 18:14:12.777332", "item": {"condition": "--for=jsonpath='{.status.phase}'=Running", "description": "webhook service", "resource": "pod", "selector": "component=controller"}, "msg": "non-zero return code", "rc": 1, "start": "2024-05-13 18:14:09.900094", "stderr": "error: Get \"https://127.0.0.1:6443/api/v1/namespaces/metallb-system/pods?labelSelector=component%3Dcontroller\": dial tcp 127.0.0.1:6443: connect: connection refused - error from a previous attempt: unexpected EOF", "stderr_lines": ["error: Get \"https://127.0.0.1:6443/api/v1/namespaces/metallb-system/pods?labelSelector=component%3Dcontroller\": dial tcp 127.0.0.1:6443: connect: connection refused - error from a previous attempt: unexpected EOF"], "stdout": "", "stdout_lines": []}
failed: [192.168.4.101] (item=pods in replica sets) => {"ansible_loop_var": "item", "changed": false, "cmd": ["k3s", "kubectl", "wait", "pod", "--namespace=metallb-system", "--selector=component=controller,app=metallb", "--for", "condition=Ready", "--timeout=120s"], "delta": "0:00:00.095547", "end": "2024-05-13 18:14:13.022334", "item": {"condition": "--for condition=Ready", "description": "pods in replica sets", "resource": "pod", "selector": "component=controller,app=metallb"}, "msg": "non-zero return code", "rc": 1, "start": "2024-05-13 18:14:12.926787", "stderr": "The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?", "stderr_lines": ["The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?"], "stdout": "", "stdout_lines": []}
failed: [192.168.4.101] (item=ready replicas of controller) => {"ansible_loop_var": "item", "changed": false, "cmd": ["k3s", "kubectl", "wait", "replicaset", "--namespace=metallb-system", "--selector=component=controller,app=metallb", "--for=jsonpath={.status.readyReplicas}=1", "--timeout=120s"], "delta": "0:00:00.082801", "end": "2024-05-13 18:14:13.253787", "item": {"condition": "--for=jsonpath='{.status.readyReplicas}'=1", "description": "ready replicas of controller", "resource": "replicaset", "selector": "component=controller,app=metallb"}, "msg": "non-zero return code", "rc": 1, "start": "2024-05-13 18:14:13.170986", "stderr": "The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?", "stderr_lines": ["The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?"], "stdout": "", "stdout_lines": []}
failed: [192.168.4.101] (item=fully labeled replicas of controller) => {"ansible_loop_var": "item", "changed": false, "cmd": ["k3s", "kubectl", "wait", "replicaset", "--namespace=metallb-system", "--selector=component=controller,app=metallb", "--for=jsonpath={.status.fullyLabeledReplicas}=1", "--timeout=120s"], "delta": "0:00:00.081092", "end": "2024-05-13 18:14:13.486861", "item": {"condition": "--for=jsonpath='{.status.fullyLabeledReplicas}'=1", "description": "fully labeled replicas of controller", "resource": "replicaset", "selector": "component=controller,app=metallb"}, "msg": "non-zero return code", "rc": 1, "start": "2024-05-13 18:14:13.405769", "stderr": "The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?", "stderr_lines": ["The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?"], "stdout": "", "stdout_lines": []}
failed: [192.168.4.101] (item=available replicas of controller) => {"ansible_loop_var": "item", "changed": false, "cmd": ["k3s", "kubectl", "wait", "replicaset", "--namespace=metallb-system", "--selector=component=controller,app=metallb", "--for=jsonpath={.status.availableReplicas}=1", "--timeout=120s"], "delta": "0:00:00.085194", "end": "2024-05-13 18:14:13.722874", "item": {"condition": "--for=jsonpath='{.status.availableReplicas}'=1", "description": "available replicas of controller", "resource": "replicaset", "selector": "component=controller,app=metallb"}, "msg": "non-zero return code", "rc": 1, "start": "2024-05-13 18:14:13.637680", "stderr": "The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?", "stderr_lines": ["The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?"], "stdout": "", "stdout_lines": []}

NO MORE HOSTS LEFT *******************************************************************************************************************************************************************************************************************

PLAY RECAP ***************************************************************************************************************************************************************************************************************************
192.168.4.101              : ok=43   changed=16   unreachable=0    failed=1    skipped=20   rescued=0    ignored=0   
192.168.4.102              : ok=34   changed=10   unreachable=0    failed=0    skipped=26   rescued=0    ignored=0   
192.168.4.103              : ok=34   changed=10   unreachable=0    failed=0    skipped=26   rescued=0    ignored=0   
192.168.4.111              : ok=11   changed=3    unreachable=0    failed=0    skipped=19   rescued=0    ignored=0   
192.168.4.112              : ok=11   changed=3    unreachable=0    failed=0    skipped=19   rescued=0    ignored=0   
192.168.4.113              : ok=11   changed=3    unreachable=0    failed=0    skipped=19   rescued=0    ignored=0 
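
The "connection refused" / "unexpected EOF" errors on 127.0.0.1:6443 suggest the k3s API server on the first master stopped answering mid-play rather than a problem with the MetalLB manifests themselves. A minimal sketch for narrowing this down (assumes bash; run it on 192.168.4.101, or pass the VIP as the first argument):

```shell
# Probe the API endpoint the failing "kubectl wait" tasks were talking to.
# Uses bash's /dev/tcp so no extra tools are needed; always exits 0.
HOST="${1:-127.0.0.1}"
PORT="${2:-6443}"

if timeout 3 bash -c ">/dev/tcp/${HOST}/${PORT}" 2>/dev/null; then
  echo "API port ${HOST}:${PORT} is reachable"
else
  # If this prints, check the service itself on the master:
  #   sudo systemctl status k3s
  #   sudo journalctl -u k3s --since "10 minutes ago"
  echo "API port ${HOST}:${PORT} is NOT reachable"
fi
```

If the service turns out to be crash-looping, the journalctl output around the time of the failed task is usually what pinpoints the cause (OOM, etcd quorum, bad flag, etc.).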

My hosts.ini file:

[master]
192.168.4.101
192.168.4.102
192.168.4.103


[node]
192.168.4.111
192.168.4.112
192.168.4.113

# only required if proxmox_lxc_configure: true
# must contain all proxmox instances that have a master or worker node
# [proxmox]
# 192.168.30.43

[k3s_cluster:children]
master
node

My all.yml file:

---
k3s_version: v1.26.10+k3s1
ansible_user: server
systemd_dir: /etc/systemd/system

# set your timezone
system_timezone: "America/Chicago"

# interface which will be used for flannel
# debian is usually eth0, ubuntu could be either that or ens18, varies by OS. check with `ip a` in terminal
flannel_iface: "eth0"

# Retry count for checking that all nodes have joined the cluster. Uncomment and set this higher than 20
# if your cluster doesn't fully join before the playbook times out.
#retry_count: 40

# apiserver_endpoint is virtual ip-address which will be configured on each master
apiserver_endpoint: "192.168.4.50"

# k3s_token is required so masters can talk together securely
k3s_token: "secrettoken"

# The IP on which the node is reachable in the cluster.
# Here, a sensible default is provided, you can still override
# it for each of your hosts, though.
k3s_node_ip: '{{ ansible_facts[flannel_iface]["ipv4"]["address"] }}'

# Disable the taint manually by setting: k3s_master_taint = false
# switch which line is commented below to enable the taint on your masters when you have agent nodes
#k3s_master_taint: "{{ true if groups['node'] | default([]) | length >= 1 else false }}"

k3s_master_taint: false

# these arguments are recommended for servers as well as agents:
extra_args: >-
  --flannel-iface={{ flannel_iface }}
  --node-ip={{ k3s_node_ip }}

# change these to your liking; the only required ones are --disable servicelb and --disable traefik (this playbook deploys its own traefik)
# If you'd rather use the traefik instance bundled with k3s instead of deploying it with helm afterwards, remove the --disable traefik flag
# and set the var 'deploy_traefik: false' down below
# -----------------------
# 7-24-2022: added additional args for prometheus monitoring following Tim's tutorial on that, If you don't plan to do monitoring they can be removed
# "--kube-controller-manager-arg bind-address=0.0.0.0 --kube-proxy-arg metrics-bind-address=0.0.0.0 --kube-scheduler-arg bind-address=0.0.0.0 --etcd-expose-metrics true --kubelet-arg containerd=/run/k3s/containerd/containerd.sock"
extra_server_args: >-
  {{ extra_args }}
  {{ '--node-taint node-role.kubernetes.io/master=true:NoSchedule' if k3s_master_taint else '' }}
  --disable servicelb
  --disable traefik
  --write-kubeconfig-mode 644
  --kube-controller-manager-arg bind-address=0.0.0.0
  --kube-proxy-arg metrics-bind-address=0.0.0.0
  --kube-scheduler-arg bind-address=0.0.0.0
  --etcd-expose-metrics true
  --kubelet-arg containerd=/run/k3s/containerd/containerd.sock

extra_agent_args: >-
  {{ extra_args }}

# image tag for kube-vip
kube_vip_tag_version: "v0.6.3"

# metallb type frr or native
metal_lb_type: "native"

# metallb mode layer2 or bgp
metal_lb_mode: "layer2"

# bgp options
# metal_lb_bgp_my_asn: "64513"
# metal_lb_bgp_peer_asn: "64512"
# metal_lb_bgp_peer_address: "192.168.30.1"

# image tag for metal lb
#metal_lb_frr_tag_version: "v7.5.1"
metal_lb_speaker_tag_version: "v0.13.12"
metal_lb_controller_tag_version: "v0.13.12"

# metallb ip range for load balancer
metal_lb_ip_range: "192.168.4.60-192.168.4.80"

# Only enable if your nodes are proxmox LXC nodes, make sure to configure your proxmox nodes
# in your hosts.ini file.
# Please read https://gist.github.com/triangletodd/02f595cd4c0dc9aac5f7763ca2264185 before using this.
# Most notably, your containers must be privileged, and must not have nesting set to true.
# Please note this script disables most of the security of lxc containers, with the trade-off being that lxc
# containers are significantly more resource efficient compared to full VMs.
# Mixing and matching VMs and lxc containers is not supported, ymmv if you want to do this.
# I would only really recommend using this if you have particularly low-powered proxmox nodes where the overhead of
# VMs would use a significant portion of your available resources.
proxmox_lxc_configure: false
# the user that you would use to ssh into the host, for example if you run ssh some-user@my-proxmox-host,
# set this value to some-user
proxmox_lxc_ssh_user: server
# the unique proxmox ids for all of the containers in the cluster, both worker and master nodes
proxmox_lxc_ct_ids:
  - 401
  - 402
  - 403
  - 411
  - 412
  - 413


#deploy traefik? this deploys both an internal (default ingress) and external instance of traefik, external = traefik-external ingressClass
deploy_traefik: true

#IPs from the metalLB range above which will be used by traefik
#--IMPORTANT-- These IPs NEED to be contained in the pool provided to metalLB above. Usually I use the first ones in that range
# internal and external instance respectively - port forward from your firewall to the external instance
traefik_int_endpoint_ip: "192.168.4.60"
traefik_ext_endpoint_ip: "192.168.4.61"

#set these in your local DNS server (e.g. Pi-hole, pfSense, etc.) pointing to the IPs just above.
traefik_int_dash_dns_name: "traefik.trever.cloud"
traefik_ext_dash_dns_name: "traefik-ext.trever.cloud"

#number of traefik pods you want running
traefik_replicas: 1

#deploy rancher?
deploy_rancher: true
#number of replicas you want for rancher's pods
rancher_replicas: 1

#rancher dns name
rancher_dns_name: "rancher.trever.cloud"

#version of cert-manager to deploy
cert_manager_ver: "v1.13.2"

#set this to true and put your ca cert and internal-ca issuer info in the variables below
#use_internal_ca: false

#issuer_email: "[email protected]"
#issuer_server_addr: "https://ca.yourdomain.lan/directory"

#internal_ca_cert: |
#  -----BEGIN CERTIFICATE-----
# <cert data here>
#  -----END CERTIFICATE-----


# Only enable this if you have set up your own container registry to act as a mirror / pull-through cache
# (harbor / nexus / docker's official registry / etc).
# Can be beneficial for larger dev/test environments (for example if you're getting rate limited by docker hub),
# or air-gapped environments where your nodes don't have internet access after the initial setup
# (which is still needed for downloading the k3s binary and such).
# k3s's documentation about private registries here: https://docs.k3s.io/installation/private-registry
custom_registries: false
# The registries can be authenticated or anonymous, depending on your registry server configuration.
# If they allow anonymous access, simply remove the following bit from custom_registries_yaml
#   configs:
#     "registry.domain.com":
#       auth:
#         username: yourusername
#         password: yourpassword
# The following is an example that pulls all images used in this playbook through your private registries.
# It also allows you to pull your own images from your private registry, without having to use imagePullSecrets
# in your deployments.
# If all you need is your own images and you don't care about caching the docker/quay/ghcr.io images,
# you can just remove those from the mirrors: section.
custom_registries_yaml: |
  mirrors:
    docker.io:
      endpoint:
        - "https://registry.domain.com/v2/dockerhub"
    quay.io:
      endpoint:
        - "https://registry.domain.com/v2/quayio"
    ghcr.io:
      endpoint:
        - "https://registry.domain.com/v2/ghcrio"
    registry.domain.com:
      endpoint:
        - "https://registry.domain.com"

  configs:
    "registry.domain.com":
      auth:
        username: yourusername
        password: yourpassword

# Only enable and configure these if you access the internet through a proxy
# proxy_env:
#   HTTP_PROXY: "http://proxy.domain.local:3128"
#   HTTPS_PROXY: "http://proxy.domain.local:3128"
#   NO_PROXY: "*.domain.local,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
