
exocode commented on May 25, 2024

I have made some debugging progress: somehow agent-3 fails to install. I do not know how agents are provisioned; in my k3os-novice view I assume that agent-3 is still recorded as orphaned in some database and was not "checked out" correctly.

I think k3os is not installed on agent-3 at all. When I SSH into my control planes I see different folders than on agent-3: all other agents (agent-0, 1, 2, 4, 5) have the same folder structure as the control planes (e.g. a /k3os folder in the root), but agent-3 does not.
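A quick way to compare this across all nodes is a loop over SSH (the hostnames below are placeholders; substitute your servers' IPs or SSH aliases):

```shell
# Hypothetical hostnames; substitute your servers' IPs or SSH aliases.
for host in agent-0 agent-1 agent-2 agent-3 agent-4 agent-5; do
  printf '%s: ' "$host"
  # Check whether the k3os root folder exists on each node
  ssh "root@$host" 'test -d /k3os && echo "/k3os present" || echo "/k3os MISSING"'
done
```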

Another observation: when I run terraform apply and the failed/tainted node gets replaced, Terraform stops immediately with the error below.

When I increase the worker count and new nodes are added, I can see a progress state, i.e. Terraform waits while the new workers come up; but it does not wait when replacing nodes.
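If the goal is to force a clean rebuild of the stuck server, replacement can also be triggered explicitly (a sketch; the resource address matches the log below):

```shell
# Mark the stuck instance for recreation, then apply.
terraform taint 'hcloud_server.agents[3]'
terraform apply

# On Terraform >= 0.15.2 the same can be done in one step:
terraform apply -replace='hcloud_server.agents[3]'
```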

Another observation: I scaled down to 3 worker agents (agent-0, agent-1, agent-2) and applied, then scaled back up to 6 agents (adding agent-3, agent-4, agent-5).

agent-4 and agent-5 spin up as they should, but agent-3 still does not install k3os; it gets skipped in the installation process.
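One thing worth noting about why the same node keeps failing: with Terraform's count indexing, a 6 → 3 → 6 rescale destroys and later recreates exactly the same indices and hostnames, so any stale per-hostname state (for example an old node registration for agent-3) can collide with the freshly created server. A minimal sketch of the index reuse:

```shell
# Names present at count=6 but not at count=3; these indices are
# recreated with identical hostnames on the later upscale.
for i in 0 1 2 3 4 5; do echo "agent-$i"; done > all_agents.txt
for i in 0 1 2; do echo "agent-$i"; done > kept_agents.txt
reused=$(comm -23 all_agents.txt kept_agents.txt)
echo "$reused"
```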

Here is the complete output of the rescale from 3 to 6 agents; you can see that agent-3 is skipped. In the UI, agent-3 also shows no installation progress, while all the others come up and run fine:

Where can I delete the configuration of agent-3? Maybe on the control-plane nodes? Is it stored somewhere in etcd?
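Assuming the cluster still holds a node object for the old agent-3 (k3s keeps node registrations in its datastore, which is etcd when running HA), removing the stale registration from a control-plane node might look like this (a sketch, not verified against this setup):

```shell
# On a control-plane node: list nodes and look for a stale agent-3 entry
kubectl get nodes

# Remove the stale node object so a fresh agent-3 can re-register
kubectl delete node agent-3
```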

I know, that is a lot of stuff... but I want to understand why this specific node is skipped during installation.

(Screenshot: 2022-02-10, 23:04)

hcloud_server.agents[5]: Creating...
hcloud_server.agents[3]: Creating...  ################### AGENT-3
hcloud_server.agents[4]: Creating...
hcloud_server.agents[3]: Still creating... [10s elapsed]  ################### AGENT-3
hcloud_server.agents[5]: Still creating... [10s elapsed]
hcloud_server.agents[4]: Still creating... [10s elapsed]
hcloud_server.agents[5]: Provisioning with 'file'...
hcloud_server.agents[4]: Provisioning with 'file'...
hcloud_server.agents[4]: Still creating... [20s elapsed]
hcloud_server.agents[5]: Still creating... [20s elapsed]
hcloud_server.agents[5]: Still creating... [30s elapsed]
hcloud_server.agents[4]: Still creating... [30s elapsed]
hcloud_server.agents[4]: Still creating... [40s elapsed]
hcloud_server.agents[5]: Still creating... [40s elapsed]
hcloud_server.agents[4]: Provisioning with 'remote-exec'...
hcloud_server.agents[4] (remote-exec): Connecting to remote host via SSH...
hcloud_server.agents[4] (remote-exec):   Host: 78.47.82.149
hcloud_server.agents[4] (remote-exec):   User: root
hcloud_server.agents[4] (remote-exec):   Password: false
hcloud_server.agents[4] (remote-exec):   Private key: true
hcloud_server.agents[4] (remote-exec):   Certificate: false
hcloud_server.agents[4] (remote-exec):   SSH Agent: true
hcloud_server.agents[4] (remote-exec):   Checking Host Key: false
hcloud_server.agents[4] (remote-exec):   Target Platform: unix
hcloud_server.agents[4] (remote-exec): Connected!
hcloud_server.agents[4] (remote-exec): Reading package lists... 0%
hcloud_server.agents[5]: Provisioning with 'remote-exec'...
hcloud_server.agents[5] (remote-exec): Connecting to remote host via SSH...
hcloud_server.agents[5] (remote-exec):   Host: 78.46.194.159
hcloud_server.agents[5] (remote-exec):   User: root
hcloud_server.agents[5] (remote-exec):   Password: false
hcloud_server.agents[5] (remote-exec):   Private key: true
hcloud_server.agents[5] (remote-exec):   Certificate: false
hcloud_server.agents[5] (remote-exec):   SSH Agent: true
hcloud_server.agents[5] (remote-exec):   Checking Host Key: false
hcloud_server.agents[5] (remote-exec):   Target Platform: unix
hcloud_server.agents[4] (remote-exec): Reading package lists... 0%
hcloud_server.agents[4] (remote-exec): Reading package lists... 16%
hcloud_server.agents[4] (remote-exec): Reading package lists... Done
hcloud_server.agents[4] (remote-exec): Building dependency tree... 0%
hcloud_server.agents[4] (remote-exec): Building dependency tree... 0%
hcloud_server.agents[4] (remote-exec): Building dependency tree... 50%
hcloud_server.agents[4] (remote-exec): Building dependency tree... 50%
hcloud_server.agents[4] (remote-exec): Building dependency tree... Done
hcloud_server.agents[4] (remote-exec): Reading state information... 0%
hcloud_server.agents[4] (remote-exec): Reading state information... 0%
hcloud_server.agents[4] (remote-exec): Reading state information... Done
hcloud_server.agents[4] (remote-exec): mtools is already the newest version (4.0.26-1).
hcloud_server.agents[4] (remote-exec): The following additional packages will be installed:
hcloud_server.agents[4] (remote-exec):   grub-efi-amd64 grub-efi-amd64-bin
hcloud_server.agents[4] (remote-exec):   grub-efi-amd64-signed libburn4
hcloud_server.agents[4] (remote-exec):   libisoburn1 libisofs6 libjte2
hcloud_server.agents[4] (remote-exec):   shim-helpers-amd64-signed
hcloud_server.agents[4] (remote-exec):   shim-signed shim-signed-common
hcloud_server.agents[4] (remote-exec):   shim-unsigned
hcloud_server.agents[4] (remote-exec): Suggested packages:
hcloud_server.agents[4] (remote-exec):   desktop-base xorriso-tcltk jigit
hcloud_server.agents[4] (remote-exec):   cdck
hcloud_server.agents[4] (remote-exec): Recommended packages:
hcloud_server.agents[4] (remote-exec):   secureboot-db
hcloud_server.agents[5] (remote-exec): Connected!
hcloud_server.agents[4] (remote-exec): The following NEW packages will be installed:
hcloud_server.agents[4] (remote-exec):   grub-efi grub-efi-amd64
hcloud_server.agents[4] (remote-exec):   grub-efi-amd64-bin
hcloud_server.agents[4] (remote-exec):   grub-efi-amd64-signed grub-pc-bin
hcloud_server.agents[4] (remote-exec):   libburn4 libisoburn1 libisofs6
hcloud_server.agents[4] (remote-exec):   libjte2 shim-helpers-amd64-signed
hcloud_server.agents[4] (remote-exec):   shim-signed shim-signed-common
hcloud_server.agents[4] (remote-exec):   shim-unsigned xorriso
hcloud_server.agents[4] (remote-exec): 0 upgraded, 14 newly installed, 0 to remove and 0 not upgraded.
hcloud_server.agents[4] (remote-exec): Need to get 4,346 kB of archives.
hcloud_server.agents[4] (remote-exec): After this operation, 24.3 MB of additional disk space will be used.
hcloud_server.agents[4] (remote-exec):
hcloud_server.agents[4] (remote-exec): 0% [Working]
hcloud_server.agents[4] (remote-exec): Get:1 http://mirror.hetzner.com/debian/packages bullseye/main amd64 grub-efi-amd64-bin amd64 2.04-20 [699 kB]
hcloud_server.agents[4] (remote-exec):
hcloud_server.agents[4] (remote-exec): 0% [1 grub-efi-amd64-bin 14.2 kB/699 kB
hcloud_server.agents[4] (remote-exec): 14% [Working]
hcloud_server.agents[4] (remote-exec): Get:2 http://mirror.hetzner.com/debian/packages bullseye/main amd64 grub-efi-amd64 amd64 2.04-20 [39.8 kB]
hcloud_server.agents[4] (remote-exec):
hcloud_server.agents[4] (remote-exec): 15% [2 grub-efi-amd64 21.3 kB/39.8 kB 5
hcloud_server.agents[4] (remote-exec): 16% [Waiting for headers]
hcloud_server.agents[4] (remote-exec): Get:3 http://mirror.hetzner.com/debian/packages bullseye/main amd64 grub-efi amd64 2.04-20 [2,536 B]
hcloud_server.agents[4] (remote-exec):
hcloud_server.agents[4] (remote-exec): 18% [Waiting for headers]
hcloud_server.agents[4] (remote-exec): Get:4 http://mirror.hetzner.com/debian/packages bullseye/main amd64 grub-efi-amd64-signed amd64 1+2.04+20 [469 kB]
hcloud_server.agents[4] (remote-exec):
hcloud_server.agents[4] (remote-exec): 19% [4 grub-efi-amd64-signed 44.0 kB/46
hcloud_server.agents[4] (remote-exec): 28% [Working]
hcloud_server.agents[4] (remote-exec): Get:5 http://mirror.hetzner.com/debian/packages bullseye/main amd64 grub-pc-bin amd64 2.04-20 [971 kB]
hcloud_server.agents[4] (remote-exec):
hcloud_server.agents[4] (remote-exec): 29% [5 grub-pc-bin 33.2 kB/971 kB 3%]
hcloud_server.agents[4] (remote-exec): 47% [Waiting for headers]
hcloud_server.agents[4] (remote-exec): Get:6 http://mirror.hetzner.com/debian/packages bullseye/main amd64 libburn4 amd64 1.5.2-1 [165 kB]
hcloud_server.agents[4] (remote-exec):
hcloud_server.agents[4] (remote-exec): 48% [6 libburn4 29.6 kB/165 kB 18%]
hcloud_server.agents[4] (remote-exec): 52% [Waiting for headers]
hcloud_server.agents[4] (remote-exec): Get:7 http://mirror.hetzner.com/debian/packages bullseye/main amd64 libjte2 amd64 1.22-3 [30.0 kB]
hcloud_server.agents[4] (remote-exec):
hcloud_server.agents[4] (remote-exec): 52% [7 libjte2 30.0 kB/30.0 kB 100%]
hcloud_server.agents[4] (remote-exec): 54% [Waiting for headers]
hcloud_server.agents[4] (remote-exec): Get:8 http://mirror.hetzner.com/debian/packages bullseye/main amd64 libisofs6 amd64 1.5.2-1 [205 kB]
hcloud_server.agents[4] (remote-exec):
hcloud_server.agents[4] (remote-exec): 55% [8 libisofs6 46.5 kB/205 kB 23%]
hcloud_server.agents[4] (remote-exec): 59% [Waiting for headers]
hcloud_server.agents[4] (remote-exec): Get:9 http://mirror.hetzner.com/debian/packages bullseye/main amd64 libisoburn1 amd64 1.5.2-1 [391 kB]
hcloud_server.agents[4] (remote-exec):
hcloud_server.agents[4] (remote-exec): 60% [9 libisoburn1 38.1 kB/391 kB 10%]
hcloud_server.agents[4] (remote-exec): 68% [Waiting for headers]
hcloud_server.agents[4] (remote-exec): Get:10 http://mirror.hetzner.com/debian/packages bullseye/main amd64 shim-unsigned amd64 15.4-7 [431 kB]
hcloud_server.agents[4] (remote-exec):
hcloud_server.agents[4] (remote-exec): 68% [10 shim-unsigned 40.0 kB/431 kB 9%
hcloud_server.agents[4] (remote-exec): 77% [Waiting for headers]
hcloud_server.agents[4] (remote-exec): Get:11 http://mirror.hetzner.com/debian/packages bullseye/main amd64 shim-helpers-amd64-signed amd64 1+15.4+7 [298 kB]
hcloud_server.agents[4] (remote-exec):
hcloud_server.agents[4] (remote-exec): 78% [11 shim-helpers-amd64-signed 37.8
hcloud_server.agents[4] (remote-exec): 84% [Waiting for headers]
hcloud_server.agents[4] (remote-exec): Get:12 http://mirror.hetzner.com/debian/packages bullseye/main amd64 shim-signed-common all 1.38+15.4-7 [13.6 kB]
hcloud_server.agents[4] (remote-exec):
hcloud_server.agents[4] (remote-exec): 84% [12 shim-signed-common 13.6 kB/13.6
hcloud_server.agents[4] (remote-exec): 86% [Waiting for headers]
hcloud_server.agents[4] (remote-exec): Get:13 http://mirror.hetzner.com/debian/packages bullseye/main amd64 shim-signed amd64 1.38+15.4-7 [320 kB]
hcloud_server.agents[4] (remote-exec):
hcloud_server.agents[4] (remote-exec): 87% [13 shim-signed 53.8 kB/320 kB 17%]
hcloud_server.agents[4] (remote-exec): 93% [Waiting for headers]
hcloud_server.agents[4] (remote-exec): Get:14 http://mirror.hetzner.com/debian/packages bullseye/main amd64 xorriso amd64 1.5.2-1 [311 kB]
hcloud_server.agents[4] (remote-exec):
hcloud_server.agents[4] (remote-exec): 94% [14 xorriso 61.7 kB/311 kB 20%]
hcloud_server.agents[4] (remote-exec): 100% [Working]
hcloud_server.agents[4] (remote-exec): Fetched 4,346 kB in 0s (14.3 MB/s)
hcloud_server.agents[5] (remote-exec): Reading package lists... 0%
hcloud_server.agents[4] (remote-exec): Preconfiguring packages ...
                                       Selecting previously unselected package grub-efi-amd64-bin.
hcloud_server.agents[4] (remote-exec): (Reading database ...
hcloud_server.agents[4] (remote-exec): (Reading database ... 5%
hcloud_server.agents[4] (remote-exec): (Reading database ... 10%
hcloud_server.agents[4] (remote-exec): (Reading database ... 15%
hcloud_server.agents[4] (remote-exec): (Reading database ... 20%
hcloud_server.agents[4] (remote-exec): (Reading database ... 25%
hcloud_server.agents[4] (remote-exec): (Reading database ... 30%
hcloud_server.agents[4] (remote-exec): (Reading database ... 35%
hcloud_server.agents[4] (remote-exec): (Reading database ... 40%
hcloud_server.agents[4] (remote-exec): (Reading database ... 45%
hcloud_server.agents[4] (remote-exec): (Reading database ... 50%
hcloud_server.agents[4] (remote-exec): (Reading database ... 55%
hcloud_server.agents[4] (remote-exec): (Reading database ... 60%
hcloud_server.agents[4] (remote-exec): (Reading database ... 65%
hcloud_server.agents[4] (remote-exec): (Reading database ... 70%
hcloud_server.agents[4] (remote-exec): (Reading database ... 75%
hcloud_server.agents[4] (remote-exec): (Reading database ... 80%
hcloud_server.agents[4] (remote-exec): (Reading database ... 85%
hcloud_server.agents[4] (remote-exec): (Reading database ... 90%
hcloud_server.agents[4] (remote-exec): (Reading database ... 95%
hcloud_server.agents[4] (remote-exec): (Reading database ... 100%
hcloud_server.agents[4] (remote-exec): (Reading database ... 62163 files and directories currently installed.)
hcloud_server.agents[4] (remote-exec): Preparing to unpack .../00-grub-efi-amd64-bin_2.04-20_amd64.deb ...
Progress: [  2%] [..................]  Unpacking grub-efi-amd64-bin (2.04-20) ...
Progress: [  4%] [..................]  Selecting previously unselected package grub-efi-amd64.
hcloud_server.agents[4] (remote-exec): Preparing to unpack .../01-grub-efi-amd64_2.04-20_amd64.deb ...
Progress: [  5%] [..................]  Unpacking grub-efi-amd64 (2.04-20) ...
Progress: [  7%] [#.................]  Selecting previously unselected package grub-efi.
hcloud_server.agents[4] (remote-exec): Preparing to unpack .../02-grub-efi_2.04-20_amd64.deb ...
Progress: [  9%] [#.................]  Unpacking grub-efi (2.04-20) ...
Progress: [ 11%] [#.................]  Selecting previously unselected package grub-efi-amd64-signed.
hcloud_server.agents[4] (remote-exec): Preparing to unpack .../03-grub-efi-amd64-signed_1+2.04+20_amd64.deb ...
Progress: [ 12%] [##................]  Unpacking grub-efi-amd64-signed (1+2.04+20) ...
Progress: [ 14%] [##................]  Selecting previously unselected package grub-pc-bin.
hcloud_server.agents[4] (remote-exec): Preparing to unpack .../04-grub-pc-bin_2.04-20_amd64.deb ...
Progress: [ 16%] [##................]  Unpacking grub-pc-bin (2.04-20) ...
Progress: [ 18%] [###...............]  Selecting previously unselected package libburn4:amd64.
hcloud_server.agents[4] (remote-exec): Preparing to unpack .../05-libburn4_1.5.2-1_amd64.deb ...
Progress: [ 19%] [###...............]  Unpacking libburn4:amd64 (1.5.2-1) ...
Progress: [ 21%] [###...............]  Selecting previously unselected package libjte2:amd64.
hcloud_server.agents[4] (remote-exec): Preparing to unpack .../06-libjte2_1.22-3_amd64.deb ...
Progress: [ 23%] [####..............]  Unpacking libjte2:amd64 (1.22-3) ...
Progress: [ 25%] [####..............]  Selecting previously unselected package libisofs6:amd64.
hcloud_server.agents[4] (remote-exec): Preparing to unpack .../07-libisofs6_1.5.2-1_amd64.deb ...
Progress: [ 26%] [####..............]  Unpacking libisofs6:amd64 (1.5.2-1) ...
Progress: [ 28%] [#####.............]  Selecting previously unselected package libisoburn1:amd64.
hcloud_server.agents[4] (remote-exec): Preparing to unpack .../08-libisoburn1_1.5.2-1_amd64.deb ...
Progress: [ 30%] [#####.............]  Unpacking libisoburn1:amd64 (1.5.2-1) ...
Progress: [ 32%] [#####.............]  Selecting previously unselected package shim-unsigned.
hcloud_server.agents[4] (remote-exec): Preparing to unpack .../09-shim-unsigned_15.4-7_amd64.deb ...
Progress: [ 33%] [#####.............]  Unpacking shim-unsigned (15.4-7) ...
Progress: [ 35%] [######............]  Selecting previously unselected package shim-helpers-amd64-signed.
hcloud_server.agents[4] (remote-exec): Preparing to unpack .../10-shim-helpers-amd64-signed_1+15.4+7_amd64.deb ...
Progress: [ 37%] [######............]  Unpacking shim-helpers-amd64-signed (1+15.4+7) ...
Progress: [ 39%] [######............]  Selecting previously unselected package shim-signed-common.
hcloud_server.agents[4] (remote-exec): Preparing to unpack .../11-shim-signed-common_1.38+15.4-7_all.deb ...
Progress: [ 40%] [#######...........]  Unpacking shim-signed-common (1.38+15.4-7) ...
Progress: [ 42%] [#######...........]  Selecting previously unselected package shim-signed:amd64.
hcloud_server.agents[4] (remote-exec): Preparing to unpack .../12-shim-signed_1.38+15.4-7_amd64.deb ...
Progress: [ 44%] [#######...........]  Unpacking shim-signed:amd64 (1.38+15.4-7) ...
Progress: [ 46%] [########..........]  Selecting previously unselected package xorriso.
hcloud_server.agents[4] (remote-exec): Preparing to unpack .../13-xorriso_1.5.2-1_amd64.deb ...
Progress: [ 47%] [########..........]  Unpacking xorriso (1.5.2-1) ...
Progress: [ 49%] [########..........]  Setting up grub-efi-amd64-signed (1+2.04+20) ...
Progress: [ 53%] [#########.........]  Setting up libjte2:amd64 (1.22-3) ...
Progress: [ 56%] [##########........]  Setting up grub-pc-bin (2.04-20) ...
Progress: [ 60%] [##########........]  Setting up shim-signed-common (1.38+15.4-7) ...
hcloud_server.agents[5] (remote-exec): Reading package lists... 0%
hcloud_server.agents[5] (remote-exec): Reading package lists... 16%
hcloud_server.agents[5] (remote-exec): Reading package lists... Done
Progress: [ 61%] [###########.......]
hcloud_server.agents[4] (remote-exec): No DKMS packages installed: not changing Secure Boot validation state.
Progress: [ 63%] [###########.......]  Setting up libburn4:amd64 (1.5.2-1) ...
Progress: [ 67%] [###########.......]  Setting up grub-efi-amd64-bin (2.04-20) ...
Progress: [ 70%] [############......]  Setting up shim-unsigned (15.4-7) ...
Progress: [ 74%] [#############.....]  Setting up libisofs6:amd64 (1.5.2-1) ...
Progress: [ 77%] [#############.....]  Setting up grub-efi-amd64 (2.04-20) ...
hcloud_server.agents[5] (remote-exec): Building dependency tree... 0%
hcloud_server.agents[5] (remote-exec): Building dependency tree... 0%
hcloud_server.agents[5] (remote-exec): Building dependency tree... 50%
hcloud_server.agents[5] (remote-exec): Building dependency tree... 50%
hcloud_server.agents[5] (remote-exec): Building dependency tree... Done
hcloud_server.agents[5] (remote-exec): Reading state information... 0%
hcloud_server.agents[5] (remote-exec): Reading state information... 0%
hcloud_server.agents[5] (remote-exec): Reading state information... Done
hcloud_server.agents[5] (remote-exec): mtools is already the newest version (4.0.26-1).
Progress: [ 79%] [##############....]
hcloud_server.agents[5] (remote-exec): The following additional packages will be installed:
hcloud_server.agents[5] (remote-exec):   grub-efi-amd64 grub-efi-amd64-bin
hcloud_server.agents[5] (remote-exec):   grub-efi-amd64-signed libburn4
hcloud_server.agents[5] (remote-exec):   libisoburn1 libisofs6 libjte2
hcloud_server.agents[5] (remote-exec):   shim-helpers-amd64-signed
hcloud_server.agents[5] (remote-exec):   shim-signed shim-signed-common
hcloud_server.agents[5] (remote-exec):   shim-unsigned
hcloud_server.agents[5] (remote-exec): Suggested packages:
hcloud_server.agents[5] (remote-exec):   desktop-base xorriso-tcltk jigit
hcloud_server.agents[5] (remote-exec):   cdck
hcloud_server.agents[5] (remote-exec): Recommended packages:
hcloud_server.agents[5] (remote-exec):   secureboot-db
hcloud_server.agents[5] (remote-exec): The following NEW packages will be installed:
hcloud_server.agents[5] (remote-exec):   grub-efi grub-efi-amd64
hcloud_server.agents[5] (remote-exec):   grub-efi-amd64-bin
hcloud_server.agents[5] (remote-exec):   grub-efi-amd64-signed grub-pc-bin
hcloud_server.agents[5] (remote-exec):   libburn4 libisoburn1 libisofs6
hcloud_server.agents[5] (remote-exec):   libjte2 shim-helpers-amd64-signed
hcloud_server.agents[5] (remote-exec):   shim-signed shim-signed-common
hcloud_server.agents[5] (remote-exec):   shim-unsigned xorriso
hcloud_server.agents[5] (remote-exec): 0 upgraded, 14 newly installed, 0 to remove and 0 not upgraded.
hcloud_server.agents[5] (remote-exec): Need to get 4,346 kB of archives.
hcloud_server.agents[5] (remote-exec): After this operation, 24.3 MB of additional disk space will be used.

hcloud_server.agents[4] (remote-exec): Creating config file /etc/default/grub with new version
hcloud_server.agents[5] (remote-exec):
hcloud_server.agents[5] (remote-exec): 0% [Working]
hcloud_server.agents[5] (remote-exec): Get:1 http://mirror.hetzner.com/debian/packages bullseye/main amd64 grub-efi-amd64-bin amd64 2.04-20 [699 kB]
hcloud_server.agents[5] (remote-exec):
hcloud_server.agents[5] (remote-exec): 0% [1 grub-efi-amd64-bin 14.2 kB/699 kB
hcloud_server.agents[5] (remote-exec): 14% [Working]
hcloud_server.agents[5] (remote-exec): Get:2 http://mirror.hetzner.com/debian/packages bullseye/main amd64 grub-efi-amd64 amd64 2.04-20 [39.8 kB]
hcloud_server.agents[5] (remote-exec):
hcloud_server.agents[5] (remote-exec): 14% [2 grub-efi-amd64 6,991 B/39.8 kB 1
hcloud_server.agents[5] (remote-exec): 16% [Waiting for headers]
hcloud_server.agents[5] (remote-exec): Get:3 http://mirror.hetzner.com/debian/packages bullseye/main amd64 grub-efi amd64 2.04-20 [2,536 B]
hcloud_server.agents[5] (remote-exec): Get:4 http://mirror.hetzner.com/debian/packages bullseye/main amd64 grub-efi-amd64-signed amd64 1+2.04+20 [469 kB]
hcloud_server.agents[5] (remote-exec):
hcloud_server.agents[5] (remote-exec): 19% [4 grub-efi-amd64-signed 41.0 kB/46
hcloud_server.agents[5] (remote-exec): Get:5 http://mirror.hetzner.com/debian/packages bullseye/main amd64 grub-pc-bin amd64 2.04-20 [971 kB]
hcloud_server.agents[5] (remote-exec):
hcloud_server.agents[5] (remote-exec): 29% [5 grub-pc-bin 71.6 kB/971 kB 7%]
hcloud_server.agents[5] (remote-exec): 47% [Waiting for headers]
hcloud_server.agents[5] (remote-exec): Get:6 http://mirror.hetzner.com/debian/packages bullseye/main amd64 libburn4 amd64 1.5.2-1 [165 kB]
hcloud_server.agents[5] (remote-exec):
hcloud_server.agents[5] (remote-exec): 48% [6 libburn4 45.1 kB/165 kB 27%]
hcloud_server.agents[5] (remote-exec): 52% [Waiting for headers]
hcloud_server.agents[5] (remote-exec): Get:7 http://mirror.hetzner.com/debian/packages bullseye/main amd64 libjte2 amd64 1.22-3 [30.0 kB]
hcloud_server.agents[5] (remote-exec):
hcloud_server.agents[5] (remote-exec): 54% [Waiting for headers]
hcloud_server.agents[5] (remote-exec): Get:8 http://mirror.hetzner.com/debian/packages bullseye/main amd64 libisofs6 amd64 1.5.2-1 [205 kB]
hcloud_server.agents[5] (remote-exec):
hcloud_server.agents[5] (remote-exec): 55% [8 libisofs6 46.5 kB/205 kB 23%]
hcloud_server.agents[5] (remote-exec): 59% [Working]
hcloud_server.agents[5] (remote-exec): Get:9 http://mirror.hetzner.com/debian/packages bullseye/main amd64 libisoburn1 amd64 1.5.2-1 [391 kB]
hcloud_server.agents[5] (remote-exec):
hcloud_server.agents[5] (remote-exec): 60% [9 libisoburn1 38.1 kB/391 kB 10%]
hcloud_server.agents[5] (remote-exec): 68% [Working]
hcloud_server.agents[5] (remote-exec): Get:10 http://mirror.hetzner.com/debian/packages bullseye/main amd64 shim-unsigned amd64 15.4-7 [431 kB]
hcloud_server.agents[5] (remote-exec):
hcloud_server.agents[5] (remote-exec): 68% [10 shim-unsigned 40.0 kB/431 kB 9%
hcloud_server.agents[5] (remote-exec): 77% [Waiting for headers]
hcloud_server.agents[5] (remote-exec): Get:11 http://mirror.hetzner.com/debian/packages bullseye/main amd64 shim-helpers-amd64-signed amd64 1+15.4+7 [298 kB]
hcloud_server.agents[5] (remote-exec):
hcloud_server.agents[5] (remote-exec): 78% [11 shim-helpers-amd64-signed 53.3
hcloud_server.agents[5] (remote-exec): 84% [Waiting for headers]
hcloud_server.agents[5] (remote-exec): Get:12 http://mirror.hetzner.com/debian/packages bullseye/main amd64 shim-signed-common all 1.38+15.4-7 [13.6 kB]
hcloud_server.agents[5] (remote-exec):
hcloud_server.agents[5] (remote-exec): 86% [Waiting for headers]
hcloud_server.agents[5] (remote-exec): Get:13 http://mirror.hetzner.com/debian/packages bullseye/main amd64 shim-signed amd64 1.38+15.4-7 [320 kB]
hcloud_server.agents[5] (remote-exec):
hcloud_server.agents[5] (remote-exec): 87% [13 shim-signed 83.0 kB/320 kB 26%]
hcloud_server.agents[5] (remote-exec): 93% [Waiting for headers]
hcloud_server.agents[5] (remote-exec): Get:14 http://mirror.hetzner.com/debian/packages bullseye/main amd64 xorriso amd64 1.5.2-1 [311 kB]
hcloud_server.agents[5] (remote-exec):
hcloud_server.agents[5] (remote-exec): 94% [14 xorriso 73.1 kB/311 kB 23%]
hcloud_server.agents[5] (remote-exec): 100% [Working]
hcloud_server.agents[5] (remote-exec): Fetched 4,346 kB in 0s (14.6 MB/s)
Progress: [ 81%] [##############....]  Setting up libisoburn1:amd64 (1.5.2-1) ...
Progress: [ 82%] [##############....]  Setting up shim-helpers-amd64-signed (1+15.4+7) ...
Progress: [ 86%] [###############...]  Setting up xorriso (1.5.2-1) ...
hcloud_server.agents[4] (remote-exec): Setting up grub-efi (2.04-20) ...
Progress: [ 95%] [#################.]  Setting up shim-signed:amd64 (1.38+15.4-7) ...
Progress: [ 96%] [#################.]  Processing triggers for libc-bin (2.31-13+deb11u2) ...
Progress: [ 98%] [#################.]
hcloud_server.agents[5] (remote-exec): Preconfiguring packages ...
                                       Selecting previously unselected package grub-efi-amd64-bin.
hcloud_server.agents[5] (remote-exec): (Reading database ...
hcloud_server.agents[5] (remote-exec): (Reading database ... 5%
hcloud_server.agents[5] (remote-exec): (Reading database ... 10%
hcloud_server.agents[5] (remote-exec): (Reading database ... 15%
hcloud_server.agents[5] (remote-exec): (Reading database ... 20%
hcloud_server.agents[5] (remote-exec): (Reading database ... 25%
hcloud_server.agents[5] (remote-exec): (Reading database ... 30%
hcloud_server.agents[5] (remote-exec): (Reading database ... 35%
hcloud_server.agents[5] (remote-exec): (Reading database ... 40%
hcloud_server.agents[5] (remote-exec): (Reading database ... 45%
hcloud_server.agents[5] (remote-exec): (Reading database ... 50%
hcloud_server.agents[5] (remote-exec): (Reading database ... 55%
hcloud_server.agents[5] (remote-exec): (Reading database ... 60%
hcloud_server.agents[5] (remote-exec): (Reading database ... 65%
hcloud_server.agents[5] (remote-exec): (Reading database ... 70%
hcloud_server.agents[5] (remote-exec): (Reading database ... 75%
hcloud_server.agents[5] (remote-exec): (Reading database ... 80%
hcloud_server.agents[5] (remote-exec): (Reading database ... 85%
hcloud_server.agents[5] (remote-exec): (Reading database ... 90%
hcloud_server.agents[5] (remote-exec): (Reading database ... 95%
hcloud_server.agents[5] (remote-exec): (Reading database ... 100%
hcloud_server.agents[5] (remote-exec): (Reading database ... 62163 files and directories currently installed.)
hcloud_server.agents[5] (remote-exec): Preparing to unpack .../00-grub-efi-amd64-bin_2.04-20_amd64.deb ...
Progress: [  2%] [..................]  Unpacking grub-efi-amd64-bin (2.04-20) ...
hcloud_server.agents[4] (remote-exec): Processing triggers for man-db (2.9.4-2) ...
Progress: [  4%] [..................]  Selecting previously unselected package grub-efi-amd64.
hcloud_server.agents[5] (remote-exec): Preparing to unpack .../01-grub-efi-amd64_2.04-20_amd64.deb ...
Progress: [  5%] [..................]  Unpacking grub-efi-amd64 (2.04-20) ...
Progress: [  7%] [#.................]  Selecting previously unselected package grub-efi.
hcloud_server.agents[5] (remote-exec): Preparing to unpack .../02-grub-efi_2.04-20_amd64.deb ...
Progress: [  9%] [#.................]  Unpacking grub-efi (2.04-20) ...
Progress: [ 11%] [#.................]  Selecting previously unselected package grub-efi-amd64-signed.
hcloud_server.agents[5] (remote-exec): Preparing to unpack .../03-grub-efi-amd64-signed_1+2.04+20_amd64.deb ...
Progress: [ 12%] [##................]  Unpacking grub-efi-amd64-signed (1+2.04+20) ...
Progress: [ 14%] [##................]  Selecting previously unselected package grub-pc-bin.
hcloud_server.agents[5] (remote-exec): Preparing to unpack .../04-grub-pc-bin_2.04-20_amd64.deb ...
Progress: [ 16%] [##................]  Unpacking grub-pc-bin (2.04-20) ...
Progress: [ 18%] [###...............]  Selecting previously unselected package libburn4:amd64.
hcloud_server.agents[5] (remote-exec): Preparing to unpack .../05-libburn4_1.5.2-1_amd64.deb ...
Progress: [ 19%] [###...............]  Unpacking libburn4:amd64 (1.5.2-1) ...
Progress: [ 21%] [###...............]  Selecting previously unselected package libjte2:amd64.
hcloud_server.agents[5] (remote-exec): Preparing to unpack .../06-libjte2_1.22-3_amd64.deb ...
Progress: [ 23%] [####..............]  Unpacking libjte2:amd64 (1.22-3) ...
Progress: [ 25%] [####..............]  Selecting previously unselected package libisofs6:amd64.
hcloud_server.agents[5] (remote-exec): Preparing to unpack .../07-libisofs6_1.5.2-1_amd64.deb ...
Progress: [ 26%] [####..............]  Unpacking libisofs6:amd64 (1.5.2-1) ...
Progress: [ 28%] [#####.............]  Selecting previously unselected package libisoburn1:amd64.
hcloud_server.agents[5] (remote-exec): Preparing to unpack .../08-libisoburn1_1.5.2-1_amd64.deb ...
Progress: [ 30%] [#####.............]  Unpacking libisoburn1:amd64 (1.5.2-1) ...
Progress: [ 32%] [#####.............]  Selecting previously unselected package shim-unsigned.
hcloud_server.agents[5] (remote-exec): Preparing to unpack .../09-shim-unsigned_15.4-7_amd64.deb ...
Progress: [ 33%] [#####.............]  Unpacking shim-unsigned (15.4-7) ...
Progress: [ 35%] [######............]  Selecting previously unselected package shim-helpers-amd64-signed.
hcloud_server.agents[5] (remote-exec): Preparing to unpack .../10-shim-helpers-amd64-signed_1+15.4+7_amd64.deb ...
Progress: [ 37%] [######............]  Unpacking shim-helpers-amd64-signed (1+15.4+7) ...
Progress: [ 39%] [######............]  Selecting previously unselected package shim-signed-common.
hcloud_server.agents[5] (remote-exec): Preparing to unpack .../11-shim-signed-common_1.38+15.4-7_all.deb ...
Progress: [ 40%] [#######...........]  Unpacking shim-signed-common (1.38+15.4-7) ...
Progress: [ 42%] [#######...........]  Selecting previously unselected package shim-signed:amd64.
hcloud_server.agents[5] (remote-exec): Preparing to unpack .../12-shim-signed_1.38+15.4-7_amd64.deb ...
Progress: [ 44%] [#######...........]  Unpacking shim-signed:amd64 (1.38+15.4-7) ...
hcloud_server.agents[4] (remote-exec): Processing triggers for install-info (6.7.0.dfsg.2-6) ...
Progress: [ 46%] [########..........]  Selecting previously unselected package xorriso.
hcloud_server.agents[5] (remote-exec): Preparing to unpack .../13-xorriso_1.5.2-1_amd64.deb ...
Progress: [ 47%] [########..........]  Unpacking xorriso (1.5.2-1) ...
Progress: [ 49%] [########..........]  Setting up grub-efi-amd64-signed (1+2.04+20) ...
hcloud_server.agents[5] (remote-exec): Setting up libjte2:amd64 (1.22-3) ...
Progress: [ 56%] [##########........]  Setting up grub-pc-bin (2.04-20) ...
Progress: [ 60%] [##########........]  Setting up shim-signed-common (1.38+15.4-7) ...
Progress: [ 61%] [###########.......]
hcloud_server.agents[5] (remote-exec): No DKMS packages installed: not changing Secure Boot validation state.
Progress: [ 63%] [###########.......]  Setting up libburn4:amd64 (1.5.2-1) ...
Progress: [ 67%] [###########.......]  Setting up grub-efi-amd64-bin (2.04-20) ...
Progress: [ 70%] [############......]  Setting up shim-unsigned (15.4-7) ...
Progress: [ 74%] [#############.....]  Setting up libisofs6:amd64 (1.5.2-1) ...
Progress: [ 77%] [#############.....]  Setting up grub-efi-amd64 (2.04-20) ...

Progress: [ 79%] [##############....]

hcloud_server.agents[5] (remote-exec): Creating config file /etc/default/grub with new version
Progress: [ 81%] [##############....]  Setting up libisoburn1:amd64 (1.5.2-1) ...
Progress: [ 84%] [###############...]  Setting up shim-helpers-amd64-signed (1+15.4+7) ...
Progress: [ 88%] [###############...]  Setting up xorriso (1.5.2-1) ...
Progress: [ 91%] [################..]  Setting up grub-efi (2.04-20) ...
Progress: [ 95%] [#################.]  Setting up shim-signed:amd64 (1.38+15.4-7) ...
Progress: [ 98%] [#################.]  Processing triggers for libc-bin (2.31-13+deb11u2) ...
hcloud_server.agents[4] (remote-exec):   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
hcloud_server.agents[4] (remote-exec):                                  Dload  Upload   Total   Spent    Left  Speed
hcloud_server.agents[4] (remote-exec):   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
hcloud_server.agents[5] (remote-exec): Processing triggers for man-db (2.9.4-2) ...
hcloud_server.agents[4] (remote-exec): 100  9021  100  9021    0     0  16372      0 --:--:-- --:--:-- --:--:-- 16372
hcloud_server.agents[4] (remote-exec): mount: /run/k3os/iso: wrong fs type, bad option, bad superblock on /dev/sda, missing codepage or helper program, or other error.
hcloud_server.agents[4] (remote-exec):   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
hcloud_server.agents[4] (remote-exec):                                  Dload  Upload   Total   Spent    Left  Speed
hcloud_server.agents[4] (remote-exec):   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
hcloud_server.agents[4] (remote-exec): 100   653  100   653    0     0  22517      0 --:--:-- --:--:-- --:--:-- 22517
hcloud_server.agents[5] (remote-exec): Processing triggers for install-info (6.7.0.dfsg.2-6) ...
hcloud_server.agents[5]: Still creating... [50s elapsed]
hcloud_server.agents[4]: Still creating... [50s elapsed]

hcloud_server.agents[5] (remote-exec):   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
hcloud_server.agents[5] (remote-exec):                                  Dload  Upload   Total   Spent    Left  Speed
hcloud_server.agents[5] (remote-exec):   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
hcloud_server.agents[5] (remote-exec): 100  9021  100  9021    0     0  38883      0 --:--:-- --:--:-- --:--:-- 38883
hcloud_server.agents[4] (remote-exec):  21  513M   21  109M    0     0   140M      0  0:00:03 --:--:--  0:00:03  140M
hcloud_server.agents[5] (remote-exec): mount: /run/k3os/iso: wrong fs type, bad option, bad superblock on /dev/sda, missing codepage or helper program, or other error.
hcloud_server.agents[5] (remote-exec):   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
hcloud_server.agents[5] (remote-exec):                                  Dload  Upload   Total   Spent    Left  Speed
hcloud_server.agents[5] (remote-exec):   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
hcloud_server.agents[5] (remote-exec): 100   653  100   653    0     0  21766      0 --:--:-- --:--:-- --:--:-- 22517
hcloud_server.agents[5] (remote-exec):   0  513M    0  9175    0     0  58814      0  2:32:32 --:--:--  2:32:32 58814
hcloud_server.agents[4] (remote-exec):  77  513M   77  397M    0     0   224M      0  0:00:02  0:00:01  0:00:01  288M
hcloud_server.agents[4] (remote-exec): 100  513M  100  513M    0     0   233M      0  0:00:02  0:00:02 --:--:--  285M
hcloud_server.agents[5] (remote-exec):  55  513M   55  282M    0     0   251M      0  0:00:02  0:00:01  0:00:01  291M
hcloud_server.agents[4] (remote-exec): 1+0 records in
hcloud_server.agents[4] (remote-exec): 1+0 records out
hcloud_server.agents[4] (remote-exec): 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505506 s, 207 MB/s
hcloud_server.agents[5] (remote-exec): 100  513M  100  513M    0     0   281M      0  0:00:01  0:00:01 --:--:--  307M
hcloud_server.agents[5] (remote-exec): 1+0 records in
hcloud_server.agents[5] (remote-exec): 1+0 records out
hcloud_server.agents[5] (remote-exec): 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00382795 s, 274 MB/s
hcloud_server.agents[4] (remote-exec): mke2fs 1.46.2 (28-Feb-2021)
hcloud_server.agents[4] (remote-exec): Discarding device blocks: done
hcloud_server.agents[4] (remote-exec): Creating filesystem with 170752 4k blocks and 42720 inodes
hcloud_server.agents[4] (remote-exec): Filesystem UUID: 893634a3-76d0-46cd-8d38-ed51e8f8ab79
hcloud_server.agents[4] (remote-exec): Superblock backups stored on blocks:
hcloud_server.agents[4] (remote-exec): 	32768, 98304, 163840

hcloud_server.agents[4] (remote-exec): Allocating group tables: done
hcloud_server.agents[4] (remote-exec): Writing inode tables: done
hcloud_server.agents[4] (remote-exec): Creating journal (4096 blocks): done
hcloud_server.agents[4] (remote-exec): Writing superblocks and filesystem accounting information: done

hcloud_server.agents[4] (remote-exec): k3os/
hcloud_server.agents[4] (remote-exec): k3os/system/
hcloud_server.agents[4] (remote-exec): k3os/system/config.yaml
hcloud_server.agents[4] (remote-exec): k3os/system/k3os/
hcloud_server.agents[4] (remote-exec): k3os/system/k3os/current
hcloud_server.agents[4] (remote-exec): k3os/system/k3os/v0.21.5-k3s2r1/
hcloud_server.agents[4] (remote-exec): k3os/system/k3os/v0.21.5-k3s2r1/k3os
hcloud_server.agents[4] (remote-exec): k3os/system/k3os/v0.21.5-k3s2r1/k3os-install.sh
hcloud_server.agents[4] (remote-exec): k3os/system/k3s/
hcloud_server.agents[4] (remote-exec): k3os/system/k3s/current
hcloud_server.agents[4] (remote-exec): k3os/system/k3s/v1.21.5+k3s2/
hcloud_server.agents[4] (remote-exec): k3os/system/k3s/v1.21.5+k3s2/k3s
hcloud_server.agents[4] (remote-exec): k3os/system/k3s/v1.21.5+k3s2/k3s-install.sh
hcloud_server.agents[4] (remote-exec): k3os/system/kernel/
hcloud_server.agents[4] (remote-exec): k3os/system/kernel/5.4.0-88-generic/
hcloud_server.agents[4] (remote-exec): k3os/system/kernel/5.4.0-88-generic/initrd
hcloud_server.agents[4] (remote-exec): k3os/system/kernel/5.4.0-88-generic/kernel.squashfs
hcloud_server.agents[5] (remote-exec): mke2fs 1.46.2 (28-Feb-2021)
hcloud_server.agents[5] (remote-exec): Discarding device blocks: done
hcloud_server.agents[5] (remote-exec): Creating filesystem with 170752 4k blocks and 42720 inodes
hcloud_server.agents[5] (remote-exec): Filesystem UUID: 5570ae31-0d89-4d55-869f-2b9df5549edb
hcloud_server.agents[5] (remote-exec): Superblock backups stored on blocks:
hcloud_server.agents[5] (remote-exec): 	32768, 98304, 163840

hcloud_server.agents[5] (remote-exec): Allocating group tables: done
hcloud_server.agents[5] (remote-exec): Writing inode tables: done
hcloud_server.agents[5] (remote-exec): Creating journal (4096 blocks): done
hcloud_server.agents[5] (remote-exec): Writing superblocks and filesystem accounting information: done

hcloud_server.agents[5] (remote-exec): k3os/
hcloud_server.agents[5] (remote-exec): k3os/system/
hcloud_server.agents[5] (remote-exec): k3os/system/config.yaml
hcloud_server.agents[5] (remote-exec): k3os/system/k3os/
hcloud_server.agents[5] (remote-exec): k3os/system/k3os/current
hcloud_server.agents[5] (remote-exec): k3os/system/k3os/v0.21.5-k3s2r1/
hcloud_server.agents[5] (remote-exec): k3os/system/k3os/v0.21.5-k3s2r1/k3os
hcloud_server.agents[5] (remote-exec): k3os/system/k3os/v0.21.5-k3s2r1/k3os-install.sh
hcloud_server.agents[5] (remote-exec): k3os/system/k3s/
hcloud_server.agents[5] (remote-exec): k3os/system/k3s/current
hcloud_server.agents[5] (remote-exec): k3os/system/k3s/v1.21.5+k3s2/
hcloud_server.agents[5] (remote-exec): k3os/system/k3s/v1.21.5+k3s2/k3s
hcloud_server.agents[5] (remote-exec): k3os/system/k3s/v1.21.5+k3s2/k3s-install.sh
hcloud_server.agents[5] (remote-exec): k3os/system/kernel/
hcloud_server.agents[5] (remote-exec): k3os/system/kernel/5.4.0-88-generic/
hcloud_server.agents[5] (remote-exec): k3os/system/kernel/5.4.0-88-generic/initrd
hcloud_server.agents[4] (remote-exec): k3os/system/kernel/current
hcloud_server.agents[5] (remote-exec): k3os/system/kernel/5.4.0-88-generic/kernel.squashfs
hcloud_server.agents[4] (remote-exec): Installing for i386-pc platform.
hcloud_server.agents[5] (remote-exec): k3os/system/kernel/current
hcloud_server.agents[5] (remote-exec): Installing for i386-pc platform.
hcloud_server.agents[4] (remote-exec): Installation finished. No error reported.
hcloud_server.agents[4] (remote-exec): Reboot scheduled for Thu 2022-02-10 23:00:26 CET, use 'shutdown -c' to cancel.
hcloud_server.agents[5] (remote-exec): Installation finished. No error reported.
hcloud_server.agents[5] (remote-exec): Reboot scheduled for Thu 2022-02-10 23:00:26 CET, use 'shutdown -c' to cancel.
hcloud_server.agents[4]: Provisioning with 'local-exec'...
hcloud_server.agents[4] (local-exec): Executing: ["/bin/sh" "-c" "sleep 60 && ping 78.47.82.149 | grep --line-buffered 'bytes from' | head -1 && sleep 100"]
hcloud_server.agents[5]: Provisioning with 'local-exec'...
hcloud_server.agents[5] (local-exec): Executing: ["/bin/sh" "-c" "sleep 60 && ping 78.46.194.159 | grep --line-buffered 'bytes from' | head -1 && sleep 100"]
hcloud_server.agents[5]: Still creating... [1m0s elapsed]
hcloud_server.agents[4]: Still creating... [1m0s elapsed]
hcloud_server.agents[5]: Still creating... [1m10s elapsed]
hcloud_server.agents[4]: Still creating... [1m10s elapsed]
hcloud_server.agents[5]: Still creating... [1m20s elapsed]
hcloud_server.agents[4]: Still creating... [1m20s elapsed]
hcloud_server.agents[4]: Still creating... [1m30s elapsed]
hcloud_server.agents[5]: Still creating... [1m30s elapsed]
hcloud_server.agents[4]: Still creating... [1m40s elapsed]
hcloud_server.agents[5]: Still creating... [1m40s elapsed]
hcloud_server.agents[5]: Still creating... [1m50s elapsed]
hcloud_server.agents[4]: Still creating... [1m50s elapsed]
hcloud_server.agents[5]: Still creating... [2m0s elapsed]
hcloud_server.agents[4]: Still creating... [2m0s elapsed]
hcloud_server.agents[5]: Still creating... [2m10s elapsed]
hcloud_server.agents[4]: Still creating... [2m10s elapsed]
hcloud_server.agents[5] (local-exec): 64 bytes from 78.46.194.159: icmp_seq=20 ttl=53 time=39.428 ms
hcloud_server.agents[5]: Still creating... [2m20s elapsed]
hcloud_server.agents[4]: Still creating... [2m20s elapsed]
hcloud_server.agents[4] (local-exec): 64 bytes from 78.47.82.149: icmp_seq=21 ttl=53 time=29.669 ms
hcloud_server.agents[5]: Still creating... [2m30s elapsed]
hcloud_server.agents[4]: Still creating... [2m30s elapsed]
hcloud_server.agents[4]: Still creating... [2m40s elapsed]
hcloud_server.agents[5]: Still creating... [2m40s elapsed]
hcloud_server.agents[5]: Still creating... [2m50s elapsed]
hcloud_server.agents[4]: Still creating... [2m50s elapsed]
hcloud_server.agents[5]: Still creating... [3m0s elapsed]
hcloud_server.agents[4]: Still creating... [3m0s elapsed]
hcloud_server.agents[4]: Still creating... [3m10s elapsed]
hcloud_server.agents[5]: Still creating... [3m10s elapsed]
hcloud_server.agents[5]: Still creating... [3m20s elapsed]
hcloud_server.agents[4]: Still creating... [3m20s elapsed]
hcloud_server.agents[4]: Still creating... [3m30s elapsed]
hcloud_server.agents[5]: Still creating... [3m30s elapsed]
hcloud_server.agents[5]: Still creating... [3m40s elapsed]
hcloud_server.agents[4]: Still creating... [3m40s elapsed]
hcloud_server.agents[5]: Still creating... [3m50s elapsed]
hcloud_server.agents[4]: Still creating... [3m50s elapsed]
hcloud_server.agents[5]: Still creating... [4m0s elapsed]
hcloud_server.agents[4]: Still creating... [4m0s elapsed]
hcloud_server.agents[5]: Creation complete after 4m2s [id=17887432]
hcloud_server.agents[4]: Creation complete after 4m2s [id=17887429]
╷
│ Error: hcloud/inlineAttachServerToNetwork: attach server to network: provided IP is not available (ip_not_available)
│
│   with hcloud_server.agents[3],
│   on agents.tf line 1, in resource "hcloud_server" "agents":
│    1: resource "hcloud_server" "agents" {
│
╵

from terraform-hcloud-kube-hetzner.

mysticaltech avatar mysticaltech commented on May 25, 2024 2

Found the root cause of the occasional hcloud/inlineAttachServerToNetwork: attach server to network: provided IP is not available (ip_not_available) error, @phaer, @mnencia, and @exocode.

Sometimes the LB grabs one of the IPs before it is available to the nodes. So I am moving the servers to 10.0.1.x and the agents to 10.0.2.x in the new branch https://github.com/kube-hetzner/kube-hetzner/tree/k3s-install.

We have no other choice because we cannot set the private IP of the LB; it is assigned automatically.
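A minimal sketch of what that subnet split could look like in the Hetzner Cloud provider's HCL (resource names and the network zone here are illustrative assumptions, not necessarily what the k3s-install branch actually uses):

```hcl
# Parent network stays 10.0.0.0/16; the LB's private IP is auto-assigned
# somewhere inside it (typically in the first /24).
resource "hcloud_network" "k3s" {
  name     = "k3s"
  ip_range = "10.0.0.0/16"
}

# Control-plane servers get their own /24 ...
resource "hcloud_network_subnet" "control_plane" {
  network_id   = hcloud_network.k3s.id
  type         = "cloud"
  network_zone = "eu-central"
  ip_range     = "10.0.1.0/24"
}

# ... and agents another, so the IPs Terraform hands to nodes
# can no longer collide with whatever the LB grabbed in 10.0.0.x.
resource "hcloud_network_subnet" "agents" {
  network_id   = hcloud_network.k3s.id
  type         = "cloud"
  network_zone = "eu-central"
  ip_range     = "10.0.2.0/24"
}
```

The LB's address is still auto-assigned, but with node IPs pinned outside 10.0.0.0/24 the ip_not_available race goes away.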

ksnip_20220216-095828


mnencia avatar mnencia commented on May 25, 2024 2

I was searching for a way to move the LB IP to a different /24 network, but your solution is better: move everything under our control elsewhere and leave the 10.0.0.0/24 network for the things we cannot control.


exocode avatar exocode commented on May 25, 2024 1

Or should I simply apply patch_latest.yaml against my cluster?


exocode avatar exocode commented on May 25, 2024 1

OK, the CSI is working again, thank you. But I am still stuck with my agent-3 node:

  1. I tried to shut down and restart the node
  2. I tried to reapply Terraform while node-3 was running (see "Logs when node running")
  3. I tried to reapply Terraform after node-3 was deleted (see "Logs when node deleted")

Logs when node running

❯ tf apply --auto-approve
random_password.k3s_token: Refreshing state... [id=none]
local_file.traefik_config: Refreshing state... [id=25ba84696ee16d68f5b98f6ea6b70bb14c3c530c]
hcloud_network.k3s: Refreshing state... [id=1352333]
hcloud_placement_group.k3s_placement_group: Refreshing state... [id=19653]
hcloud_ssh_key.default: Refreshing state... [id=5492430]
hcloud_firewall.k3s: Refreshing state... [id=290151]
local_file.hetzner_ccm_config: Refreshing state... [id=f5ec6cb5689cb5830d04857365d567edae562174]
hcloud_network_subnet.k3s: Refreshing state... [id=1352333-10.0.0.0/16]
local_file.hetzner_csi_config: Refreshing state... [id=17eb99cc9c025b24af1e1f591d01ec110dc91dc1]
hcloud_server.first_control_plane: Refreshing state... [id=17736249]
hcloud_server.control_planes[1]: Refreshing state... [id=17736378]
hcloud_server.control_planes[0]: Refreshing state... [id=17736377]
hcloud_server.agents[4]: Refreshing state... [id=17858945]
hcloud_server.agents[2]: Refreshing state... [id=17736383]
hcloud_server.agents[5]: Refreshing state... [id=17861319]
hcloud_server.agents[3]: Refreshing state... [id=17872069]
hcloud_server.agents[0]: Refreshing state... [id=17736379]
hcloud_server.agents[1]: Refreshing state... [id=17736385]

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply":

  # hcloud_firewall.k3s has been changed
  ~ resource "hcloud_firewall" "k3s" {
        id     = "290151"
        name   = "k3s-firewall"
        # (1 unchanged attribute hidden)

      - apply_to {
          - server = 17870378 -> null
        }
      + apply_to {
          + server = 17872069
        }

        # (21 unchanged blocks hidden)
    }
  # hcloud_placement_group.k3s_placement_group has been changed
  ~ resource "hcloud_placement_group" "k3s_placement_group" {
        id      = "19653"
        name    = "k3s-placement-group"
      ~ servers = [
          - 17870378,
          + 17872069,
            # (8 unchanged elements hidden)
        ]
        # (2 unchanged attributes hidden)
    }
  # hcloud_server.agents[3] has been changed
  ~ resource "hcloud_server" "agents" {
      + datacenter         = "fsn1-dc14"
        id                 = "17872069"
      + ipv4_address       = "78.47.82.xxx"
      + ipv6_address       = "2a01:4f8:c17:xxxx::1"
      + ipv6_network       = "2a01:4f8:c17:xxxx::/64"
        name               = "k3s-agent-3"
      + status             = "running"
        # (12 unchanged attributes hidden)

      + network {
          + alias_ips   = []
          + ip          = "10.0.0.11"
          + mac_address = "86:00:00:03:9b:35"
          + network_id  = 1352333
        }
      - network {
          - alias_ips  = [] -> null
          - ip         = "10.0.0.8" -> null
          - network_id = 1352333 -> null
        }
    }
  # local_file.hetzner_csi_config has been deleted
  - resource "local_file" "hetzner_csi_config" {
      - content              = <<-EOT
            apiVersion: kustomize.config.k8s.io/v1beta1
            kind: Kustomization

            resources:
            - "https://raw.githubusercontent.com/hetznercloud/csi-driver/v1.6.0/deploy/kubernetes/hcloud-csi.yml"


            patchesStrategicMerge:
            - patch_latest.yaml
        EOT -> null
      - directory_permission = "0755" -> null
      - file_permission      = "0644" -> null
      - filename             = "./hetzner/csi/kustomization.yaml" -> null
      - id                   = "17eb99cc9c025b24af1e1f591d01ec110dc91dc1" -> null
    }

Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these changes.

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # hcloud_server.agents[3] is tainted, so must be replaced
-/+ resource "hcloud_server" "agents" {
      + backup_window      = (known after apply)
      ~ datacenter         = "fsn1-dc14" -> (known after apply)
      ~ id                 = "17872069" -> (known after apply)
      ~ ipv4_address       = "78.47.82.xxx" -> (known after apply)
      ~ ipv6_address       = "2a01:4f8:c17:xxxx::1" -> (known after apply)
      ~ ipv6_network       = "2a01:4f8:c17:xxxx::/64" -> (known after apply)
        name               = "k3s-agent-3"
      ~ status             = "running" -> (known after apply)
        # (12 unchanged attributes hidden)

      - network {
          - alias_ips   = [] -> null
          - ip          = "10.0.0.11" -> null
          - mac_address = "86:00:00:03:9b:35" -> null
          - network_id  = 1352333 -> null
        }
      + network {
          + alias_ips   = []
          + ip          = "10.0.0.8"
          + mac_address = (known after apply)
          + network_id  = 1352333
        }
    }

  # local_file.hetzner_csi_config will be created
  + resource "local_file" "hetzner_csi_config" {
      + content              = <<-EOT
            apiVersion: kustomize.config.k8s.io/v1beta1
            kind: Kustomization

            resources:
            - "https://raw.githubusercontent.com/hetznercloud/csi-driver/v1.6.0/deploy/kubernetes/hcloud-csi.yml"

        EOT
      + directory_permission = "0755"
      + file_permission      = "0644"
      + filename             = "./hetzner/csi/kustomization.yaml"
      + id                   = (known after apply)
    }

Plan: 2 to add, 0 to change, 1 to destroy.

Changes to Outputs:
  ~ agents_public_ip = [
        # (2 unchanged elements hidden)
        "138.201.246.186",
      + (known after apply),
      + "78.46.163.xxx",
      + "49.12.100.xxx",
    ]
hcloud_server.agents[3]: Destroying... [id=17872069]
hcloud_server.agents[3]: Destruction complete after 1s
local_file.hetzner_csi_config: Creating...
hcloud_server.agents[3]: Creating...
local_file.hetzner_csi_config: Creation complete after 0s [id=aa232912bcf86722e32b698e1e077522c7f02a9d]
hcloud_server.agents[3]: Still creating... [10s elapsed]
╷
│ Error: hcloud/inlineAttachServerToNetwork: attach server to network: provided IP is not available (ip_not_available)
│
│   with hcloud_server.agents[3],
│   on agents.tf line 1, in resource "hcloud_server" "agents":
│    1: resource "hcloud_server" "agents" {
│
╵

Logs when node deleted

❯ tf apply --auto-approve
random_password.k3s_token: Refreshing state... [id=none]
hcloud_network.k3s: Refreshing state... [id=1352333]
hcloud_ssh_key.default: Refreshing state... [id=5492430]
hcloud_placement_group.k3s_placement_group: Refreshing state... [id=19653]
local_file.traefik_config: Refreshing state... [id=25ba84696ee16d68f5b98f6ea6b70bb14c3c530c]
hcloud_firewall.k3s: Refreshing state... [id=290151]
local_file.hetzner_csi_config: Refreshing state... [id=aa232912bcf86722e32b698e1e077522c7f02a9d]
local_file.hetzner_ccm_config: Refreshing state... [id=f5ec6cb5689cb5830d04857365d567edae562174]
hcloud_network_subnet.k3s: Refreshing state... [id=1352333-10.0.0.0/16]
hcloud_server.first_control_plane: Refreshing state... [id=17736249]
hcloud_server.agents[5]: Refreshing state... [id=17861319]
hcloud_server.agents[2]: Refreshing state... [id=17736383]
hcloud_server.agents[0]: Refreshing state... [id=17736379]
hcloud_server.agents[3]: Refreshing state... [id=17875783]
hcloud_server.agents[1]: Refreshing state... [id=17736385]
hcloud_server.control_planes[0]: Refreshing state... [id=17736377]
hcloud_server.agents[4]: Refreshing state... [id=17858945]
hcloud_server.control_planes[1]: Refreshing state... [id=17736378]

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply":

  # hcloud_placement_group.k3s_placement_group has been changed
  ~ resource "hcloud_placement_group" "k3s_placement_group" {
        id      = "19653"
        name    = "k3s-placement-group"
      ~ servers = [
          - 17872069,
            # (8 unchanged elements hidden)
        ]
        # (2 unchanged attributes hidden)
    }
  # hcloud_firewall.k3s has been changed
  ~ resource "hcloud_firewall" "k3s" {
        id     = "290151"
        name   = "k3s-firewall"
        # (1 unchanged attribute hidden)

      - apply_to {
          - server = 17872069 -> null
        }

        # (21 unchanged blocks hidden)
    }
  # hcloud_server.agents[3] has been deleted
  - resource "hcloud_server" "agents" {
      - backups            = false -> null
      - delete_protection  = false -> null
      - firewall_ids       = [
          - 290151,
        ] -> null
      - id                 = "17875783" -> null
      - image              = "ubuntu-20.04" -> null
      - keep_disk          = false -> null
      - labels             = {
          - "engine"      = "k3s"
          - "k3s_upgrade" = "true"
          - "provisioner" = "terraform"
        } -> null
      - location           = "fsn1" -> null
      - name               = "k3s-agent-3" -> null
      - placement_group_id = 19653 -> null
      - rebuild_protection = false -> null
      - rescue             = "linux64" -> null
      - server_type        = "cpx21" -> null
      - ssh_keys           = [
          - "5492430",
        ] -> null

      - network {
          - alias_ips  = [] -> null
          - ip         = "10.0.0.8" -> null
          - network_id = 1352333 -> null
        }
    }

Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these changes.

───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # hcloud_server.agents[3] will be created
  + resource "hcloud_server" "agents" {
      + backup_window      = (known after apply)
      + backups            = false
      + datacenter         = (known after apply)
      + delete_protection  = false
      + firewall_ids       = [
          + 290151,
        ]
      + id                 = (known after apply)
      + image              = "ubuntu-20.04"
      + ipv4_address       = (known after apply)
      + ipv6_address       = (known after apply)
      + ipv6_network       = (known after apply)
      + keep_disk          = false
      + labels             = {
          + "engine"      = "k3s"
          + "k3s_upgrade" = "true"
          + "provisioner" = "terraform"
        }
      + location           = "fsn1"
      + name               = "k3s-agent-3"
      + placement_group_id = 19653
      + rebuild_protection = false
      + rescue             = "linux64"
      + server_type        = "cpx21"
      + ssh_keys           = [
          + "5492430",
        ]
      + status             = (known after apply)

      + network {
          + alias_ips   = []
          + ip          = "10.0.0.8"
          + mac_address = (known after apply)
          + network_id  = 1352333
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  ~ agents_public_ip = [
        # (2 unchanged elements hidden)
        "138.201.246.xxx",
      + (known after apply),
      + "78.46.163.xxx",
      + "49.12.100.xxx",
    ]
hcloud_server.agents[3]: Creating...
hcloud_server.agents[3]: Still creating... [10s elapsed]
╷
│ Error: hcloud/inlineAttachServerToNetwork: attach server to network: provided IP is not available (ip_not_available)
│
│   with hcloud_server.agents[3],
│   on agents.tf line 1, in resource "hcloud_server" "agents":
│    1: resource "hcloud_server" "agents" {
│
╵


exocode avatar exocode commented on May 25, 2024 1

OK, just to be clear:

git clone.. 
git checkout master
tf apply...

Right?
Because I am sure I was on master. Or did you switch branches a few days before I initially cloned your repo (February 3rd)?
I will reinstall the whole cluster. Thank you.


exocode avatar exocode commented on May 25, 2024 1

OK, I saw that you pushed a few hours ago, which solves the problem above. It seems the "merge" to the "new version" still has some issues.

But even with a totally new git clone (then: delete the project, create a new project and token, cp terraform.tfvars.example terraform.tfvars,
paste the new token, and add my public and private keys), running:

tf init
tf plan
tf apply:

ends in this error:

hcloud_server.first_control_plane: Still creating... [5m50s elapsed]
hcloud_server.first_control_plane: Still creating... [6m0s elapsed]
hcloud_server.first_control_plane: Still creating... [6m10s elapsed]
hcloud_server.first_control_plane: Still creating... [6m20s elapsed]
hcloud_server.first_control_plane: Still creating... [6m30s elapsed]
hcloud_server.first_control_plane: Still creating... [6m40s elapsed]
hcloud_server.first_control_plane: Still creating... [6m50s elapsed]
╷
│ Error: file provisioner error
│
│   with hcloud_server.first_control_plane,
│   on master.tf line 54, in resource "hcloud_server" "first_control_plane":
│   54:   provisioner "file" {
│
│ timeout - last error: dial tcp 78.47.82.149:22: connect: operation timed out
╵

Attached new log:
newlog.txt


mysticaltech avatar mysticaltech commented on May 25, 2024

@exocode Maybe try powering the node off and on via the hcloud CLI (hcloud server poweroff / hcloud server poweron), and retry.


exocode avatar exocode commented on May 25, 2024

@mysticaltech thank you for your fast reply. Another possible culprit is the hcloud CSI driver:

❯ k describe pod -n kube-system hcloud-csi-node-n4qt8
Name:         hcloud-csi-node-n4qt8
Namespace:    kube-system
Priority:     0
Node:         k3s-agent-4/10.0.0.9
Start Time:   Thu, 10 Feb 2022 11:10:09 +0100
Labels:       app=hcloud-csi
              controller-revision-hash=566c85cd7b
              pod-template-generation=1
Annotations:  <none>
Status:       Pending
IP:           10.42.6.18
IPs:
  IP:           10.42.6.18
Controlled By:  DaemonSet/hcloud-csi-node
Containers:
  csi-node-driver-registrar:
    Container ID:
    Image:         k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Args:
      --kubelet-registration-path=/var/lib/kubelet/plugins/csi.hetzner.cloud/socket
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /registration from registration-dir (rw)
      /run/csi from plugin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gll7w (ro)
  hcloud-csi-driver:
    Container ID:   containerd://fd0435d4e0a9a01cb02b57c55b2e52d163bda920488e54578dae9f5b3f6f1b67
    Image:          hetznercloud/hcloud-csi-driver:1.6.0
    Image ID:       docker.io/hetznercloud/hcloud-csi-driver@sha256:1475d525f9a4039ae8f1d81666a0fc912d92f34415f6c53723656dff0ee16bd1
    Ports:          9189/TCP, 9808/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 10 Feb 2022 11:29:06 +0100
      Finished:     Thu, 10 Feb 2022 11:29:25 +0100
    Ready:          False
    Restart Count:  11
    Liveness:       http-get http://:healthz/healthz delay=10s timeout=3s period=2s #success=1 #failure=5
    Environment:
      CSI_ENDPOINT:      unix:///run/csi/socket
      METRICS_ENDPOINT:  0.0.0.0:9189
      ENABLE_METRICS:    true
      HCLOUD_TOKEN:      <set to the key 'token' in secret 'hcloud-csi'>  Optional: false
      KUBE_NODE_NAME:     (v1:spec.nodeName)
    Mounts:
      /dev from device-dir (rw)
      /run/csi from plugin-dir (rw)
      /var/lib/kubelet from kubelet-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gll7w (ro)
  liveness-probe:
    Container ID:
    Image:          k8s.gcr.io/sig-storage/livenessprobe:v2.3.0
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /run/csi from plugin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gll7w (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kubelet-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet
    HostPathType:  Directory
  plugin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/csi.hetzner.cloud/
    HostPathType:  DirectoryOrCreate
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry/
    HostPathType:  Directory
  device-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /dev
    HostPathType:  Directory
  kube-api-access-gll7w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 :NoExecute op=Exists
                             :NoSchedule op=Exists
                             CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  21m                  default-scheduler  Successfully assigned kube-system/hcloud-csi-node-n4qt8 to k3s-agent-4
  Normal   Pulling    21m                  kubelet            Pulling image "hetznercloud/hcloud-csi-driver:1.6.0"
  Normal   Started    21m                  kubelet            Started container hcloud-csi-driver
  Normal   Pulled     21m                  kubelet            Successfully pulled image "hetznercloud/hcloud-csi-driver:1.6.0" in 992.465684ms
  Normal   Created    21m                  kubelet            Created container hcloud-csi-driver
  Warning  Failed     21m                  kubelet            Error: ImagePullBackOff
  Warning  Failed     21m                  kubelet            Error: ImagePullBackOff
  Normal   BackOff    21m                  kubelet            Back-off pulling image "k8s.gcr.io/sig-storage/livenessprobe:v2.3.0"
  Normal   Pulling    21m (x2 over 21m)    kubelet            Pulling image "k8s.gcr.io/sig-storage/livenessprobe:v2.3.0"
  Warning  Failed     21m (x2 over 21m)    kubelet            Failed to pull image "k8s.gcr.io/sig-storage/livenessprobe:v2.3.0": rpc error: code = Unknown desc = failed to pull and unpack image "k8s.gcr.io/sig-storage/livenessprobe:v2.3.0": failed to resolve reference "k8s.gcr.io/sig-storage/livenessprobe:v2.3.0": pulling from host k8s.gcr.io failed with status code [manifests v2.3.0]: 403 Forbidden
  Warning  Failed     21m (x2 over 21m)    kubelet            Error: ErrImagePull
  Warning  Failed     21m (x2 over 21m)    kubelet            Error: ErrImagePull
  Warning  Failed     21m (x2 over 21m)    kubelet            Failed to pull image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0": rpc error: code = Unknown desc = failed to pull and unpack image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0": failed to resolve reference "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0": pulling from host k8s.gcr.io failed with status code [manifests v2.2.0]: 403 Forbidden
  Normal   Pulling    21m (x2 over 21m)    kubelet            Pulling image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0"
  Warning  Unhealthy  21m (x5 over 21m)    kubelet            Liveness probe failed: Get "http://10.42.6.18:9808/healthz": dial tcp 10.42.6.18:9808: connect: connection refused
  Normal   BackOff    75s (x115 over 21m)  kubelet            Back-off pulling image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0"

from terraform-hcloud-kube-hetzner.

mysticaltech commented on May 25, 2024

I think your best bet here is to edit hetzner/csi/kustomization.yaml and change the patch from patch_latest.yaml to patch.yaml.

And re-apply with kubectl apply -k hetzner/csi


mysticaltech commented on May 25, 2024

Or, on the contrary, try using patch_latest.yaml.


exocode commented on May 25, 2024

Hmm, maybe I don't get you.
This is the content of csi/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- "https://raw.githubusercontent.com/hetznercloud/csi-driver/v1.6.0/deploy/kubernetes/hcloud-csi.yml"

Should I add patch_latest.yaml?

Or should I rename it to patch.yaml?


exocode commented on May 25, 2024

It seems the deployment was already "latest"; nothing has changed:

❯ kubectl apply -k hetzner/csi
storageclass.storage.k8s.io/hcloud-volumes unchanged
serviceaccount/hcloud-csi unchanged
clusterrole.rbac.authorization.k8s.io/hcloud-csi unchanged
clusterrolebinding.rbac.authorization.k8s.io/hcloud-csi unchanged
service/hcloud-csi-controller-metrics unchanged
service/hcloud-csi-node-metrics unchanged
statefulset.apps/hcloud-csi-controller unchanged
daemonset.apps/hcloud-csi-node unchanged
csidriver.storage.k8s.io/csi.hetzner.cloud unchanged
  • Powering the server off/on didn't work either.
  • kubectl apply -k hetzner/csi didn't work.
  • Re-applying Terraform didn't work.

The only way I currently see is to downscale the cluster below the index of that worker (< 3 nodes) and then try to rescale. But that could blow up the remaining cluster nodes; they may crash too because of overscheduling.


mysticaltech commented on May 25, 2024

Ok, I see, that's not what I meant.

Create this kustomization.yaml in hetzner/csi, and re-apply as above:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- "https://raw.githubusercontent.com/hetznercloud/csi-driver/v1.6.0/deploy/kubernetes/hcloud-csi.yml"


patchesStrategicMerge:
- patch_latest.yaml
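For context, a patchesStrategicMerge entry like patch_latest.yaml typically just overrides the image fields of the base resource. Below is a minimal hypothetical sketch of such a patch file: the DaemonSet and container names are assumed from the upstream hcloud-csi.yml manifest, and the image tags are placeholders only, not verified working versions.

```yaml
# Hypothetical patch_latest.yaml sketch (assumption: DaemonSet and container
# names match the upstream hcloud-csi.yml manifest; tags are placeholders).
# With a strategic merge patch, only the fields listed here are merged over
# the base resource; everything else in the DaemonSet stays unchanged.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hcloud-csi-node
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: csi-node-driver-registrar
          image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0  # placeholder tag
        - name: liveness-probe
          image: k8s.gcr.io/sig-storage/livenessprobe:v2.3.0  # placeholder tag
```

Kustomize matches the containers by name, so after kubectl apply -k hetzner/csi the DaemonSet should roll out with the merged image tags.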


mysticaltech commented on May 25, 2024

Normally it should work, as the latest images are always available! 🤞


exocode commented on May 25, 2024

After about 15 minutes, agent-3 is still not there:

❯ k get nodes
NAME                  STATUS   ROLES                       AGE     VERSION
k3s-agent-0           Ready    <none>                      5d23h   v1.21.5+k3s2
k3s-agent-1           Ready    <none>                      5d23h   v1.21.5+k3s2
k3s-agent-2           Ready    <none>                      5d23h   v1.21.5+k3s2
k3s-agent-4           Ready    <none>                      14h     v1.21.5+k3s2
k3s-agent-5           Ready    <none>                      11h     v1.21.5+k3s2
k3s-control-plane-0   Ready    control-plane,etcd,master   5d23h   v1.21.5+k3s2
k3s-control-plane-1   Ready    control-plane,etcd,master   5d23h   v1.21.5+k3s2
k3s-control-plane-2   Ready    control-plane,etcd,master   5d23h   v1.21.5+k3s2

But in the UI it is visible, so somehow agent-3 is not provisioned correctly:

Bildschirmfoto 2022-02-10 um 14 51 04


mysticaltech commented on May 25, 2024

OK, very simple then. Forget Terraform for now. SSH into the machine; the command is in the readme.

Then see whether k3s is running with systemctl status k3s-agent; if not, try to start it with systemctl start k3s-agent; and last but not least, take a peek at the logs with journalctl -u k3s-agent.


exocode commented on May 25, 2024

☚ī¸ still not there where I should be :-(
any suggestions left?

thank you :-/

ssh [email protected] -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no

root@k3s-agent-3:~# systemctl status k3s-agent
Unit k3s-agent.service could not be found.

root@k3s-agent-3:/# systemctl start k3s-agent
Failed to start k3s-agent.service: Unit k3s-agent.service not found.

root@k3s-agent-3:~# systemctl start k3s-agent
Failed to start k3s-agent.service: Unit k3s-agent.service not found.

I also tried with the -3 suffix:

root@k3s-agent-3:/# systemctl status k3s-agent-3
Unit k3s-agent-3.service could not be found.

root@k3s-agent-3:/# systemctl start k3s-agent-3
Failed to start k3s-agent-3.service: Unit k3s-agent-3.service not found.

root@k3s-agent-3:/# journalctl -u k3s-agent-3
-- Logs begin at Thu 2022-02-10 13:27:38 UTC, end at Thu 2022-02-10 18:17:01 UTC. --
-- No entries --

This is the journalctl output (using grep):

root@k3s-agent-3:/# journalctl|grep k3s
Feb 10 13:27:39 k3s-agent-3 systemd-udevd[433]: Using default interface naming scheme 'v245'.
Feb 10 13:27:39 k3s-agent-3 kernel: virtio_net virtio1 eth0: renamed from enp1s0
Feb 10 13:27:39 k3s-agent-3 systemd-udevd[433]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 10 13:27:40 k3s-agent-3 systemd[1]: Condition check resulted in OpenVSwitch configuration for cleanup being skipped.
Feb 10 13:27:40 k3s-agent-3 systemd[1]: Condition check resulted in OpenVSwitch configuration for cleanup being skipped.
Feb 10 13:27:40 k3s-agent-3 systemd[1]: Condition check resulted in OpenVSwitch configuration for cleanup being skipped.
Feb 10 13:27:40 k3s-agent-3 cloud-init[550]: Cloud-init v. 21.4-0ubuntu1~20.04.1 running 'init-local' at Thu, 10 Feb 2022 13:27:39 +0000. Up 5.31 seconds.
Feb 10 13:27:40 k3s-agent-3 systemd[1]: Finished Initial cloud-init job (pre-networking).
Feb 10 13:27:40 k3s-agent-3 systemd[1]: Reached target Network (Pre).
Feb 10 13:27:40 k3s-agent-3 systemd[1]: Starting Network Service...
Feb 10 13:27:40 k3s-agent-3 systemd-networkd[587]: Enumeration completed
Feb 10 13:27:40 k3s-agent-3 systemd[1]: Started Network Service.
Feb 10 13:27:40 k3s-agent-3 systemd-networkd[587]: eth0: Interface name change detected, eth0 has been renamed to enp1s0.
Feb 10 13:27:40 k3s-agent-3 systemd[1]: Starting Wait for Network to be Configured...
Feb 10 13:27:40 k3s-agent-3 systemd[1]: Starting Network Name Resolution...
Feb 10 13:27:40 k3s-agent-3 systemd-timesyncd[543]: Network configuration changed, trying to establish connection.
Feb 10 13:27:40 k3s-agent-3 systemd-networkd[587]: enp1s0: Link DOWN
Feb 10 13:27:40 k3s-agent-3 systemd-networkd[587]: enp1s0: Lost carrier
Feb 10 13:27:40 k3s-agent-3 systemd-networkd[587]: enp1s0: Interface name change detected, enp1s0 has been renamed to eth0.
Feb 10 13:27:40 k3s-agent-3 systemd-networkd[587]: eth0: IPv6 successfully enabled
Feb 10 13:27:40 k3s-agent-3 systemd-networkd[587]: eth0: Link UP
Feb 10 13:27:40 k3s-agent-3 systemd-networkd[587]: eth0: Gained carrier
Feb 10 13:27:40 k3s-agent-3 systemd-resolved[597]: Positive Trust Anchors:
Feb 10 13:27:40 k3s-agent-3 systemd-resolved[597]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 10 13:27:40 k3s-agent-3 systemd-resolved[597]: Negative trust anchors: 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 10 13:27:40 k3s-agent-3 systemd-resolved[597]: Using system hostname 'k3s-agent-3'.
Feb 10 13:27:40 k3s-agent-3 systemd[1]: Started Network Name Resolution.
Feb 10 13:27:40 k3s-agent-3 systemd[1]: Reached target Network.
Feb 10 13:27:40 k3s-agent-3 systemd[1]: Reached target Host and Network Name Lookups.
Feb 10 13:27:40 k3s-agent-3 systemd-networkd[587]: eth0: DHCPv4 address 78.47.82.149/32 via 172.31.1.1
Feb 10 13:27:40 k3s-agent-3 systemd-timesyncd[543]: Network configuration changed, trying to establish connection.
Feb 10 13:27:42 k3s-agent-3 systemd-networkd[587]: eth0: Gained IPv6LL
Feb 10 13:27:42 k3s-agent-3 systemd-timesyncd[543]: Network configuration changed, trying to establish connection.
Feb 10 13:27:42 k3s-agent-3 systemd-networkd-wait-online[596]: managing: eth0
Feb 10 13:27:42 k3s-agent-3 systemd[1]: Finished Wait for Network to be Configured.
Feb 10 13:27:42 k3s-agent-3 systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: Cloud-init v. 21.4-0ubuntu1~20.04.1 running 'init' at Thu, 10 Feb 2022 13:27:42 +0000. Up 8.21 seconds.
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++++++++++++++
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: +--------+------+-----------------------------+-----------------+--------+-------------------+
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: | Device |  Up  |           Address           |       Mask      | Scope  |     Hw-Address    |
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: +--------+------+-----------------------------+-----------------+--------+-------------------+
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: |  eth0  | True |         78.47.82.149        | 255.255.255.255 | global | 96:00:01:15:32:5c |
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: |  eth0  | True |   2a01:4f8:c17:8d4a::1/64   |        .        | global | 96:00:01:15:32:5c |
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: |  eth0  | True | fe80::9400:1ff:fe15:325c/64 |        .        |  link  | 96:00:01:15:32:5c |
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: |   lo   | True |          127.0.0.1          |    255.0.0.0    |  host  |         .         |
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: |   lo   | True |           ::1/128           |        .        |  host  |         .         |
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: +--------+------+-----------------------------+-----------------+--------+-------------------+
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: +++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: +-------+-------------+------------+-----------------+-----------+-------+
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: | Route | Destination |  Gateway   |     Genmask     | Interface | Flags |
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: +-------+-------------+------------+-----------------+-----------+-------+
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: |   0   |   0.0.0.0   | 172.31.1.1 |     0.0.0.0     |    eth0   |   UG  |
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: |   1   |  172.31.1.1 |  0.0.0.0   | 255.255.255.255 |    eth0   |   UH  |
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: +-------+-------------+------------+-----------------+-----------+-------+
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: ++++++++++++++++++++++++Route IPv6 info+++++++++++++++++++++++++
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: +-------+------------------------+---------+-----------+-------+
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: | Route |      Destination       | Gateway | Interface | Flags |
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: +-------+------------------------+---------+-----------+-------+
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: |   1   | 2a01:4f8:c17:8d4a::/64 |    ::   |    eth0   |   U   |
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: |   2   |       fe80::/64        |    ::   |    eth0   |   U   |
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: |   3   |          ::/0          | fe80::1 |    eth0   |   UG  |
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: |   5   |         local          |    ::   |    eth0   |   U   |
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: |   6   |         local          |    ::   |    eth0   |   U   |
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: |   7   |       multicast        |    ::   |    eth0   |   U   |
Feb 10 13:27:42 k3s-agent-3 cloud-init[611]: ci-info: +-------+------------------------+---------+-----------+-------+
Feb 10 13:27:43 k3s-agent-3 systemd-udevd[601]: sda: Failed to process device, ignoring: Resource temporarily unavailable
Feb 10 13:27:43 k3s-agent-3 multipathd[511]: sda: failed to get udev uid: Invalid argument
Feb 10 13:27:43 k3s-agent-3 multipathd[511]: sda: failed to get unknown uid: Invalid argument
Feb 10 13:27:43 k3s-agent-3 systemd-udevd[603]: sda15: Failed to process device, ignoring: Resource temporarily unavailable
Feb 10 13:27:43 k3s-agent-3 systemd-udevd[601]: sda1: Failed to process device, ignoring: Resource temporarily unavailable
Feb 10 13:27:43 k3s-agent-3 systemd-udevd[589]: sda14: Failed to process device, ignoring: Resource temporarily unavailable
Feb 10 13:27:43 k3s-agent-3 kernel: EXT4-fs (sda1): resizing filesystem from 829696 to 19934715 blocks
Feb 10 13:27:43 k3s-agent-3 kernel: EXT4-fs (sda1): resized filesystem to 19934715
Feb 10 13:27:43 k3s-agent-3 passwd[675]: password for 'root' changed by 'root'
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: Generating public/private rsa key pair.
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: The key fingerprint is:
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: SHA256:wDueIJwelw5GLJS6L2acQsJXksAIqqUR1JQiHSETECA root@k3s-agent-3
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: The key's randomart image is:
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: +---[RSA 3072]----+
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |E**+.            |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |*X.o .           |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |* * . o          |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |oB + o o         |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |+.B * o S        |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |o= B o o         |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |+.+ . o          |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |o=.              |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |+.               |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: +----[SHA256]-----+
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: Generating public/private dsa key pair.
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: Your identification has been saved in /etc/ssh/ssh_host_dsa_key
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: The key fingerprint is:
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: SHA256:J1IUWSdyV4lY1aGS4jZV2QXpdFh9bewK9sWzouN/BFQ root@k3s-agent-3
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: The key's randomart image is:
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: +---[DSA 1024]----+
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |        +++o+=BEB|
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |       ..o.++o*.O|
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |        .. +.+ =.|
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |       .. o +...+|
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |      . S+.. o.oo|
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |       ..o.  .oo |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |            . o  |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |           o   . |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |          ..o..  |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: +----[SHA256]-----+
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: Generating public/private ecdsa key pair.
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: The key fingerprint is:
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: SHA256:9nmvMoENfJCpAY8fuDYw/OurljLp4YsFRkVVsYkxjyE root@k3s-agent-3
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: The key's randomart image is:
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: +---[ECDSA 256]---+
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |  .Eo*.o.o       |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: | .. .+O *        |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: | .+ oooB .       |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |.  + o..o .      |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |..  = . S=       |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |.. . o ...o.     |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: | o...     o..    |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |*.+.      o. .   |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |oBo.o.     o...  |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: +----[SHA256]-----+
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: Generating public/private ed25519 key pair.
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: The key fingerprint is:
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: SHA256:YxeV5xA22/suivm62mHu3D1qzWgJ/hsn+KVFngBM3zU root@k3s-agent-3
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: The key's randomart image is:
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: +--[ED25519 256]--+
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |          . =o E.|
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |         o +o=...|
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |          + o+o  |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |           o  .. |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |        S . . o  |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |       . o.. + o |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |         .+.o=* .|
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |         =.*=O=. |
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: |        .oXBX+.o.|
Feb 10 13:27:43 k3s-agent-3 cloud-init[611]: +----[SHA256]-----+
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Reached target Cloud-config availability.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Reached target Network is Online.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Reached target System Initialization.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started ACPI Events Check.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (file watch) being skipped.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Trigger to poll for Ubuntu Pro licenses (Only enabled on GCP LTS non-pro).
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Daily apt download activities.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Daily apt upgrade and clean activities.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Periodic ext4 Online Metadata Check for All Filesystems.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Discard unused blocks once a week.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Refresh fwupd metadata regularly.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Daily rotation of log files.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Daily man-db regeneration.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Message of the Day.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Ubuntu Advantage Timer for running repeated jobs.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Reached target Paths.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Reached target Timers.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Listening on ACPID Listen Socket.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Condition check resulted in Unix socket for apport crash forwarding being skipped.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Listening on cloud-init hotplug hook socket.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Listening on UUID daemon activation socket.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Reached target Sockets.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Reached target Basic System.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting Apply the settings specified in cloud-config...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started D-Bus System Message Bus.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Save initial kernel messages after boot.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting Remove Stale Online ext4 Metadata Check Snapshots...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Condition check resulted in getty on tty2-tty6 if dbus and logind are not available being skipped.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting Record successful boot for GRUB...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting Finds and configures Hetzner Cloud private network interfaces...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started irqbalance daemon.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting Dispatcher daemon for systemd-networkd...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Condition check resulted in Set the CPU Frequency Scaling governor being skipped.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Condition check resulted in Login to default iSCSI targets being skipped.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Reached target Remote File Systems (Pre).
Feb 10 13:27:43 k3s-agent-3 hc-ifscan[698]: Scanning for unconfigured interfaces
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Reached target Remote File Systems.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting LSB: automatic crash report generation...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting Deferred execution scheduler...
Feb 10 13:27:43 k3s-agent-3 hc-ifscan[700]: find: ‘/sys/class/net/en*’: No such file or directory
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting Availability of block devices...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Regular background program processing daemon.
Feb 10 13:27:43 k3s-agent-3 cron[705]: (CRON) INFO (pidfile fd = 3)
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Condition check resulted in Pollinate to seed the pseudo random number generator being skipped.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started QEMU Guest Agent.
Feb 10 13:27:43 k3s-agent-3 cron[705]: (CRON) INFO (Running @reboot jobs)
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Condition check resulted in fast remote file copy program daemon being skipped.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting System Logging Service...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Condition check resulted in Secure Boot updates for DB and DBX being skipped.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting OpenBSD Secure Shell server...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting Login Service...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting Permit User Sessions...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Condition check resulted in Ubuntu Advantage reboot cmds being skipped.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting Disk Manager...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: hc-net-scan.service: Succeeded.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Finished Finds and configures Hetzner Cloud private network interfaces.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Deferred execution scheduler.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Finished Availability of block devices.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Finished Permit User Sessions.
Feb 10 13:27:43 k3s-agent-3 udisksd[712]: udisks daemon version 2.8.4 starting
Feb 10 13:27:43 k3s-agent-3 systemd[1]: grub-common.service: Succeeded.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Finished Record successful boot for GRUB.
Feb 10 13:27:43 k3s-agent-3 systemd-logind[710]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 10 13:27:43 k3s-agent-3 systemd-logind[710]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 10 13:27:43 k3s-agent-3 systemd-logind[710]: New seat seat0.
Feb 10 13:27:43 k3s-agent-3 rsyslogd[708]: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd.  [v8.2001.0]
Feb 10 13:27:43 k3s-agent-3 rsyslogd[708]: rsyslogd's groupid changed to 110
Feb 10 13:27:43 k3s-agent-3 rsyslogd[708]: rsyslogd's userid changed to 104
Feb 10 13:27:43 k3s-agent-3 rsyslogd[708]: [origin software="rsyslogd" swVersion="8.2001.0" x-pid="708" x-info="https://www.rsyslog.com"] start
Feb 10 13:27:43 k3s-agent-3 sshd[715]: Server listening on 0.0.0.0 port 22.
Feb 10 13:27:43 k3s-agent-3 sshd[715]: Server listening on :: port 22.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started System Logging Service.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started OpenBSD Secure Shell server.
Feb 10 13:27:43 k3s-agent-3 apport[701]:  * Starting automatic crash report generation: apport
Feb 10 13:27:43 k3s-agent-3 apport[701]:    ...done.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting GRUB failed boot detection...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting Hold until boot process finishes up...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting Terminate Plymouth Boot Screen...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started LSB: automatic crash report generation.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: plymouth-quit-wait.service: Succeeded.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Finished Hold until boot process finishes up.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Serial Getty on ttyS0.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting Set console scheme...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: plymouth-quit.service: Succeeded.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Finished Terminate Plymouth Boot Screen.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: grub-initrd-fallback.service: Succeeded.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Finished GRUB failed boot detection.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Finished Set console scheme.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Created slice system-getty.slice.
Feb 10 13:27:43 k3s-agent-3 dbus-daemon[687]: [system] AppArmor D-Bus mediation is enabled
Feb 10 13:27:43 k3s-agent-3 networkd-dispatcher[697]: No valid path found for iwconfig
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Getty on tty1.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Reached target Login Prompts.
Feb 10 13:27:43 k3s-agent-3 dbus-daemon[687]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Login Service.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Unattended Upgrades Shutdown.
Feb 10 13:27:43 k3s-agent-3 udisksd[712]: failed to load module mdraid: libbd_mdraid.so.2: cannot open shared object file: No such file or directory
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Dispatcher daemon for systemd-networkd.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: e2scrub_reap.service: Succeeded.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Finished Remove Stale Online ext4 Metadata Check Snapshots.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Reached target Multi-User System.
Feb 10 13:27:43 k3s-agent-3 udisksd[712]: Failed to load the 'mdraid' libblockdev plugin
Feb 10 13:27:43 k3s-agent-3 dbus-daemon[687]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=712 comm="/usr/lib/udisks2/udisksd " label="unconfined")
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting Authorization Manager...
Feb 10 13:27:43 k3s-agent-3 polkitd[747]: started daemon version 0.105 using authority implementation `local' version `0.105'
Feb 10 13:27:43 k3s-agent-3 dbus-daemon[687]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Authorization Manager.
Feb 10 13:27:43 k3s-agent-3 udisksd[712]: Error probing device: Error sending ATA command IDENTIFY PACKET DEVICE to '/dev/sr0': Unexpected sense data returned:
Feb 10 13:27:43 k3s-agent-3 udisksd[712]: Error probing device: Error sending ATA command IDENTIFY PACKET DEVICE to '/dev/sr0': Unexpected sense data returned:
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Started Disk Manager.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Reached target Graphical Interface.
Feb 10 13:27:43 k3s-agent-3 udisksd[712]: Acquired the name org.freedesktop.UDisks2 on the system message bus
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Starting Update UTMP about System Runlevel Changes...
Feb 10 13:27:43 k3s-agent-3 systemd[1]: systemd-update-utmp-runlevel.service: Succeeded.
Feb 10 13:27:43 k3s-agent-3 systemd[1]: Finished Update UTMP about System Runlevel Changes.
Feb 10 13:27:44 k3s-agent-3 cloud-init[762]: Cloud-init v. 21.4-0ubuntu1~20.04.1 running 'modules:config' at Thu, 10 Feb 2022 13:27:43 +0000. Up 9.52 seconds.
Feb 10 13:27:44 k3s-agent-3 systemd[1]: Finished Apply the settings specified in cloud-config.
Feb 10 13:27:44 k3s-agent-3 systemd[1]: Starting Execute cloud user/final scripts...
Feb 10 13:27:44 k3s-agent-3 cloud-init[779]: #############################################################
Feb 10 13:27:44 k3s-agent-3 cloud-init[780]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Feb 10 13:27:44 k3s-agent-3 cloud-init[782]: 1024 SHA256:J1IUWSdyV4lY1aGS4jZV2QXpdFh9bewK9sWzouN/BFQ root@k3s-agent-3 (DSA)
Feb 10 13:27:44 k3s-agent-3 cloud-init[784]: 256 SHA256:9nmvMoENfJCpAY8fuDYw/OurljLp4YsFRkVVsYkxjyE root@k3s-agent-3 (ECDSA)
Feb 10 13:27:44 k3s-agent-3 cloud-init[786]: 256 SHA256:YxeV5xA22/suivm62mHu3D1qzWgJ/hsn+KVFngBM3zU root@k3s-agent-3 (ED25519)
Feb 10 13:27:44 k3s-agent-3 cloud-init[788]: 3072 SHA256:wDueIJwelw5GLJS6L2acQsJXksAIqqUR1JQiHSETECA root@k3s-agent-3 (RSA)
Feb 10 13:27:44 k3s-agent-3 cloud-init[789]: -----END SSH HOST KEY FINGERPRINTS-----
Feb 10 13:27:44 k3s-agent-3 cloud-init[790]: #############################################################
Feb 10 13:27:44 k3s-agent-3 cloud-init[775]: Cloud-init v. 21.4-0ubuntu1~20.04.1 running 'modules:final' at Thu, 10 Feb 2022 13:27:44 +0000. Up 10.25 seconds.
Feb 10 13:27:44 k3s-agent-3 cloud-init[775]: Cloud-init v. 21.4-0ubuntu1~20.04.1 finished at Thu, 10 Feb 2022 13:27:44 +0000. Datasource DataSourceHetzner.  Up 10.44 seconds
Feb 10 13:27:44 k3s-agent-3 systemd[1]: Finished Execute cloud user/final scripts.
Feb 10 13:27:44 k3s-agent-3 systemd[1]: Reached target Cloud-init target.
Feb 10 13:27:44 k3s-agent-3 systemd[1]: Startup finished in 3.928s (kernel) + 6.567s (userspace) = 10.496s.
Feb 10 13:27:44 k3s-agent-3 systemd[1]: dmesg.service: Succeeded.
Feb 10 13:28:09 k3s-agent-3 systemd[1]: systemd-fsckd.service: Succeeded.
Feb 10 13:28:22 k3s-agent-3 systemd-timesyncd[543]: Timed out waiting for reply from [2001:67c:1560:8003::c8]:123 (ntp.ubuntu.com).
Feb 10 13:28:32 k3s-agent-3 systemd-timesyncd[543]: Timed out waiting for reply from [2001:67c:1560:8003::c7]:123 (ntp.ubuntu.com).
Feb 10 13:28:33 k3s-agent-3 systemd-timesyncd[543]: Initial synchronization to time server 91.189.89.199:123 (ntp.ubuntu.com).
Feb 10 13:41:06 k3s-agent-3 sshd[815]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.237  user=root
Feb 10 13:41:08 k3s-agent-3 sshd[815]: Failed password for root from 92.255.85.237 port 44236 ssh2
Feb 10 13:41:10 k3s-agent-3 sshd[815]: Received disconnect from 92.255.85.237 port 44236:11: Bye Bye [preauth]
Feb 10 13:41:10 k3s-agent-3 sshd[815]: Disconnected from authenticating user root 92.255.85.237 port 44236 [preauth]
Feb 10 13:42:45 k3s-agent-3 systemd[1]: Starting Cleanup of Temporary Directories...
Feb 10 13:42:45 k3s-agent-3 systemd[1]: systemd-tmpfiles-clean.service: Succeeded.
Feb 10 13:42:45 k3s-agent-3 systemd[1]: Finished Cleanup of Temporary Directories.
Feb 10 13:50:55 k3s-agent-3 systemd[1]: Starting Ubuntu Advantage Timer for running repeated jobs...
Feb 10 13:50:55 k3s-agent-3 systemd[1]: ua-timer.service: Succeeded.
Feb 10 13:50:55 k3s-agent-3 systemd[1]: Finished Ubuntu Advantage Timer for running repeated jobs.
Feb 10 13:57:38 k3s-agent-3 sshd[874]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.237  user=root
Feb 10 13:57:40 k3s-agent-3 sshd[874]: Failed password for root from 92.255.85.237 port 59228 ssh2
Feb 10 13:57:42 k3s-agent-3 sshd[874]: Received disconnect from 92.255.85.237 port 59228:11: Bye Bye [preauth]
Feb 10 13:57:42 k3s-agent-3 sshd[874]: Disconnected from authenticating user root 92.255.85.237 port 59228 [preauth]
Feb 10 14:04:04 k3s-agent-3 sshd[879]: Connection closed by 114.67.126.221 port 59924 [preauth]
Feb 10 14:16:51 k3s-agent-3 sshd[884]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.135  user=root
Feb 10 14:16:53 k3s-agent-3 sshd[884]: Failed password for root from 92.255.85.135 port 48368 ssh2
Feb 10 14:16:55 k3s-agent-3 sshd[884]: Received disconnect from 92.255.85.135 port 48368:11: Bye Bye [preauth]
Feb 10 14:16:55 k3s-agent-3 sshd[884]: Disconnected from authenticating user root 92.255.85.135 port 48368 [preauth]
Feb 10 14:17:01 k3s-agent-3 CRON[886]: pam_unix(cron:session): session opened for user root by (uid=0)
Feb 10 14:17:01 k3s-agent-3 CRON[893]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Feb 10 14:17:01 k3s-agent-3 CRON[886]: pam_unix(cron:session): session closed for user root
Feb 10 14:31:10 k3s-agent-3 sshd[900]: Invalid user admin from 45.9.20.25 port 59412
Feb 10 14:31:10 k3s-agent-3 sshd[900]: pam_unix(sshd:auth): check pass; user unknown
Feb 10 14:31:10 k3s-agent-3 sshd[900]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.9.20.25
Feb 10 14:31:11 k3s-agent-3 sshd[900]: Failed password for invalid user admin from 45.9.20.25 port 59412 ssh2
Feb 10 14:31:15 k3s-agent-3 sshd[900]: pam_unix(sshd:auth): check pass; user unknown
Feb 10 14:31:18 k3s-agent-3 sshd[900]: Failed password for invalid user admin from 45.9.20.25 port 59412 ssh2
Feb 10 14:31:23 k3s-agent-3 sshd[900]: pam_unix(sshd:auth): check pass; user unknown
Feb 10 14:31:25 k3s-agent-3 sshd[900]: Failed password for invalid user admin from 45.9.20.25 port 59412 ssh2
Feb 10 14:31:28 k3s-agent-3 sshd[900]: Connection closed by invalid user admin 45.9.20.25 port 59412 [preauth]
Feb 10 14:31:28 k3s-agent-3 sshd[900]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.9.20.25
Feb 10 14:32:46 k3s-agent-3 sshd[904]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.237  user=root
Feb 10 14:32:48 k3s-agent-3 sshd[904]: Failed password for root from 92.255.85.237 port 17690 ssh2
Feb 10 14:32:48 k3s-agent-3 sshd[904]: Received disconnect from 92.255.85.237 port 17690:11: Bye Bye [preauth]
Feb 10 14:32:48 k3s-agent-3 sshd[904]: Disconnected from authenticating user root 92.255.85.237 port 17690 [preauth]
Feb 10 14:52:16 k3s-agent-3 sshd[912]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.237  user=root
Feb 10 14:52:19 k3s-agent-3 sshd[912]: Failed password for root from 92.255.85.237 port 57812 ssh2
Feb 10 14:52:20 k3s-agent-3 sshd[912]: Received disconnect from 92.255.85.237 port 57812:11: Bye Bye [preauth]
Feb 10 14:52:20 k3s-agent-3 sshd[912]: Disconnected from authenticating user root 92.255.85.237 port 57812 [preauth]
Feb 10 15:10:45 k3s-agent-3 sshd[919]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.135  user=root
Feb 10 15:10:47 k3s-agent-3 sshd[919]: Failed password for root from 92.255.85.135 port 56244 ssh2
Feb 10 15:10:49 k3s-agent-3 sshd[919]: Received disconnect from 92.255.85.135 port 56244:11: Bye Bye [preauth]
Feb 10 15:10:49 k3s-agent-3 sshd[919]: Disconnected from authenticating user root 92.255.85.135 port 56244 [preauth]
Feb 10 15:17:01 k3s-agent-3 CRON[923]: pam_unix(cron:session): session opened for user root by (uid=0)
Feb 10 15:17:01 k3s-agent-3 CRON[924]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Feb 10 15:17:01 k3s-agent-3 CRON[923]: pam_unix(cron:session): session closed for user root
Feb 10 15:22:11 k3s-agent-3 sshd[928]: Invalid user test from 176.111.173.245 port 59314
Feb 10 15:22:11 k3s-agent-3 sshd[928]: pam_unix(sshd:auth): check pass; user unknown
Feb 10 15:22:11 k3s-agent-3 sshd[928]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=176.111.173.245
Feb 10 15:22:13 k3s-agent-3 sshd[928]: Failed password for invalid user test from 176.111.173.245 port 59314 ssh2
Feb 10 15:22:16 k3s-agent-3 sshd[928]: pam_unix(sshd:auth): check pass; user unknown
Feb 10 15:22:18 k3s-agent-3 sshd[928]: Failed password for invalid user test from 176.111.173.245 port 59314 ssh2
Feb 10 15:22:23 k3s-agent-3 sshd[928]: pam_unix(sshd:auth): check pass; user unknown
Feb 10 15:22:25 k3s-agent-3 sshd[928]: Failed password for invalid user test from 176.111.173.245 port 59314 ssh2
Feb 10 15:22:28 k3s-agent-3 sshd[928]: Connection closed by invalid user test 176.111.173.245 port 59314 [preauth]
Feb 10 15:22:28 k3s-agent-3 sshd[928]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=176.111.173.245
Feb 10 15:27:06 k3s-agent-3 sshd[930]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.237  user=root
Feb 10 15:27:07 k3s-agent-3 sshd[930]: Failed password for root from 92.255.85.237 port 31758 ssh2
Feb 10 15:27:08 k3s-agent-3 sshd[930]: Received disconnect from 92.255.85.237 port 31758:11: Bye Bye [preauth]
Feb 10 15:27:08 k3s-agent-3 sshd[930]: Disconnected from authenticating user root 92.255.85.237 port 31758 [preauth]
Feb 10 15:41:59 k3s-agent-3 sshd[936]: Invalid user charles from 45.9.20.25 port 37286
Feb 10 15:41:59 k3s-agent-3 sshd[936]: pam_unix(sshd:auth): check pass; user unknown
Feb 10 15:41:59 k3s-agent-3 sshd[936]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.9.20.25
Feb 10 15:42:00 k3s-agent-3 sshd[936]: Failed password for invalid user charles from 45.9.20.25 port 37286 ssh2
Feb 10 15:42:00 k3s-agent-3 sshd[936]: Connection closed by invalid user charles 45.9.20.25 port 37286 [preauth]
Feb 10 15:43:58 k3s-agent-3 sshd[939]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.237  user=root
Feb 10 15:44:00 k3s-agent-3 sshd[939]: Failed password for root from 92.255.85.237 port 20764 ssh2
Feb 10 15:44:02 k3s-agent-3 sshd[939]: Received disconnect from 92.255.85.237 port 20764:11: Bye Bye [preauth]
Feb 10 15:44:02 k3s-agent-3 sshd[939]: Disconnected from authenticating user root 92.255.85.237 port 20764 [preauth]
Feb 10 15:47:10 k3s-agent-3 sshd[942]: Invalid user test from 193.169.255.199 port 17854
Feb 10 15:47:10 k3s-agent-3 sshd[942]: pam_unix(sshd:auth): check pass; user unknown
Feb 10 15:47:10 k3s-agent-3 sshd[942]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.169.255.199
Feb 10 15:47:11 k3s-agent-3 sshd[942]: Failed password for invalid user test from 193.169.255.199 port 17854 ssh2
Feb 10 15:47:15 k3s-agent-3 sshd[942]: pam_unix(sshd:auth): check pass; user unknown
Feb 10 15:47:17 k3s-agent-3 sshd[942]: Failed password for invalid user test from 193.169.255.199 port 17854 ssh2
Feb 10 15:47:22 k3s-agent-3 sshd[942]: pam_unix(sshd:auth): check pass; user unknown
Feb 10 15:47:23 k3s-agent-3 sshd[942]: Failed password for invalid user test from 193.169.255.199 port 17854 ssh2
Feb 10 15:47:27 k3s-agent-3 sshd[942]: Connection closed by invalid user test 193.169.255.199 port 17854 [preauth]
Feb 10 15:47:27 k3s-agent-3 sshd[942]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.169.255.199
Feb 10 15:56:13 k3s-agent-3 sshd[947]: error: kex_exchange_identification: read: Connection reset by peer
Feb 10 15:56:26 k3s-agent-3 sshd[949]: Connection reset by 89.185.85.100 port 38550 [preauth]
Feb 10 15:56:26 k3s-agent-3 sshd[948]: Connection reset by 89.185.85.100 port 38528 [preauth]
Feb 10 16:01:24 k3s-agent-3 sshd[955]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.237  user=root
Feb 10 16:01:26 k3s-agent-3 sshd[955]: Failed password for root from 92.255.85.237 port 58022 ssh2
Feb 10 16:01:26 k3s-agent-3 sshd[955]: Received disconnect from 92.255.85.237 port 58022:11: Bye Bye [preauth]
Feb 10 16:01:26 k3s-agent-3 sshd[955]: Disconnected from authenticating user root 92.255.85.237 port 58022 [preauth]
Feb 10 16:17:01 k3s-agent-3 CRON[960]: pam_unix(cron:session): session opened for user root by (uid=0)
Feb 10 16:17:01 k3s-agent-3 CRON[961]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Feb 10 16:17:01 k3s-agent-3 CRON[960]: pam_unix(cron:session): session closed for user root
Feb 10 16:19:46 k3s-agent-3 sshd[964]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.135  user=root
Feb 10 16:19:48 k3s-agent-3 sshd[964]: Failed password for root from 92.255.85.135 port 31258 ssh2
Feb 10 16:19:48 k3s-agent-3 sshd[964]: Received disconnect from 92.255.85.135 port 31258:11: Bye Bye [preauth]
Feb 10 16:19:48 k3s-agent-3 sshd[964]: Disconnected from authenticating user root 92.255.85.135 port 31258 [preauth]
Feb 10 16:38:54 k3s-agent-3 sshd[970]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.237  user=root
Feb 10 16:38:55 k3s-agent-3 sshd[970]: Failed password for root from 92.255.85.237 port 12074 ssh2
Feb 10 16:38:56 k3s-agent-3 sshd[970]: Received disconnect from 92.255.85.237 port 12074:11: Bye Bye [preauth]
Feb 10 16:38:56 k3s-agent-3 sshd[970]: Disconnected from authenticating user root 92.255.85.237 port 12074 [preauth]
Feb 10 16:46:45 k3s-agent-3 sshd[974]: error: kex_exchange_identification: Connection closed by remote host
Feb 10 16:55:37 k3s-agent-3 sshd[978]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.135  user=root
Feb 10 16:55:39 k3s-agent-3 sshd[978]: Failed password for root from 92.255.85.135 port 27498 ssh2
Feb 10 16:55:41 k3s-agent-3 sshd[978]: Received disconnect from 92.255.85.135 port 27498:11: Bye Bye [preauth]
Feb 10 16:55:41 k3s-agent-3 sshd[978]: Disconnected from authenticating user root 92.255.85.135 port 27498 [preauth]
Feb 10 17:08:18 k3s-agent-3 sshd[984]: Invalid user admin from 45.9.20.25 port 28014
Feb 10 17:08:18 k3s-agent-3 sshd[984]: pam_unix(sshd:auth): check pass; user unknown
Feb 10 17:08:18 k3s-agent-3 sshd[984]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.9.20.25
Feb 10 17:08:19 k3s-agent-3 sshd[984]: Failed password for invalid user admin from 45.9.20.25 port 28014 ssh2
Feb 10 17:08:23 k3s-agent-3 sshd[984]: pam_unix(sshd:auth): check pass; user unknown
Feb 10 17:08:24 k3s-agent-3 sshd[984]: Failed password for invalid user admin from 45.9.20.25 port 28014 ssh2
Feb 10 17:08:28 k3s-agent-3 sshd[984]: pam_unix(sshd:auth): check pass; user unknown
Feb 10 17:08:30 k3s-agent-3 sshd[984]: Failed password for invalid user admin from 45.9.20.25 port 28014 ssh2
Feb 10 17:08:34 k3s-agent-3 sshd[984]: Connection closed by invalid user admin 45.9.20.25 port 28014 [preauth]
Feb 10 17:08:34 k3s-agent-3 sshd[984]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.9.20.25
Feb 10 17:13:12 k3s-agent-3 sshd[986]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.135  user=root
Feb 10 17:13:15 k3s-agent-3 sshd[986]: Failed password for root from 92.255.85.135 port 47320 ssh2
Feb 10 17:13:17 k3s-agent-3 sshd[986]: Received disconnect from 92.255.85.135 port 47320:11: Bye Bye [preauth]
Feb 10 17:13:17 k3s-agent-3 sshd[986]: Disconnected from authenticating user root 92.255.85.135 port 47320 [preauth]
Feb 10 17:17:01 k3s-agent-3 CRON[989]: pam_unix(cron:session): session opened for user root by (uid=0)
Feb 10 17:17:01 k3s-agent-3 CRON[990]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Feb 10 17:17:01 k3s-agent-3 CRON[989]: pam_unix(cron:session): session closed for user root
Feb 10 17:31:15 k3s-agent-3 sshd[996]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.237  user=root
Feb 10 17:31:17 k3s-agent-3 sshd[996]: Failed password for root from 92.255.85.237 port 27598 ssh2
Feb 10 17:31:17 k3s-agent-3 sshd[996]: Received disconnect from 92.255.85.237 port 27598:11: Bye Bye [preauth]
Feb 10 17:31:17 k3s-agent-3 sshd[996]: Disconnected from authenticating user root 92.255.85.237 port 27598 [preauth]
Feb 10 17:37:59 k3s-agent-3 sshd[1000]: Accepted publickey for root from 185.246.208.180 port 56784 ssh2: RSA SHA256:mHFQxRrA2VhsDKqc/lwpMxeEk2EgXDuMt1wFMuCzHlA
Feb 10 17:37:59 k3s-agent-3 sshd[1000]: pam_unix(sshd:session): session opened for user root by (uid=0)
Feb 10 17:37:59 k3s-agent-3 systemd[1]: Created slice User Slice of UID 0.
Feb 10 17:37:59 k3s-agent-3 systemd[1]: Starting User Runtime Directory /run/user/0...
Feb 10 17:37:59 k3s-agent-3 systemd-logind[710]: New session 5 of user root.
Feb 10 17:37:59 k3s-agent-3 systemd[1]: Finished User Runtime Directory /run/user/0.
Feb 10 17:37:59 k3s-agent-3 systemd[1]: Starting User Manager for UID 0...
Feb 10 17:37:59 k3s-agent-3 systemd[1018]: pam_unix(systemd-user:session): session opened for user root by (uid=0)
Feb 10 17:37:59 k3s-agent-3 systemd[1018]: Reached target Paths.
Feb 10 17:37:59 k3s-agent-3 systemd[1018]: Reached target Timers.
Feb 10 17:37:59 k3s-agent-3 systemd[1018]: Starting D-Bus User Message Bus Socket.
Feb 10 17:37:59 k3s-agent-3 systemd[1018]: Listening on GnuPG network certificate management daemon.
Feb 10 17:37:59 k3s-agent-3 systemd[1018]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
Feb 10 17:37:59 k3s-agent-3 systemd[1018]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Feb 10 17:37:59 k3s-agent-3 systemd[1018]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Feb 10 17:37:59 k3s-agent-3 systemd[1018]: Listening on GnuPG cryptographic agent and passphrase cache.
Feb 10 17:37:59 k3s-agent-3 systemd[1018]: Listening on debconf communication socket.
Feb 10 17:37:59 k3s-agent-3 systemd[1018]: Listening on D-Bus User Message Bus Socket.
Feb 10 17:37:59 k3s-agent-3 systemd[1018]: Reached target Sockets.
Feb 10 17:37:59 k3s-agent-3 systemd[1018]: Reached target Basic System.
Feb 10 17:37:59 k3s-agent-3 systemd[1018]: Reached target Main User Target.
Feb 10 17:37:59 k3s-agent-3 systemd[1]: Started User Manager for UID 0.
Feb 10 17:37:59 k3s-agent-3 systemd[1018]: Startup finished in 58ms.
Feb 10 17:37:59 k3s-agent-3 systemd[1]: Started Session 5 of user root.
Feb 10 17:48:47 k3s-agent-3 sshd[1205]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.237  user=root
Feb 10 17:48:49 k3s-agent-3 sshd[1205]: Failed password for root from 92.255.85.237 port 38130 ssh2
Feb 10 17:48:49 k3s-agent-3 sshd[1205]: Received disconnect from 92.255.85.237 port 38130:11: Bye Bye [preauth]
Feb 10 17:48:49 k3s-agent-3 sshd[1205]: Disconnected from authenticating user root 92.255.85.237 port 38130 [preauth]
Feb 10 18:08:15 k3s-agent-3 sshd[1214]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.237  user=root
Feb 10 18:08:18 k3s-agent-3 sshd[1214]: Failed password for root from 92.255.85.237 port 26082 ssh2
Feb 10 18:08:19 k3s-agent-3 sshd[1214]: Received disconnect from 92.255.85.237 port 26082:11: Bye Bye [preauth]
Feb 10 18:08:19 k3s-agent-3 sshd[1214]: Disconnected from authenticating user root 92.255.85.237 port 26082 [preauth]
Feb 10 18:10:31 k3s-agent-3 sshd[1237]: error: kex_exchange_identification: Connection closed by remote host
Feb 10 18:12:01 k3s-agent-3 sshd[1269]: Unable to negotiate with 61.177.172.174 port 24414: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth]
Feb 10 18:16:19 k3s-agent-3 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 1359 (find)
Feb 10 18:16:19 k3s-agent-3 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Feb 10 18:16:19 k3s-agent-3 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Feb 10 18:17:01 k3s-agent-3 CRON[1383]: pam_unix(cron:session): session opened for user root by (uid=0)
Feb 10 18:17:01 k3s-agent-3 CRON[1384]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Feb 10 18:17:01 k3s-agent-3 CRON[1383]: pam_unix(cron:session): session closed for user root

from terraform-hcloud-kube-hetzner.

mysticaltech commented on May 25, 2024

My bad, the instructions I shared above are for MicroOS.

For k3os, the old system, please try to find out the name of the k3s systemd unit, if there is one. Maybe just try systemctl status k3s, and journalctl -u k3s
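The check above can be scripted so you don't have to probe unit names one by one. This is only a sketch: the unit names k3s-agent and k3os-config are assumptions, not something confirmed on this node; adjust the list to whatever systemctl actually shows on your install.

```shell
# Probe a few candidate unit names; "k3s-agent" and "k3os-config" are
# guesses, only "k3s" was suggested above.
for unit in k3s k3s-agent k3os-config; do
  if systemctl status "$unit" --no-pager >/dev/null 2>&1; then
    echo "unit active: $unit"
    # Show the tail of the unit's journal for a quick look at recent errors.
    journalctl -u "$unit" --no-pager | tail -n 20
  else
    # Note: systemctl status also exits non-zero for a unit that exists
    # but is inactive/failed, so "not found" here really means "not running".
    echo "unit not found or not running: $unit"
  fi
done
```

On a node where the install never happened (as suspected here), all three lines should report "not found or not running".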

exocode commented on May 25, 2024

hmmm.. 🤷🏻‍♂️

root@k3s-agent-3:~# uname -a
Linux k3s-agent-3 5.4.0-96-generic #109-Ubuntu SMP Wed Jan 12 16:49:16 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

root@k3s-agent-3:~# systemctl status k3s
Unit k3s.service could not be found.

root@k3s-agent-3:~# journalctl -u k3s
-- Logs begin at Thu 2022-02-10 18:22:47 UTC, end at Thu 2022-02-10 20:40:56 UTC. --
-- No entries --

There is no k3*

root@k3s-agent-3:~# find / -name "*k3*"
/usr/share/doc/libnetfilter-conntrack3
/usr/share/bash-completion/completions/freeciv-gtk3
/usr/share/bash-completion/completions/k3b
/usr/share/terminfo/s/synertek380
/usr/share/terminfo/v/vt320-k3
/usr/share/terminfo/v/vt320-k311
/usr/lib/python3/dist-packages/DistUpgrade/SimpleGtk3builderApp.py
/usr/lib/python3/dist-packages/DistUpgrade/__pycache__/SimpleGtk3builderApp.cpython-38.pyc
/usr/lib/python3/dist-packages/DistUpgrade/__pycache__/DistUpgradeViewGtk3.cpython-38.pyc
/usr/lib/python3/dist-packages/DistUpgrade/DistUpgradeViewGtk3.py
/usr/lib/python3/dist-packages/twisted/internet/__pycache__/gtk3reactor.cpython-38.pyc
/usr/lib/python3/dist-packages/twisted/internet/gtk3reactor.py
/sys/class/ata_link/link3
/sys/devices/pci0000:00/0000:00:1f.2/ata3/link3
/sys/devices/pci0000:00/0000:00:1f.2/ata3/link3/ata_link/link3
/sys/devices/system/machinecheck/machinecheck2/bank3
/sys/devices/system/machinecheck/machinecheck0/bank3
/sys/devices/system/machinecheck/machinecheck1/bank3
/var/lib/dpkg/info/libnetfilter-conntrack3:amd64.shlibs
/var/lib/dpkg/info/libnetfilter-conntrack3:amd64.triggers
/var/lib/dpkg/info/libnetfilter-conntrack3:amd64.md5sums
/var/lib/dpkg/info/libnetfilter-conntrack3:amd64.list
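A quicker way to compare this node against the healthy agents is to check for the filesystem markers a provisioned node has. /k3os is the directory observed on the working control planes and agents earlier in this thread; the other two paths are assumptions about where a k3os install might place files, so treat them as illustrative only.

```shell
# /k3os was seen on healthy nodes; /sbin/k3os and /usr/share/rancher/k3s
# are assumed locations, not confirmed for this setup.
for p in /k3os /sbin/k3os /usr/share/rancher/k3s; do
  if [ -e "$p" ]; then
    echo "present: $p"
  else
    echo "missing: $p"
  fi
done
```

If all of these are missing while they exist on agent-4 and agent-5, that confirms the installer never ran on agent-3 rather than failing partway through.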

mysticaltech commented on May 25, 2024

Basically, you just need to find out why it's not joining!

mysticaltech commented on May 25, 2024

Just found out here that the logs are in /var/log/k3s-service.log

See rancher/k3os#433
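Given that, a defensive check on agent-3 might look like the following sketch. The log path is the one reported in rancher/k3os#433; the interpretation of a missing file is an assumption based on this thread's suspicion that the install never ran.

```shell
# /var/log/k3s-service.log is where k3os reportedly writes k3s output
# (per rancher/k3os#433). Its absence is itself informative.
LOG=/var/log/k3s-service.log
if [ -f "$LOG" ]; then
  tail -n 100 "$LOG"
else
  echo "no $LOG -- k3os install likely never started on this node"
fi
```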

exocode commented on May 25, 2024

There is no process with "k3" in its name. I am not familiar with k3os, but I think the installation process didn't work at all:

root@k3s-agent-3:~# ls -al /var/log
total 2048
drwxrwxr-x   8 root      syslog            4096 Jan 24 10:00 .
drwxr-xr-x  12 root      root              4096 Jan 24 09:57 ..
-rw-r--r--   1 root      root                 0 Jan 24 10:00 alternatives.log
drwxr-xr-x   2 root      root              4096 Feb 10 18:46 apt
-rw-r-----   1 syslog    adm             797869 Feb 10 20:47 auth.log
-rw-r--r--   1 root      root                 0 Jan 24 10:00 bootstrap.log
-rw-rw----   1 root      utmp            844416 Feb 10 20:47 btmp
-rw-r--r--   1 syslog    adm             104390 Feb 10 18:22 cloud-init.log
-rw-r-----   1 root      adm               5083 Feb 10 18:22 cloud-init-output.log
drwxr-xr-x   2 root      root              4096 Aug  4  2021 dist-upgrade
-rw-r--r--   1 root      adm              60692 Feb 10 18:22 dmesg
-rw-r--r--   1 root      root               734 Feb 10 18:46 dpkg.log
-rw-r--r--   1 root      root                 0 Jan 24 10:00 faillog
drwxr-sr-x+  4 root      systemd-journal   4096 Feb 10 18:22 journal
-rw-r-----   1 syslog    adm              76775 Feb 10 18:22 kern.log
drwxr-xr-x   2 landscape landscape         4096 Jan 24 09:56 landscape
-rw-rw-r--   1 root      utmp               292 Feb 10 20:40 lastlog
drwx------   2 root      root              4096 Aug 24 08:42 private
-rw-r-----   1 syslog    adm             133855 Feb 10 20:40 syslog
-rw-------   1 root      root                 0 Aug 24 08:43 ubuntu-advantage.log
-rw-------   1 root      root               157 Feb 10 18:58 ubuntu-advantage-timer.log
drwxr-x---   2 root      adm               4096 Jan 24 09:56 unattended-upgrades
-rw-rw-r--   1 root      utmp              4608 Feb 10 20:40 wtmp

Only cloud-init.log contains anything related to "k3s":

2022-02-10 18:22:49,028 - util.py[DEBUG]: Cloud-init v. 21.4-0ubuntu1~20.04.1 running 'init-local' at Thu, 10 Feb 2022 18:22:48 +0000. Up 5.80 seconds.
2022-02-10 18:22:49,029 - main.py[DEBUG]: No kernel command line url found.
2022-02-10 18:22:49,029 - main.py[DEBUG]: Closing stdin.
2022-02-10 18:22:49,030 - util.py[DEBUG]: Writing to /var/log/cloud-init.log - ab: [644] 0 bytes
2022-02-10 18:22:49,031 - util.py[DEBUG]: Changing the ownership of /var/log/cloud-init.log to 104:4
2022-02-10 18:22:49,031 - util.py[DEBUG]: Attempting to remove /var/lib/cloud/instance/boot-finished
2022-02-10 18:22:49,031 - util.py[DEBUG]: Attempting to remove /var/lib/cloud/data/no-net
2022-02-10 18:22:49,032 - handlers.py[DEBUG]: start: init-local/check-cache: attempting to read from cache [check]
2022-02-10 18:22:49,032 - util.py[DEBUG]: Reading from /var/lib/cloud/instance/obj.pkl (quiet=False)
2022-02-10 18:22:49,032 - util.py[DEBUG]: Read 8494 bytes from /var/lib/cloud/instance/obj.pkl
2022-02-10 18:22:49,035 - stages.py[DEBUG]: cache invalid in datasource: DataSourceNone
2022-02-10 18:22:49,036 - handlers.py[DEBUG]: finish: init-local/check-cache: SUCCESS: cache invalid in datasource: DataSourceNone
2022-02-10 18:22:49,036 - util.py[DEBUG]: Attempting to remove /var/lib/cloud/instance
2022-02-10 18:22:49,037 - stages.py[DEBUG]: Using distro class <class 'cloudinit.distros.ubuntu.Distro'>
2022-02-10 18:22:49,037 - __init__.py[DEBUG]: Looking for data source in: ['Hetzner', 'None'], via packages ['', 'cloudinit.sources'] that matches dependencies ['FILESYSTEM']
2022-02-10 18:22:49,040 - __init__.py[DEBUG]: Searching for local data source in: ['DataSourceHetzner']
2022-02-10 18:22:49,040 - handlers.py[DEBUG]: start: init-local/search-Hetzner: searching for local data from DataSourceHetzner
2022-02-10 18:22:49,040 - __init__.py[DEBUG]: Seeing if we can get any data from <class 'cloudinit.sources.DataSourceHetzner.DataSourceHetzner'>
2022-02-10 18:22:49,040 - __init__.py[DEBUG]: Update datasource metadata and network config due to events: boot-new-instance
2022-02-10 18:22:49,040 - dmi.py[DEBUG]: querying dmi data /sys/class/dmi/id/sys_vendor
2022-02-10 18:22:49,040 - dmi.py[DEBUG]: querying dmi data /sys/class/dmi/id/product_serial
2022-02-10 18:22:49,040 - DataSourceHetzner.py[DEBUG]: Running on Hetzner Cloud: serial=17883645
2022-02-10 18:22:49,040 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/name_assign_type (quiet=False)
2022-02-10 18:22:49,041 - util.py[DEBUG]: Read 2 bytes from /sys/class/net/enp1s0/name_assign_type
2022-02-10 18:22:49,041 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/carrier (quiet=False)
2022-02-10 18:22:49,041 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/dormant (quiet=False)
2022-02-10 18:22:49,041 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/operstate (quiet=False)
2022-02-10 18:22:49,041 - util.py[DEBUG]: Read 5 bytes from /sys/class/net/enp1s0/operstate
2022-02-10 18:22:49,041 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/address (quiet=False)
2022-02-10 18:22:49,041 - util.py[DEBUG]: Read 18 bytes from /sys/class/net/enp1s0/address
2022-02-10 18:22:49,041 - url_helper.py[DEBUG]: [0/1] open 'http://169.254.169.254/hetzner/v1/metadata/instance-id' with {'url': 'http://169.254.169.254/hetzner/v1/metadata/instance-id', 'allow_redirects': True, 'method': 'GET', 'timeout': 5.0, 'headers': {'User-Agent': 'Cloud-Init/21.4-0ubuntu1~20.04.1'}} configuration
2022-02-10 18:22:49,043 - dhcp.py[DEBUG]: Performing a dhcp discovery on enp1s0
2022-02-10 18:22:49,044 - util.py[DEBUG]: Copying /usr/sbin/dhclient to /var/tmp/cloud-init/cloud-init-dhcp-vgl3ieal/dhclient
2022-02-10 18:22:49,045 - subp.py[DEBUG]: Running command ['ip', 'link', 'set', 'dev', 'enp1s0', 'up'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,052 - subp.py[DEBUG]: Running command ['/var/tmp/cloud-init/cloud-init-dhcp-vgl3ieal/dhclient', '-1', '-v', '-lf', '/var/tmp/cloud-init/cloud-init-dhcp-vgl3ieal/dhcp.leases', '-pf', '/var/tmp/cloud-init/cloud-init-dhcp-vgl3ieal/dhclient.pid', 'enp1s0', '-sf', '/bin/true'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,166 - util.py[DEBUG]: All files appeared after 0 seconds: ['/var/tmp/cloud-init/cloud-init-dhcp-vgl3ieal/dhclient.pid', '/var/tmp/cloud-init/cloud-init-dhcp-vgl3ieal/dhcp.leases']
2022-02-10 18:22:49,166 - util.py[DEBUG]: Reading from /var/tmp/cloud-init/cloud-init-dhcp-vgl3ieal/dhclient.pid (quiet=False)
2022-02-10 18:22:49,166 - util.py[DEBUG]: Read 4 bytes from /var/tmp/cloud-init/cloud-init-dhcp-vgl3ieal/dhclient.pid
2022-02-10 18:22:49,166 - util.py[DEBUG]: Reading from /proc/555/stat (quiet=True)
2022-02-10 18:22:49,167 - util.py[DEBUG]: Read 305 bytes from /proc/555/stat
2022-02-10 18:22:49,167 - dhcp.py[DEBUG]: killing dhclient with pid=555
2022-02-10 18:22:49,167 - util.py[DEBUG]: Reading from /var/tmp/cloud-init/cloud-init-dhcp-vgl3ieal/dhcp.leases (quiet=False)
2022-02-10 18:22:49,168 - util.py[DEBUG]: Read 543 bytes from /var/tmp/cloud-init/cloud-init-dhcp-vgl3ieal/dhcp.leases
2022-02-10 18:22:49,168 - dhcp.py[DEBUG]: Received dhcp lease on enp1s0 for 78.46.194.159/255.255.255.255
2022-02-10 18:22:49,168 - url_helper.py[DEBUG]: [0/1] open 'http://169.254.169.254/hetzner/v1/metadata/instance-id' with {'url': 'http://169.254.169.254/hetzner/v1/metadata/instance-id', 'allow_redirects': True, 'method': 'GET', 'timeout': 5.0, 'headers': {'User-Agent': 'Cloud-Init/21.4-0ubuntu1~20.04.1'}} configuration
2022-02-10 18:22:49,171 - __init__.py[DEBUG]: Attempting setup of ephemeral network on enp1s0 with 78.46.194.159/32 brd 78.46.194.159
2022-02-10 18:22:49,171 - subp.py[DEBUG]: Running command ['ip', '-family', 'inet', 'addr', 'add', '78.46.194.159/32', 'broadcast', '78.46.194.159', 'dev', 'enp1s0'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,174 - subp.py[DEBUG]: Running command ['ip', '-family', 'inet', 'link', 'set', 'dev', 'enp1s0', 'up'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,177 - subp.py[DEBUG]: Running command ['ip', '-4', 'route', 'add', '172.31.1.1/32', 'dev', 'enp1s0'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,181 - subp.py[DEBUG]: Running command ['ip', '-4', 'route', 'add', '0.0.0.0/0', 'via', '172.31.1.1', 'dev', 'enp1s0'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,184 - url_helper.py[DEBUG]: [0/61] open 'http://169.254.169.254/hetzner/v1/metadata' with {'url': 'http://169.254.169.254/hetzner/v1/metadata', 'allow_redirects': True, 'method': 'GET', 'timeout': 2.0, 'headers': {'User-Agent': 'Cloud-Init/21.4-0ubuntu1~20.04.1'}} configuration
2022-02-10 18:22:49,190 - url_helper.py[DEBUG]: Read from http://169.254.169.254/hetzner/v1/metadata (200, 5867b) after 1 attempts
2022-02-10 18:22:49,190 - util.py[DEBUG]: Attempting to load yaml from string of length 5867 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:49,201 - url_helper.py[DEBUG]: [0/61] open 'http://169.254.169.254/hetzner/v1/userdata' with {'url': 'http://169.254.169.254/hetzner/v1/userdata', 'allow_redirects': True, 'method': 'GET', 'timeout': 2.0, 'headers': {'User-Agent': 'Cloud-Init/21.4-0ubuntu1~20.04.1'}} configuration
2022-02-10 18:22:49,202 - url_helper.py[DEBUG]: Read from http://169.254.169.254/hetzner/v1/userdata (200, 0b) after 1 attempts
2022-02-10 18:22:49,202 - subp.py[DEBUG]: Running command ['ip', '-4', 'route', 'del', '0.0.0.0/0', 'via', '172.31.1.1', 'dev', 'enp1s0'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,205 - subp.py[DEBUG]: Running command ['ip', '-4', 'route', 'del', '172.31.1.1/32', 'dev', 'enp1s0'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,208 - subp.py[DEBUG]: Running command ['ip', '-family', 'inet', 'link', 'set', 'dev', 'enp1s0', 'down'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,210 - subp.py[DEBUG]: Running command ['ip', '-family', 'inet', 'addr', 'del', '78.46.194.159/32', 'dev', 'enp1s0'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,215 - atomic_helper.py[DEBUG]: Atomically writing to file /run/cloud-init/instance-data-sensitive.json (via temporary file /run/cloud-init/tmp7eii89_v) - w: [600] 6667 bytes/chars
2022-02-10 18:22:49,216 - atomic_helper.py[DEBUG]: Atomically writing to file /run/cloud-init/instance-data.json (via temporary file /run/cloud-init/tmpvwq0gm7g) - w: [644] 2894 bytes/chars
2022-02-10 18:22:49,216 - handlers.py[DEBUG]: finish: init-local/search-Hetzner: SUCCESS: found local data from DataSourceHetzner
2022-02-10 18:22:49,216 - stages.py[INFO]: Loaded datasource DataSourceHetzner - DataSourceHetzner
2022-02-10 18:22:49,217 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg (quiet=False)
2022-02-10 18:22:49,217 - util.py[DEBUG]: Read 3807 bytes from /etc/cloud/cloud.cfg
2022-02-10 18:22:49,217 - util.py[DEBUG]: Attempting to load yaml from string of length 3807 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:49,232 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/90_dpkg.cfg (quiet=False)
2022-02-10 18:22:49,232 - util.py[DEBUG]: Read 90 bytes from /etc/cloud/cloud.cfg.d/90_dpkg.cfg
2022-02-10 18:22:49,232 - util.py[DEBUG]: Attempting to load yaml from string of length 90 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:49,232 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/90-hetznercloud.cfg (quiet=False)
2022-02-10 18:22:49,232 - util.py[DEBUG]: Read 3561 bytes from /etc/cloud/cloud.cfg.d/90-hetznercloud.cfg
2022-02-10 18:22:49,232 - util.py[DEBUG]: Attempting to load yaml from string of length 3561 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:49,242 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/05_logging.cfg (quiet=False)
2022-02-10 18:22:49,242 - util.py[DEBUG]: Read 2070 bytes from /etc/cloud/cloud.cfg.d/05_logging.cfg
2022-02-10 18:22:49,242 - util.py[DEBUG]: Attempting to load yaml from string of length 2070 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:49,245 - util.py[DEBUG]: Reading from /run/cloud-init/cloud.cfg (quiet=False)
2022-02-10 18:22:49,245 - util.py[DEBUG]: Attempting to load yaml from string of length 0 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:49,245 - util.py[DEBUG]: loaded blob returned None, returning default.
2022-02-10 18:22:49,246 - util.py[DEBUG]: Attempting to remove /var/lib/cloud/instance
2022-02-10 18:22:49,246 - util.py[DEBUG]: Creating symbolic link from '/var/lib/cloud/instance' => '/var/lib/cloud/instances/17883645'
2022-02-10 18:22:49,246 - util.py[DEBUG]: Reading from /var/lib/cloud/instances/17883645/datasource (quiet=False)
2022-02-10 18:22:49,246 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/datasource - wb: [644] 37 bytes
2022-02-10 18:22:49,247 - util.py[DEBUG]: Writing to /var/lib/cloud/data/previous-datasource - wb: [644] 37 bytes
2022-02-10 18:22:49,247 - util.py[DEBUG]: Reading from /var/lib/cloud/data/instance-id (quiet=False)
2022-02-10 18:22:49,247 - util.py[DEBUG]: Read 20 bytes from /var/lib/cloud/data/instance-id
2022-02-10 18:22:49,247 - stages.py[DEBUG]: previous iid found to be iid-datasource-none
2022-02-10 18:22:49,247 - util.py[DEBUG]: Writing to /var/lib/cloud/data/instance-id - wb: [644] 9 bytes
2022-02-10 18:22:49,247 - util.py[DEBUG]: Writing to /run/cloud-init/.instance-id - wb: [644] 9 bytes
2022-02-10 18:22:49,248 - util.py[DEBUG]: Writing to /var/lib/cloud/data/previous-instance-id - wb: [644] 20 bytes
2022-02-10 18:22:49,248 - util.py[DEBUG]: Writing to /var/lib/cloud/instance/obj.pkl - wb: [400] 11721 bytes
2022-02-10 18:22:49,249 - main.py[DEBUG]: [local] init will now be targeting instance id: 17883645. new=True
2022-02-10 18:22:49,249 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg (quiet=False)
2022-02-10 18:22:49,249 - util.py[DEBUG]: Read 3807 bytes from /etc/cloud/cloud.cfg
2022-02-10 18:22:49,249 - util.py[DEBUG]: Attempting to load yaml from string of length 3807 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:49,259 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/90_dpkg.cfg (quiet=False)
2022-02-10 18:22:49,259 - util.py[DEBUG]: Read 90 bytes from /etc/cloud/cloud.cfg.d/90_dpkg.cfg
2022-02-10 18:22:49,259 - util.py[DEBUG]: Attempting to load yaml from string of length 90 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:49,260 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/90-hetznercloud.cfg (quiet=False)
2022-02-10 18:22:49,260 - util.py[DEBUG]: Read 3561 bytes from /etc/cloud/cloud.cfg.d/90-hetznercloud.cfg
2022-02-10 18:22:49,260 - util.py[DEBUG]: Attempting to load yaml from string of length 3561 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:49,269 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/05_logging.cfg (quiet=False)
2022-02-10 18:22:49,269 - util.py[DEBUG]: Read 2070 bytes from /etc/cloud/cloud.cfg.d/05_logging.cfg
2022-02-10 18:22:49,269 - util.py[DEBUG]: Attempting to load yaml from string of length 2070 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:49,272 - util.py[DEBUG]: Reading from /run/cloud-init/cloud.cfg (quiet=False)
2022-02-10 18:22:49,272 - util.py[DEBUG]: Attempting to load yaml from string of length 0 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:49,272 - util.py[DEBUG]: loaded blob returned None, returning default.
2022-02-10 18:22:49,273 - stages.py[DEBUG]: Using distro class <class 'cloudinit.distros.ubuntu.Distro'>
2022-02-10 18:22:49,274 - cc_set_hostname.py[DEBUG]: Setting the hostname to k3s-agent-3 (k3s-agent-3)
2022-02-10 18:22:49,274 - util.py[DEBUG]: Reading from /etc/hostname (quiet=False)
2022-02-10 18:22:49,274 - util.py[DEBUG]: Read 29 bytes from /etc/hostname
2022-02-10 18:22:49,274 - util.py[DEBUG]: Writing to /etc/hostname - wb: [644] 12 bytes
2022-02-10 18:22:49,274 - __init__.py[DEBUG]: Non-persistently setting the system hostname to k3s-agent-3
2022-02-10 18:22:49,274 - subp.py[DEBUG]: Running command ['hostname', 'k3s-agent-3'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,277 - atomic_helper.py[DEBUG]: Atomically writing to file /var/lib/cloud/data/set-hostname (via temporary file /var/lib/cloud/data/tmpt63892_6) - w: [644] 55 bytes/chars
2022-02-10 18:22:49,277 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/address (quiet=False)
2022-02-10 18:22:49,277 - util.py[DEBUG]: Read 18 bytes from /sys/class/net/enp1s0/address
2022-02-10 18:22:49,277 - util.py[DEBUG]: Reading from /sys/class/net/lo/address (quiet=False)
2022-02-10 18:22:49,277 - util.py[DEBUG]: Read 18 bytes from /sys/class/net/lo/address
2022-02-10 18:22:49,278 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/address (quiet=False)
2022-02-10 18:22:49,278 - util.py[DEBUG]: Read 18 bytes from /sys/class/net/enp1s0/address
2022-02-10 18:22:49,278 - util.py[DEBUG]: Reading from /sys/class/net/lo/address (quiet=False)
2022-02-10 18:22:49,278 - util.py[DEBUG]: Read 18 bytes from /sys/class/net/lo/address
2022-02-10 18:22:49,278 - util.py[DEBUG]: Reading from /sys/class/net/eth0/device/device (quiet=False)
2022-02-10 18:22:49,278 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/addr_assign_type (quiet=False)
2022-02-10 18:22:49,278 - util.py[DEBUG]: Read 2 bytes from /sys/class/net/enp1s0/addr_assign_type
2022-02-10 18:22:49,278 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/uevent (quiet=False)
2022-02-10 18:22:49,278 - util.py[DEBUG]: Read 27 bytes from /sys/class/net/enp1s0/uevent
2022-02-10 18:22:49,278 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/address (quiet=False)
2022-02-10 18:22:49,278 - util.py[DEBUG]: Read 18 bytes from /sys/class/net/enp1s0/address
2022-02-10 18:22:49,278 - __init__.py[DEBUG]: ovs-vsctl not in PATH; not detecting Open vSwitch interfaces
2022-02-10 18:22:49,279 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/device/device (quiet=False)
2022-02-10 18:22:49,279 - util.py[DEBUG]: Read 7 bytes from /sys/class/net/enp1s0/device/device
2022-02-10 18:22:49,279 - util.py[DEBUG]: Reading from /sys/class/net/lo/addr_assign_type (quiet=False)
2022-02-10 18:22:49,279 - util.py[DEBUG]: Read 2 bytes from /sys/class/net/lo/addr_assign_type
2022-02-10 18:22:49,279 - util.py[DEBUG]: Reading from /sys/class/net/lo/uevent (quiet=False)
2022-02-10 18:22:49,279 - util.py[DEBUG]: Read 23 bytes from /sys/class/net/lo/uevent
2022-02-10 18:22:49,279 - util.py[DEBUG]: Reading from /sys/class/net/lo/address (quiet=False)
2022-02-10 18:22:49,279 - util.py[DEBUG]: Read 18 bytes from /sys/class/net/lo/address
2022-02-10 18:22:49,279 - util.py[DEBUG]: Reading from /sys/class/net/lo/device/device (quiet=False)
2022-02-10 18:22:49,279 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/type (quiet=False)
2022-02-10 18:22:49,279 - util.py[DEBUG]: Read 2 bytes from /sys/class/net/enp1s0/type
2022-02-10 18:22:49,279 - util.py[DEBUG]: Reading from /sys/class/net/lo/type (quiet=False)
2022-02-10 18:22:49,279 - util.py[DEBUG]: Read 4 bytes from /sys/class/net/lo/type
2022-02-10 18:22:49,279 - networking.py[DEBUG]: net: all expected physical devices present
2022-02-10 18:22:49,279 - stages.py[DEBUG]: applying net config names for {'config': [{'mac_address': '96:00:01:15:5a:1d', 'name': 'eth0', 'subnets': [{'dns_nameservers': ['185.12.64.2', '185.12.64.1'], 'ipv4': True, 'type': 'dhcp'}, {'address': '2a01:4f8:c010:1ae1::1/64', 'dns_nameservers': ['2a01:4ff:ff00::add:2', '2a01:4ff:ff00::add:1'], 'gateway': 'fe80::1', 'ipv6': True, 'type': 'static'}], 'type': 'physical'}], 'version': 1}
2022-02-10 18:22:49,279 - util.py[DEBUG]: Reading from /sys/class/net/eth0/device/device (quiet=False)
2022-02-10 18:22:49,279 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/addr_assign_type (quiet=False)
2022-02-10 18:22:49,279 - util.py[DEBUG]: Read 2 bytes from /sys/class/net/enp1s0/addr_assign_type
2022-02-10 18:22:49,279 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/uevent (quiet=False)
2022-02-10 18:22:49,279 - util.py[DEBUG]: Read 27 bytes from /sys/class/net/enp1s0/uevent
2022-02-10 18:22:49,279 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/address (quiet=False)
2022-02-10 18:22:49,279 - util.py[DEBUG]: Read 18 bytes from /sys/class/net/enp1s0/address
2022-02-10 18:22:49,279 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/device/device (quiet=False)
2022-02-10 18:22:49,280 - util.py[DEBUG]: Read 7 bytes from /sys/class/net/enp1s0/device/device
2022-02-10 18:22:49,280 - util.py[DEBUG]: Reading from /sys/class/net/lo/addr_assign_type (quiet=False)
2022-02-10 18:22:49,280 - util.py[DEBUG]: Read 2 bytes from /sys/class/net/lo/addr_assign_type
2022-02-10 18:22:49,280 - util.py[DEBUG]: Reading from /sys/class/net/lo/uevent (quiet=False)
2022-02-10 18:22:49,280 - util.py[DEBUG]: Read 23 bytes from /sys/class/net/lo/uevent
2022-02-10 18:22:49,280 - util.py[DEBUG]: Reading from /sys/class/net/lo/address (quiet=False)
2022-02-10 18:22:49,280 - util.py[DEBUG]: Read 18 bytes from /sys/class/net/lo/address
2022-02-10 18:22:49,280 - util.py[DEBUG]: Reading from /sys/class/net/lo/device/device (quiet=False)
2022-02-10 18:22:49,280 - util.py[DEBUG]: Reading from /sys/class/net/enp1s0/operstate (quiet=False)
2022-02-10 18:22:49,280 - util.py[DEBUG]: Read 5 bytes from /sys/class/net/enp1s0/operstate
2022-02-10 18:22:49,280 - util.py[DEBUG]: Reading from /sys/class/net/lo/operstate (quiet=False)
2022-02-10 18:22:49,280 - util.py[DEBUG]: Read 8 bytes from /sys/class/net/lo/operstate
2022-02-10 18:22:49,280 - subp.py[DEBUG]: Running command ['ip', '-6', 'addr', 'show', 'permanent', 'scope', 'global'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,283 - subp.py[DEBUG]: Running command ['ip', '-4', 'addr', 'show'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,286 - __init__.py[DEBUG]: Detected interfaces {'enp1s0': {'downable': True, 'device_id': '0x0001', 'driver': 'virtio_net', 'mac': '96:00:01:15:5a:1d', 'name': 'enp1s0', 'up': False}, 'lo': {'downable': False, 'device_id': None, 'driver': None, 'mac': '00:00:00:00:00:00', 'name': 'lo', 'up': True}}
2022-02-10 18:22:49,287 - __init__.py[DEBUG]: achieving renaming of [['96:00:01:15:5a:1d', 'eth0', None, None]] with ops [('rename', '96:00:01:15:5a:1d', 'eth0', ('enp1s0', 'eth0'))]
2022-02-10 18:22:49,287 - subp.py[DEBUG]: Running command ['ip', 'link', 'set', 'enp1s0', 'name', 'eth0'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,412 - stages.py[INFO]: Applying network configuration from ds bringup=False: {'config': [{'mac_address': '96:00:01:15:5a:1d', 'name': 'eth0', 'subnets': [{'dns_nameservers': ['185.12.64.2', '185.12.64.1'], 'ipv4': True, 'type': 'dhcp'}, {'address': '2a01:4f8:c010:1ae1::1/64', 'dns_nameservers': ['2a01:4ff:ff00::add:2', '2a01:4ff:ff00::add:1'], 'gateway': 'fe80::1', 'ipv6': True, 'type': 'static'}], 'type': 'physical'}], 'version': 1}
2022-02-10 18:22:49,413 - util.py[DEBUG]: Writing to /run/cloud-init/sem/apply_network_config.once - wb: [644] 23 bytes
2022-02-10 18:22:49,422 - __init__.py[DEBUG]: Selected renderer 'netplan' from priority list: ['netplan', 'eni', 'sysconfig']
2022-02-10 18:22:49,422 - subp.py[DEBUG]: Running command ['netplan', 'info'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,530 - util.py[DEBUG]: Attempting to load yaml from string of length 230 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:49,535 - util.py[DEBUG]: Writing to /etc/netplan/50-cloud-init.yaml - wb: [644] 703 bytes
2022-02-10 18:22:49,536 - subp.py[DEBUG]: Running command ['netplan', 'generate'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,659 - subp.py[DEBUG]: Running command ['udevadm', 'test-builtin', 'net_setup_link', '/sys/class/net/eth0'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,666 - subp.py[DEBUG]: Running command ['udevadm', 'test-builtin', 'net_setup_link', '/sys/class/net/lo'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:49,670 - __init__.py[DEBUG]: Not bringing up newly configured network interfaces
2022-02-10 18:22:49,670 - main.py[DEBUG]: [local] Exiting. datasource DataSourceHetzner not in local mode.
2022-02-10 18:22:49,671 - atomic_helper.py[DEBUG]: Atomically writing to file /var/lib/cloud/data/status.json (via temporary file /var/lib/cloud/data/tmpkdto0f4n) - w: [644] 490 bytes/chars
2022-02-10 18:22:49,671 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
2022-02-10 18:22:49,671 - util.py[DEBUG]: Read 11 bytes from /proc/uptime
2022-02-10 18:22:49,671 - util.py[DEBUG]: cloud-init mode 'init' took 0.704 seconds (0.71)
2022-02-10 18:22:49,672 - handlers.py[DEBUG]: finish: init-local: SUCCESS: searching for local datasources
2022-02-10 18:22:52,015 - util.py[DEBUG]: Cloud-init v. 21.4-0ubuntu1~20.04.1 running 'init' at Thu, 10 Feb 2022 18:22:51 +0000. Up 8.79 seconds.
2022-02-10 18:22:52,015 - main.py[DEBUG]: No kernel command line url found.
2022-02-10 18:22:52,015 - main.py[DEBUG]: Closing stdin.
2022-02-10 18:22:52,016 - util.py[DEBUG]: Writing to /var/log/cloud-init.log - ab: [644] 0 bytes
2022-02-10 18:22:52,017 - util.py[DEBUG]: Changing the ownership of /var/log/cloud-init.log to 104:4
2022-02-10 18:22:52,017 - subp.py[DEBUG]: Running command ['ip', 'addr', 'show'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:52,020 - subp.py[DEBUG]: Running command ['ip', '-o', 'route', 'list'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:52,022 - subp.py[DEBUG]: Running command ['ip', '--oneline', '-6', 'route', 'list', 'table', 'all'] with allowed return codes [0, 1] (shell=False, capture=True)
2022-02-10 18:22:52,026 - main.py[DEBUG]: Checking to see if files that we need already exist from a previous run that would allow us to stop early.
2022-02-10 18:22:52,026 - main.py[DEBUG]: Execution continuing, no previous run detected that would allow us to stop early.
2022-02-10 18:22:52,026 - handlers.py[DEBUG]: start: init-network/check-cache: attempting to read from cache [trust]
2022-02-10 18:22:52,026 - util.py[DEBUG]: Reading from /var/lib/cloud/instance/obj.pkl (quiet=False)
2022-02-10 18:22:52,026 - util.py[DEBUG]: Read 11721 bytes from /var/lib/cloud/instance/obj.pkl
2022-02-10 18:22:52,029 - util.py[DEBUG]: Reading from /run/cloud-init/.instance-id (quiet=False)
2022-02-10 18:22:52,029 - util.py[DEBUG]: Read 9 bytes from /run/cloud-init/.instance-id
2022-02-10 18:22:52,029 - stages.py[DEBUG]: restored from cache with run check: DataSourceHetzner
2022-02-10 18:22:52,029 - handlers.py[DEBUG]: finish: init-network/check-cache: SUCCESS: restored from cache with run check: DataSourceHetzner
2022-02-10 18:22:52,029 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg (quiet=False)
2022-02-10 18:22:52,029 - util.py[DEBUG]: Read 3807 bytes from /etc/cloud/cloud.cfg
2022-02-10 18:22:52,029 - util.py[DEBUG]: Attempting to load yaml from string of length 3807 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,043 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/90_dpkg.cfg (quiet=False)
2022-02-10 18:22:52,043 - util.py[DEBUG]: Read 90 bytes from /etc/cloud/cloud.cfg.d/90_dpkg.cfg
2022-02-10 18:22:52,043 - util.py[DEBUG]: Attempting to load yaml from string of length 90 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,043 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/90-hetznercloud.cfg (quiet=False)
2022-02-10 18:22:52,043 - util.py[DEBUG]: Read 3561 bytes from /etc/cloud/cloud.cfg.d/90-hetznercloud.cfg
2022-02-10 18:22:52,043 - util.py[DEBUG]: Attempting to load yaml from string of length 3561 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,054 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/05_logging.cfg (quiet=False)
2022-02-10 18:22:52,054 - util.py[DEBUG]: Read 2070 bytes from /etc/cloud/cloud.cfg.d/05_logging.cfg
2022-02-10 18:22:52,054 - util.py[DEBUG]: Attempting to load yaml from string of length 2070 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,057 - util.py[DEBUG]: Reading from /run/cloud-init/cloud.cfg (quiet=False)
2022-02-10 18:22:52,057 - util.py[DEBUG]: Attempting to load yaml from string of length 0 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,057 - util.py[DEBUG]: loaded blob returned None, returning default.
2022-02-10 18:22:52,058 - util.py[DEBUG]: Attempting to remove /var/lib/cloud/instance
2022-02-10 18:22:52,058 - util.py[DEBUG]: Creating symbolic link from '/var/lib/cloud/instance' => '/var/lib/cloud/instances/17883645'
2022-02-10 18:22:52,058 - util.py[DEBUG]: Reading from /var/lib/cloud/instances/17883645/datasource (quiet=False)
2022-02-10 18:22:52,058 - util.py[DEBUG]: Read 37 bytes from /var/lib/cloud/instances/17883645/datasource
2022-02-10 18:22:52,058 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/datasource - wb: [644] 37 bytes
2022-02-10 18:22:52,059 - util.py[DEBUG]: Writing to /var/lib/cloud/data/previous-datasource - wb: [644] 37 bytes
2022-02-10 18:22:52,059 - util.py[DEBUG]: Reading from /var/lib/cloud/data/instance-id (quiet=False)
2022-02-10 18:22:52,059 - util.py[DEBUG]: Read 9 bytes from /var/lib/cloud/data/instance-id
2022-02-10 18:22:52,059 - stages.py[DEBUG]: previous iid found to be 17883645
2022-02-10 18:22:52,059 - util.py[DEBUG]: Writing to /var/lib/cloud/data/instance-id - wb: [644] 9 bytes
2022-02-10 18:22:52,059 - util.py[DEBUG]: Writing to /run/cloud-init/.instance-id - wb: [644] 9 bytes
2022-02-10 18:22:52,060 - util.py[DEBUG]: Writing to /var/lib/cloud/data/previous-instance-id - wb: [644] 9 bytes
2022-02-10 18:22:52,060 - util.py[DEBUG]: Writing to /var/lib/cloud/instance/obj.pkl - wb: [400] 11724 bytes
2022-02-10 18:22:52,061 - main.py[DEBUG]: [net] init will now be targeting instance id: 17883645. new=False
2022-02-10 18:22:52,061 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg (quiet=False)
2022-02-10 18:22:52,061 - util.py[DEBUG]: Read 3807 bytes from /etc/cloud/cloud.cfg
2022-02-10 18:22:52,061 - util.py[DEBUG]: Attempting to load yaml from string of length 3807 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,071 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/90_dpkg.cfg (quiet=False)
2022-02-10 18:22:52,072 - util.py[DEBUG]: Read 90 bytes from /etc/cloud/cloud.cfg.d/90_dpkg.cfg
2022-02-10 18:22:52,072 - util.py[DEBUG]: Attempting to load yaml from string of length 90 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,072 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/90-hetznercloud.cfg (quiet=False)
2022-02-10 18:22:52,072 - util.py[DEBUG]: Read 3561 bytes from /etc/cloud/cloud.cfg.d/90-hetznercloud.cfg
2022-02-10 18:22:52,072 - util.py[DEBUG]: Attempting to load yaml from string of length 3561 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,082 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/05_logging.cfg (quiet=False)
2022-02-10 18:22:52,082 - util.py[DEBUG]: Read 2070 bytes from /etc/cloud/cloud.cfg.d/05_logging.cfg
2022-02-10 18:22:52,082 - util.py[DEBUG]: Attempting to load yaml from string of length 2070 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,085 - util.py[DEBUG]: Reading from /run/cloud-init/cloud.cfg (quiet=False)
2022-02-10 18:22:52,085 - util.py[DEBUG]: Attempting to load yaml from string of length 0 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,086 - util.py[DEBUG]: loaded blob returned None, returning default.
2022-02-10 18:22:52,087 - util.py[DEBUG]: Reading from /sys/class/net/eth0/address (quiet=False)
2022-02-10 18:22:52,087 - util.py[DEBUG]: Read 18 bytes from /sys/class/net/eth0/address
2022-02-10 18:22:52,087 - util.py[DEBUG]: Reading from /sys/class/net/lo/address (quiet=False)
2022-02-10 18:22:52,087 - util.py[DEBUG]: Read 18 bytes from /sys/class/net/lo/address
2022-02-10 18:22:52,088 - stages.py[DEBUG]: Allowed events: {<EventScope.NETWORK: 'network'>: {<EventType.BOOT_NEW_INSTANCE: 'boot-new-instance'>}}
2022-02-10 18:22:52,088 - stages.py[DEBUG]: Event Denied: scopes=['network'] EventType=boot-legacy
2022-02-10 18:22:52,088 - stages.py[DEBUG]: No network config applied. Neither a new instance nor datasource network update allowed
2022-02-10 18:22:52,088 - stages.py[DEBUG]: applying net config names for {'config': [{'mac_address': '96:00:01:15:5a:1d', 'name': 'eth0', 'subnets': [{'dns_nameservers': ['185.12.64.2', '185.12.64.1'], 'ipv4': True, 'type': 'dhcp'}, {'address': '2a01:4f8:c010:1ae1::1/64', 'dns_nameservers': ['2a01:4ff:ff00::add:2', '2a01:4ff:ff00::add:1'], 'gateway': 'fe80::1', 'ipv6': True, 'type': 'static'}], 'type': 'physical'}], 'version': 1}
2022-02-10 18:22:52,088 - stages.py[DEBUG]: Using distro class <class 'cloudinit.distros.ubuntu.Distro'>
2022-02-10 18:22:52,088 - util.py[DEBUG]: Reading from /sys/class/net/eth0/device/device (quiet=False)
2022-02-10 18:22:52,088 - util.py[DEBUG]: Read 7 bytes from /sys/class/net/eth0/device/device
2022-02-10 18:22:52,088 - util.py[DEBUG]: Reading from /sys/class/net/eth0/addr_assign_type (quiet=False)
2022-02-10 18:22:52,088 - util.py[DEBUG]: Read 2 bytes from /sys/class/net/eth0/addr_assign_type
2022-02-10 18:22:52,089 - util.py[DEBUG]: Reading from /sys/class/net/eth0/uevent (quiet=False)
2022-02-10 18:22:52,089 - util.py[DEBUG]: Read 25 bytes from /sys/class/net/eth0/uevent
2022-02-10 18:22:52,089 - util.py[DEBUG]: Reading from /sys/class/net/eth0/address (quiet=False)
2022-02-10 18:22:52,089 - util.py[DEBUG]: Read 18 bytes from /sys/class/net/eth0/address
2022-02-10 18:22:52,089 - __init__.py[DEBUG]: ovs-vsctl not in PATH; not detecting Open vSwitch interfaces
2022-02-10 18:22:52,089 - util.py[DEBUG]: Reading from /sys/class/net/eth0/device/device (quiet=False)
2022-02-10 18:22:52,089 - util.py[DEBUG]: Read 7 bytes from /sys/class/net/eth0/device/device
2022-02-10 18:22:52,089 - util.py[DEBUG]: Reading from /sys/class/net/lo/addr_assign_type (quiet=False)
2022-02-10 18:22:52,089 - util.py[DEBUG]: Read 2 bytes from /sys/class/net/lo/addr_assign_type
2022-02-10 18:22:52,089 - util.py[DEBUG]: Reading from /sys/class/net/lo/uevent (quiet=False)
2022-02-10 18:22:52,089 - util.py[DEBUG]: Read 23 bytes from /sys/class/net/lo/uevent
2022-02-10 18:22:52,089 - util.py[DEBUG]: Reading from /sys/class/net/lo/address (quiet=False)
2022-02-10 18:22:52,089 - util.py[DEBUG]: Read 18 bytes from /sys/class/net/lo/address
2022-02-10 18:22:52,089 - util.py[DEBUG]: Reading from /sys/class/net/lo/device/device (quiet=False)
2022-02-10 18:22:52,089 - util.py[DEBUG]: Reading from /sys/class/net/eth0/operstate (quiet=False)
2022-02-10 18:22:52,089 - util.py[DEBUG]: Read 3 bytes from /sys/class/net/eth0/operstate
2022-02-10 18:22:52,089 - util.py[DEBUG]: Reading from /sys/class/net/lo/operstate (quiet=False)
2022-02-10 18:22:52,089 - util.py[DEBUG]: Read 8 bytes from /sys/class/net/lo/operstate
2022-02-10 18:22:52,090 - subp.py[DEBUG]: Running command ['ip', '-6', 'addr', 'show', 'permanent', 'scope', 'global'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:52,093 - subp.py[DEBUG]: Running command ['ip', '-4', 'addr', 'show'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:52,095 - __init__.py[DEBUG]: Detected interfaces {'eth0': {'downable': False, 'device_id': '0x0001', 'driver': 'virtio_net', 'mac': '96:00:01:15:5a:1d', 'name': 'eth0', 'up': True}, 'lo': {'downable': False, 'device_id': None, 'driver': None, 'mac': '00:00:00:00:00:00', 'name': 'lo', 'up': True}}
2022-02-10 18:22:52,096 - __init__.py[DEBUG]: no work necessary for renaming of [['96:00:01:15:5a:1d', 'eth0', 'virtio_net', '0x0001']]
2022-02-10 18:22:52,096 - handlers.py[DEBUG]: start: init-network/setup-datasource: setting up datasource
2022-02-10 18:22:52,096 - handlers.py[DEBUG]: finish: init-network/setup-datasource: SUCCESS: setting up datasource
2022-02-10 18:22:52,096 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/user-data.txt - wb: [600] 0 bytes
2022-02-10 18:22:52,099 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/user-data.txt.i - wb: [600] 308 bytes
2022-02-10 18:22:52,099 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/vendor-data.txt - wb: [600] 4544 bytes
2022-02-10 18:22:52,101 - util.py[DEBUG]: Attempting to load yaml from string of length 4205 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,108 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/vendor-data.txt.i - wb: [600] 4566 bytes
2022-02-10 18:22:52,109 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/vendor-data2.txt - wb: [600] 0 bytes
2022-02-10 18:22:52,110 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/vendor-data2.txt.i - wb: [600] 308 bytes
2022-02-10 18:22:52,111 - util.py[DEBUG]: Reading from /var/lib/cloud/data/set-hostname (quiet=False)
2022-02-10 18:22:52,111 - util.py[DEBUG]: Read 55 bytes from /var/lib/cloud/data/set-hostname
2022-02-10 18:22:52,111 - cc_set_hostname.py[DEBUG]: No hostname changes. Skipping set-hostname
2022-02-10 18:22:52,111 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/consume_data - wb: [644] 24 bytes
2022-02-10 18:22:52,112 - helpers.py[DEBUG]: Running consume_data using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/consume_data'>)
2022-02-10 18:22:52,112 - handlers.py[DEBUG]: start: init-network/consume-user-data: reading and applying user-data
2022-02-10 18:22:52,113 - stages.py[DEBUG]: Added default handler for {'text/cloud-config', 'text/cloud-config-jsonp'} from CloudConfigPartHandler: [['text/cloud-config', 'text/cloud-config-jsonp']]
2022-02-10 18:22:52,113 - stages.py[DEBUG]: Added default handler for {'text/x-shellscript'} from ShellScriptPartHandler: [['text/x-shellscript']]
2022-02-10 18:22:52,113 - stages.py[DEBUG]: Added default handler for {'text/cloud-boothook'} from BootHookPartHandler: [['text/cloud-boothook']]
2022-02-10 18:22:52,113 - stages.py[DEBUG]: Added default handler for {'text/upstart-job'} from UpstartJobPartHandler: [['text/upstart-job']]
2022-02-10 18:22:52,113 - stages.py[DEBUG]: Added default handler for {'text/jinja2'} from JinjaTemplatePartHandler: [['text/jinja2']]
2022-02-10 18:22:52,113 - __init__.py[DEBUG]: Calling handler CloudConfigPartHandler: [['text/cloud-config', 'text/cloud-config-jsonp']] (__begin__, None, 3) with frequency once-per-instance
2022-02-10 18:22:52,113 - __init__.py[DEBUG]: Calling handler ShellScriptPartHandler: [['text/x-shellscript']] (__begin__, None, 2) with frequency once-per-instance
2022-02-10 18:22:52,114 - __init__.py[DEBUG]: Calling handler BootHookPartHandler: [['text/cloud-boothook']] (__begin__, None, 2) with frequency once-per-instance
2022-02-10 18:22:52,114 - __init__.py[DEBUG]: Calling handler UpstartJobPartHandler: [['text/upstart-job']] (__begin__, None, 2) with frequency once-per-instance
2022-02-10 18:22:52,114 - __init__.py[DEBUG]: Calling handler JinjaTemplatePartHandler: [['text/jinja2']] (__begin__, None, 3) with frequency once-per-instance
2022-02-10 18:22:52,114 - __init__.py[DEBUG]: {'MIME-Version': '1.0', 'Content-Type': 'text/x-not-multipart', 'Content-Disposition': 'attachment; filename="part-001"'}
2022-02-10 18:22:52,114 - __init__.py[DEBUG]: Empty payload of type text/x-not-multipart
2022-02-10 18:22:52,114 - __init__.py[DEBUG]: Calling handler CloudConfigPartHandler: [['text/cloud-config', 'text/cloud-config-jsonp']] (__end__, None, 3) with frequency once-per-instance
2022-02-10 18:22:52,114 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/cloud-config.txt - wb: [600] 0 bytes
2022-02-10 18:22:52,114 - __init__.py[DEBUG]: Calling handler ShellScriptPartHandler: [['text/x-shellscript']] (__end__, None, 2) with frequency once-per-instance
2022-02-10 18:22:52,114 - __init__.py[DEBUG]: Calling handler BootHookPartHandler: [['text/cloud-boothook']] (__end__, None, 2) with frequency once-per-instance
2022-02-10 18:22:52,114 - __init__.py[DEBUG]: Calling handler UpstartJobPartHandler: [['text/upstart-job']] (__end__, None, 2) with frequency once-per-instance
2022-02-10 18:22:52,114 - __init__.py[DEBUG]: Calling handler JinjaTemplatePartHandler: [['text/jinja2']] (__end__, None, 3) with frequency once-per-instance
2022-02-10 18:22:52,114 - handlers.py[DEBUG]: finish: init-network/consume-user-data: SUCCESS: reading and applying user-data
2022-02-10 18:22:52,114 - handlers.py[DEBUG]: start: init-network/consume-vendor-data: reading and applying vendor-data
2022-02-10 18:22:52,115 - util.py[DEBUG]: Reading from /var/lib/cloud/instance/cloud-config.txt (quiet=False)
2022-02-10 18:22:52,115 - util.py[DEBUG]: Read 0 bytes from /var/lib/cloud/instance/cloud-config.txt
2022-02-10 18:22:52,115 - util.py[DEBUG]: Attempting to load yaml from string of length 0 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,115 - util.py[DEBUG]: loaded blob returned None, returning default.
2022-02-10 18:22:52,115 - stages.py[DEBUG]: vendordata will be consumed. disabled_handlers=None
2022-02-10 18:22:52,115 - stages.py[DEBUG]: Added default handler for {'text/cloud-config', 'text/cloud-config-jsonp'} from CloudConfigPartHandler: [['text/cloud-config', 'text/cloud-config-jsonp']]
2022-02-10 18:22:52,115 - stages.py[DEBUG]: Added default handler for {'text/x-shellscript'} from ShellScriptPartHandler: [['text/x-shellscript']]
2022-02-10 18:22:52,115 - stages.py[DEBUG]: Added default handler for {'text/cloud-boothook'} from BootHookPartHandler: [['text/cloud-boothook']]
2022-02-10 18:22:52,115 - stages.py[DEBUG]: Added default handler for {'text/upstart-job'} from UpstartJobPartHandler: [['text/upstart-job']]
2022-02-10 18:22:52,115 - stages.py[DEBUG]: Added default handler for {'text/jinja2'} from JinjaTemplatePartHandler: [['text/jinja2']]
2022-02-10 18:22:52,115 - __init__.py[DEBUG]: Calling handler CloudConfigPartHandler: [['text/cloud-config', 'text/cloud-config-jsonp']] (__begin__, None, 3) with frequency once-per-instance
2022-02-10 18:22:52,115 - __init__.py[DEBUG]: Calling handler ShellScriptPartHandler: [['text/x-shellscript']] (__begin__, None, 2) with frequency once-per-instance
2022-02-10 18:22:52,115 - __init__.py[DEBUG]: Calling handler BootHookPartHandler: [['text/cloud-boothook']] (__begin__, None, 2) with frequency once-per-instance
2022-02-10 18:22:52,115 - __init__.py[DEBUG]: Calling handler UpstartJobPartHandler: [['text/upstart-job']] (__begin__, None, 2) with frequency once-per-instance
2022-02-10 18:22:52,115 - __init__.py[DEBUG]: Calling handler JinjaTemplatePartHandler: [['text/jinja2']] (__begin__, None, 3) with frequency once-per-instance
2022-02-10 18:22:52,116 - __init__.py[DEBUG]: {'Content-Type': 'text/cloud-config; charset="us-ascii"', 'MIME-Version': '1.0', 'Content-Transfer-Encoding': '7bit', 'Content-Disposition': 'attachment; filename="cloud-config"'}
2022-02-10 18:22:52,116 - __init__.py[DEBUG]: Calling handler CloudConfigPartHandler: [['text/cloud-config', 'text/cloud-config-jsonp']] (text/cloud-config, cloud-config, 3) with frequency once-per-instance
2022-02-10 18:22:52,116 - util.py[DEBUG]: Attempting to load yaml from string of length 4205 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,120 - cloud_config.py[DEBUG]: Merging by applying [('dict', ['replace']), ('list', []), ('str', [])]
2022-02-10 18:22:52,120 - __init__.py[DEBUG]: Calling handler CloudConfigPartHandler: [['text/cloud-config', 'text/cloud-config-jsonp']] (__end__, None, 3) with frequency once-per-instance
2022-02-10 18:22:52,124 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/vendor-cloud-config.txt - wb: [600] 4457 bytes
2022-02-10 18:22:52,124 - __init__.py[DEBUG]: Calling handler ShellScriptPartHandler: [['text/x-shellscript']] (__end__, None, 2) with frequency once-per-instance
2022-02-10 18:22:52,125 - __init__.py[DEBUG]: Calling handler BootHookPartHandler: [['text/cloud-boothook']] (__end__, None, 2) with frequency once-per-instance
2022-02-10 18:22:52,125 - __init__.py[DEBUG]: Calling handler UpstartJobPartHandler: [['text/upstart-job']] (__end__, None, 2) with frequency once-per-instance
2022-02-10 18:22:52,125 - __init__.py[DEBUG]: Calling handler JinjaTemplatePartHandler: [['text/jinja2']] (__end__, None, 3) with frequency once-per-instance
2022-02-10 18:22:52,125 - handlers.py[DEBUG]: finish: init-network/consume-vendor-data: SUCCESS: reading and applying vendor-data
2022-02-10 18:22:52,125 - handlers.py[DEBUG]: start: init-network/consume-vendor-data2: reading and applying vendor-data2
2022-02-10 18:22:52,125 - stages.py[DEBUG]: no vendordata2 from datasource
2022-02-10 18:22:52,125 - handlers.py[DEBUG]: finish: init-network/consume-vendor-data2: SUCCESS: reading and applying vendor-data2
2022-02-10 18:22:52,125 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg (quiet=False)
2022-02-10 18:22:52,125 - util.py[DEBUG]: Read 3807 bytes from /etc/cloud/cloud.cfg
2022-02-10 18:22:52,125 - util.py[DEBUG]: Attempting to load yaml from string of length 3807 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,135 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/90_dpkg.cfg (quiet=False)
2022-02-10 18:22:52,135 - util.py[DEBUG]: Read 90 bytes from /etc/cloud/cloud.cfg.d/90_dpkg.cfg
2022-02-10 18:22:52,135 - util.py[DEBUG]: Attempting to load yaml from string of length 90 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,136 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/90-hetznercloud.cfg (quiet=False)
2022-02-10 18:22:52,136 - util.py[DEBUG]: Read 3561 bytes from /etc/cloud/cloud.cfg.d/90-hetznercloud.cfg
2022-02-10 18:22:52,136 - util.py[DEBUG]: Attempting to load yaml from string of length 3561 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,145 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/05_logging.cfg (quiet=False)
2022-02-10 18:22:52,145 - util.py[DEBUG]: Read 2070 bytes from /etc/cloud/cloud.cfg.d/05_logging.cfg
2022-02-10 18:22:52,145 - util.py[DEBUG]: Attempting to load yaml from string of length 2070 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,148 - util.py[DEBUG]: Reading from /run/cloud-init/cloud.cfg (quiet=False)
2022-02-10 18:22:52,148 - util.py[DEBUG]: Attempting to load yaml from string of length 0 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,148 - util.py[DEBUG]: loaded blob returned None, returning default.
2022-02-10 18:22:52,148 - util.py[DEBUG]: Reading from /var/lib/cloud/instance/cloud-config.txt (quiet=False)
2022-02-10 18:22:52,148 - util.py[DEBUG]: Read 0 bytes from /var/lib/cloud/instance/cloud-config.txt
2022-02-10 18:22:52,148 - util.py[DEBUG]: Attempting to load yaml from string of length 0 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,148 - util.py[DEBUG]: loaded blob returned None, returning default.
2022-02-10 18:22:52,148 - util.py[DEBUG]: Reading from /var/lib/cloud/instance/vendor-cloud-config.txt (quiet=False)
2022-02-10 18:22:52,148 - util.py[DEBUG]: Read 4457 bytes from /var/lib/cloud/instance/vendor-cloud-config.txt
2022-02-10 18:22:52,149 - util.py[DEBUG]: Attempting to load yaml from string of length 4457 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,154 - util.py[DEBUG]: Reading from /var/lib/cloud/instance/cloud-config.txt (quiet=False)
2022-02-10 18:22:52,154 - util.py[DEBUG]: Read 0 bytes from /var/lib/cloud/instance/cloud-config.txt
2022-02-10 18:22:52,154 - util.py[DEBUG]: Attempting to load yaml from string of length 0 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,154 - util.py[DEBUG]: loaded blob returned None, returning default.
2022-02-10 18:22:52,154 - util.py[DEBUG]: Reading from /var/lib/cloud/instance/vendor-cloud-config.txt (quiet=False)
2022-02-10 18:22:52,154 - util.py[DEBUG]: Read 4457 bytes from /var/lib/cloud/instance/vendor-cloud-config.txt
2022-02-10 18:22:52,154 - util.py[DEBUG]: Attempting to load yaml from string of length 4457 with allowed root types (<class 'dict'>,)
2022-02-10 18:22:52,163 - handlers.py[DEBUG]: start: init-network/activate-datasource: activating datasource
2022-02-10 18:22:52,168 - util.py[DEBUG]: Writing to /var/lib/cloud/instance/obj.pkl - wb: [400] 17804 bytes
2022-02-10 18:22:52,171 - handlers.py[DEBUG]: finish: init-network/activate-datasource: SUCCESS: activating datasource
2022-02-10 18:22:52,173 - main.py[DEBUG]: no di_report found in config.
2022-02-10 18:22:52,194 - stages.py[DEBUG]: Using distro class <class 'cloudinit.distros.ubuntu.Distro'>
2022-02-10 18:22:52,195 - stages.py[DEBUG]: Running module migrator (<module 'cloudinit.config.cc_migrator' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_migrator.py'>) with frequency always
2022-02-10 18:22:52,195 - handlers.py[DEBUG]: start: init-network/config-migrator: running config-migrator with frequency always
2022-02-10 18:22:52,195 - helpers.py[DEBUG]: Running config-migrator using lock (<cloudinit.helpers.DummyLock object at 0x7f11d3336f40>)
2022-02-10 18:22:52,195 - cc_migrator.py[DEBUG]: Migrated 0 semaphore files to there canonicalized names
2022-02-10 18:22:52,196 - handlers.py[DEBUG]: finish: init-network/config-migrator: SUCCESS: config-migrator ran successfully
2022-02-10 18:22:52,196 - stages.py[DEBUG]: Running module seed_random (<module 'cloudinit.config.cc_seed_random' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_seed_random.py'>) with frequency once-per-instance
2022-02-10 18:22:52,196 - handlers.py[DEBUG]: start: init-network/config-seed_random: running config-seed_random with frequency once-per-instance
2022-02-10 18:22:52,196 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_seed_random - wb: [644] 24 bytes
2022-02-10 18:22:52,196 - helpers.py[DEBUG]: Running config-seed_random using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_seed_random'>)
2022-02-10 18:22:52,197 - cc_seed_random.py[DEBUG]: seed_random: adding 2048 bytes of random seed entropy to /dev/urandom
2022-02-10 18:22:52,197 - util.py[DEBUG]: Writing to /dev/urandom - ab: [None] 2048 bytes
2022-02-10 18:22:52,197 - cc_seed_random.py[DEBUG]: no command provided
2022-02-10 18:22:52,197 - handlers.py[DEBUG]: finish: init-network/config-seed_random: SUCCESS: config-seed_random ran successfully
2022-02-10 18:22:52,197 - stages.py[DEBUG]: Running module bootcmd (<module 'cloudinit.config.cc_bootcmd' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_bootcmd.py'>) with frequency always
2022-02-10 18:22:52,198 - handlers.py[DEBUG]: start: init-network/config-bootcmd: running config-bootcmd with frequency always
2022-02-10 18:22:52,198 - helpers.py[DEBUG]: Running config-bootcmd using lock (<cloudinit.helpers.DummyLock object at 0x7f11d32fcc10>)
2022-02-10 18:22:52,198 - cc_bootcmd.py[DEBUG]: Skipping module named bootcmd, no 'bootcmd' key in configuration
2022-02-10 18:22:52,198 - handlers.py[DEBUG]: finish: init-network/config-bootcmd: SUCCESS: config-bootcmd ran successfully
2022-02-10 18:22:52,198 - stages.py[DEBUG]: Running module write-files (<module 'cloudinit.config.cc_write_files' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_write_files.py'>) with frequency once-per-instance
2022-02-10 18:22:52,198 - handlers.py[DEBUG]: start: init-network/config-write-files: running config-write-files with frequency once-per-instance
2022-02-10 18:22:52,198 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_write_files - wb: [644] 24 bytes
2022-02-10 18:22:52,199 - helpers.py[DEBUG]: Running config-write-files using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_write_files'>)
2022-02-10 18:22:52,250 - cc_write_files.py[DEBUG]: Skipping module named write-files, no/empty 'write_files' key in configuration
2022-02-10 18:22:52,250 - handlers.py[DEBUG]: finish: init-network/config-write-files: SUCCESS: config-write-files ran successfully
2022-02-10 18:22:52,250 - stages.py[DEBUG]: Running module growpart (<module 'cloudinit.config.cc_growpart' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_growpart.py'>) with frequency always
2022-02-10 18:22:52,250 - handlers.py[DEBUG]: start: init-network/config-growpart: running config-growpart with frequency always
2022-02-10 18:22:52,250 - helpers.py[DEBUG]: Running config-growpart using lock (<cloudinit.helpers.DummyLock object at 0x7f11d3336f40>)
2022-02-10 18:22:52,250 - cc_growpart.py[DEBUG]: No 'growpart' entry in cfg.  Using default: {'mode': 'auto', 'devices': ['/'], 'ignore_growroot_disabled': False}
2022-02-10 18:22:52,250 - subp.py[DEBUG]: Running command ['growpart', '--help'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:52,256 - util.py[DEBUG]: Reading from /proc/615/mountinfo (quiet=False)
2022-02-10 18:22:52,256 - util.py[DEBUG]: Read 3298 bytes from /proc/615/mountinfo
2022-02-10 18:22:52,256 - util.py[DEBUG]: Reading from /sys/class/block/sda1/partition (quiet=False)
2022-02-10 18:22:52,256 - util.py[DEBUG]: Read 2 bytes from /sys/class/block/sda1/partition
2022-02-10 18:22:52,256 - util.py[DEBUG]: Reading from /sys/devices/pci0000:00/0000:00:02.5/0000:06:00.0/virtio5/host0/target0:0:0/0:0:0:0/block/sda/dev (quiet=False)
2022-02-10 18:22:52,256 - util.py[DEBUG]: Read 4 bytes from /sys/devices/pci0000:00/0000:00:02.5/0000:06:00.0/virtio5/host0/target0:0:0/0:0:0:0/block/sda/dev
2022-02-10 18:22:52,257 - subp.py[DEBUG]: Running command ['growpart', '--dry-run', '/dev/sda', '1'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:52,297 - subp.py[DEBUG]: Running command ['growpart', '/dev/sda', '1'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:52,614 - util.py[DEBUG]: resize_devices took 0.358 seconds
2022-02-10 18:22:52,614 - cc_growpart.py[INFO]: '/' resized: changed (/dev/sda, 1) from 3398434816 to 81652596224
2022-02-10 18:22:52,614 - handlers.py[DEBUG]: finish: init-network/config-growpart: SUCCESS: config-growpart ran successfully
2022-02-10 18:22:52,614 - stages.py[DEBUG]: Running module resizefs (<module 'cloudinit.config.cc_resizefs' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_resizefs.py'>) with frequency always
2022-02-10 18:22:52,615 - handlers.py[DEBUG]: start: init-network/config-resizefs: running config-resizefs with frequency always
2022-02-10 18:22:52,615 - helpers.py[DEBUG]: Running config-resizefs using lock (<cloudinit.helpers.DummyLock object at 0x7f11d3336d90>)
2022-02-10 18:22:52,615 - util.py[DEBUG]: Reading from /proc/615/mountinfo (quiet=False)
2022-02-10 18:22:52,616 - util.py[DEBUG]: Read 3298 bytes from /proc/615/mountinfo
2022-02-10 18:22:52,616 - cc_resizefs.py[DEBUG]: resize_info: dev=/dev/sda1 mnt_point=/ path=/
2022-02-10 18:22:52,616 - cc_resizefs.py[DEBUG]: Resizing / (ext4) using resize2fs /dev/sda1
2022-02-10 18:22:52,616 - subp.py[DEBUG]: Running command ('resize2fs', '/dev/sda1') with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:52,765 - util.py[DEBUG]: Resizing took 0.149 seconds
2022-02-10 18:22:52,765 - cc_resizefs.py[DEBUG]: Resized root filesystem (type=ext4, val=True)
2022-02-10 18:22:52,766 - handlers.py[DEBUG]: finish: init-network/config-resizefs: SUCCESS: config-resizefs ran successfully
2022-02-10 18:22:52,766 - stages.py[DEBUG]: Running module disk_setup (<module 'cloudinit.config.cc_disk_setup' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_disk_setup.py'>) with frequency once-per-instance
2022-02-10 18:22:52,766 - handlers.py[DEBUG]: start: init-network/config-disk_setup: running config-disk_setup with frequency once-per-instance
2022-02-10 18:22:52,767 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_disk_setup - wb: [644] 24 bytes
2022-02-10 18:22:52,768 - helpers.py[DEBUG]: Running config-disk_setup using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_disk_setup'>)
2022-02-10 18:22:52,768 - handlers.py[DEBUG]: finish: init-network/config-disk_setup: SUCCESS: config-disk_setup ran successfully
2022-02-10 18:22:52,768 - stages.py[DEBUG]: Running module mounts (<module 'cloudinit.config.cc_mounts' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_mounts.py'>) with frequency once-per-instance
2022-02-10 18:22:52,768 - handlers.py[DEBUG]: start: init-network/config-mounts: running config-mounts with frequency once-per-instance
2022-02-10 18:22:52,768 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_mounts - wb: [644] 24 bytes
2022-02-10 18:22:52,769 - helpers.py[DEBUG]: Running config-mounts using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_mounts'>)
2022-02-10 18:22:52,769 - cc_mounts.py[DEBUG]: mounts configuration is []
2022-02-10 18:22:52,769 - util.py[DEBUG]: Reading from /etc/fstab (quiet=False)
2022-02-10 18:22:52,769 - util.py[DEBUG]: Read 558 bytes from /etc/fstab
2022-02-10 18:22:52,770 - cc_mounts.py[DEBUG]: Attempting to determine the real name of ephemeral0
2022-02-10 18:22:52,770 - cc_mounts.py[DEBUG]: changed default device ephemeral0 => None
2022-02-10 18:22:52,770 - cc_mounts.py[DEBUG]: Ignoring nonexistent default named mount ephemeral0
2022-02-10 18:22:52,770 - cc_mounts.py[DEBUG]: Attempting to determine the real name of swap
2022-02-10 18:22:52,770 - cc_mounts.py[DEBUG]: changed default device swap => None
2022-02-10 18:22:52,770 - cc_mounts.py[DEBUG]: Ignoring nonexistent default named mount swap
2022-02-10 18:22:52,770 - cc_mounts.py[DEBUG]: no need to setup swap
2022-02-10 18:22:52,770 - cc_mounts.py[DEBUG]: No modifications to fstab needed
2022-02-10 18:22:52,770 - handlers.py[DEBUG]: finish: init-network/config-mounts: SUCCESS: config-mounts ran successfully
2022-02-10 18:22:52,770 - stages.py[DEBUG]: Running module set_hostname (<module 'cloudinit.config.cc_set_hostname' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_set_hostname.py'>) with frequency once-per-instance
2022-02-10 18:22:52,770 - handlers.py[DEBUG]: start: init-network/config-set_hostname: running config-set_hostname with frequency once-per-instance
2022-02-10 18:22:52,771 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_set_hostname - wb: [644] 24 bytes
2022-02-10 18:22:52,771 - helpers.py[DEBUG]: Running config-set_hostname using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_set_hostname'>)
2022-02-10 18:22:52,771 - util.py[DEBUG]: Reading from /var/lib/cloud/data/set-hostname (quiet=False)
2022-02-10 18:22:52,771 - util.py[DEBUG]: Read 55 bytes from /var/lib/cloud/data/set-hostname
2022-02-10 18:22:52,772 - cc_set_hostname.py[DEBUG]: No hostname changes. Skipping set-hostname
2022-02-10 18:22:52,772 - handlers.py[DEBUG]: finish: init-network/config-set_hostname: SUCCESS: config-set_hostname ran successfully
2022-02-10 18:22:52,772 - stages.py[DEBUG]: Running module update_hostname (<module 'cloudinit.config.cc_update_hostname' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_update_hostname.py'>) with frequency always
2022-02-10 18:22:52,772 - handlers.py[DEBUG]: start: init-network/config-update_hostname: running config-update_hostname with frequency always
2022-02-10 18:22:52,772 - helpers.py[DEBUG]: Running config-update_hostname using lock (<cloudinit.helpers.DummyLock object at 0x7f11d32fc8b0>)
2022-02-10 18:22:52,772 - cc_update_hostname.py[DEBUG]: Updating hostname to k3s-agent-3 (k3s-agent-3)
2022-02-10 18:22:52,773 - util.py[DEBUG]: Reading from /etc/hostname (quiet=False)
2022-02-10 18:22:52,773 - util.py[DEBUG]: Read 12 bytes from /etc/hostname
2022-02-10 18:22:52,773 - __init__.py[DEBUG]: Attempting to update hostname to k3s-agent-3 in 1 files
2022-02-10 18:22:52,773 - util.py[DEBUG]: Reading from /var/lib/cloud/data/previous-hostname (quiet=False)
2022-02-10 18:22:52,773 - util.py[DEBUG]: Writing to /var/lib/cloud/data/previous-hostname - wb: [644] 12 bytes
2022-02-10 18:22:52,774 - handlers.py[DEBUG]: finish: init-network/config-update_hostname: SUCCESS: config-update_hostname ran successfully
2022-02-10 18:22:52,774 - stages.py[DEBUG]: Running module update_etc_hosts (<module 'cloudinit.config.cc_update_etc_hosts' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_update_etc_hosts.py'>) with frequency once-per-instance
2022-02-10 18:22:52,774 - handlers.py[DEBUG]: start: init-network/config-update_etc_hosts: running config-update_etc_hosts with frequency once-per-instance
2022-02-10 18:22:52,774 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_update_etc_hosts - wb: [644] 24 bytes
2022-02-10 18:22:52,775 - helpers.py[DEBUG]: Running config-update_etc_hosts using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_update_etc_hosts'>)
2022-02-10 18:22:52,775 - util.py[DEBUG]: Reading from /etc/cloud/templates/hosts.debian.tmpl (quiet=False)
2022-02-10 18:22:52,776 - util.py[DEBUG]: Read 845 bytes from /etc/cloud/templates/hosts.debian.tmpl
2022-02-10 18:22:52,776 - templater.py[DEBUG]: Rendering content of '/etc/cloud/templates/hosts.debian.tmpl' using renderer jinja
2022-02-10 18:22:52,782 - util.py[DEBUG]: Writing to /etc/hosts - wb: [644] 549 bytes
2022-02-10 18:22:52,783 - handlers.py[DEBUG]: finish: init-network/config-update_etc_hosts: SUCCESS: config-update_etc_hosts ran successfully
2022-02-10 18:22:52,783 - stages.py[DEBUG]: Running module ca-certs (<module 'cloudinit.config.cc_ca_certs' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_ca_certs.py'>) with frequency once-per-instance
2022-02-10 18:22:52,783 - handlers.py[DEBUG]: start: init-network/config-ca-certs: running config-ca-certs with frequency once-per-instance
2022-02-10 18:22:52,783 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_ca_certs - wb: [644] 23 bytes
2022-02-10 18:22:52,784 - helpers.py[DEBUG]: Running config-ca-certs using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_ca_certs'>)
2022-02-10 18:22:52,784 - cc_ca_certs.py[DEBUG]: Skipping module named ca-certs, no 'ca-certs' key in configuration
2022-02-10 18:22:52,784 - handlers.py[DEBUG]: finish: init-network/config-ca-certs: SUCCESS: config-ca-certs ran successfully
2022-02-10 18:22:52,784 - stages.py[DEBUG]: Running module rsyslog (<module 'cloudinit.config.cc_rsyslog' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_rsyslog.py'>) with frequency once-per-instance
2022-02-10 18:22:52,784 - handlers.py[DEBUG]: start: init-network/config-rsyslog: running config-rsyslog with frequency once-per-instance
2022-02-10 18:22:52,785 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_rsyslog - wb: [644] 24 bytes
2022-02-10 18:22:52,785 - helpers.py[DEBUG]: Running config-rsyslog using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_rsyslog'>)
2022-02-10 18:22:52,785 - cc_rsyslog.py[DEBUG]: Skipping module named rsyslog, no 'rsyslog' key in configuration
2022-02-10 18:22:52,785 - handlers.py[DEBUG]: finish: init-network/config-rsyslog: SUCCESS: config-rsyslog ran successfully
2022-02-10 18:22:52,785 - stages.py[DEBUG]: Running module users-groups (<module 'cloudinit.config.cc_users_groups' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_users_groups.py'>) with frequency once-per-instance
2022-02-10 18:22:52,785 - handlers.py[DEBUG]: start: init-network/config-users-groups: running config-users-groups with frequency once-per-instance
2022-02-10 18:22:52,786 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_users_groups - wb: [644] 24 bytes
2022-02-10 18:22:52,786 - helpers.py[DEBUG]: Running config-users-groups using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_users_groups'>)
2022-02-10 18:22:52,786 - __init__.py[INFO]: User root already exists, skipping.
2022-02-10 18:22:52,787 - subp.py[DEBUG]: Running command ['passwd', '-l', 'root'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:52,798 - util.py[DEBUG]: Reading from /etc/sudoers (quiet=False)
2022-02-10 18:22:52,798 - util.py[DEBUG]: Read 755 bytes from /etc/sudoers
2022-02-10 18:22:52,800 - util.py[DEBUG]: Writing to /etc/sudoers.d/90-cloud-init-users - wb: [440] 135 bytes
2022-02-10 18:22:52,800 - handlers.py[DEBUG]: finish: init-network/config-users-groups: SUCCESS: config-users-groups ran successfully
2022-02-10 18:22:52,800 - stages.py[DEBUG]: Running module ssh (<module 'cloudinit.config.cc_ssh' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_ssh.py'>) with frequency once-per-instance
2022-02-10 18:22:52,801 - handlers.py[DEBUG]: start: init-network/config-ssh: running config-ssh with frequency once-per-instance
2022-02-10 18:22:52,801 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_ssh - wb: [644] 23 bytes
2022-02-10 18:22:52,802 - helpers.py[DEBUG]: Running config-ssh using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_ssh'>)
2022-02-10 18:22:52,803 - subp.py[DEBUG]: Running command ['ssh-keygen', '-t', 'rsa', '-N', '', '-f', '/etc/ssh/ssh_host_rsa_key'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:52,884 - util.py[DEBUG]: Group ssh_keys is not a valid group name
2022-02-10 18:22:52,884 - subp.py[DEBUG]: Running command ['ssh-keygen', '-t', 'dsa', '-N', '', '-f', '/etc/ssh/ssh_host_dsa_key'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:52,980 - util.py[DEBUG]: Group ssh_keys is not a valid group name
2022-02-10 18:22:52,981 - subp.py[DEBUG]: Running command ['ssh-keygen', '-t', 'ecdsa', '-N', '', '-f', '/etc/ssh/ssh_host_ecdsa_key'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:52,989 - util.py[DEBUG]: Group ssh_keys is not a valid group name
2022-02-10 18:22:52,990 - subp.py[DEBUG]: Running command ['ssh-keygen', '-t', 'ed25519', '-N', '', '-f', '/etc/ssh/ssh_host_ed25519_key'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:52,997 - util.py[DEBUG]: Group ssh_keys is not a valid group name
2022-02-10 18:22:52,998 - util.py[DEBUG]: Reading from /etc/ssh/ssh_host_rsa_key.pub (quiet=False)
2022-02-10 18:22:52,998 - util.py[DEBUG]: Read 570 bytes from /etc/ssh/ssh_host_rsa_key.pub
2022-02-10 18:22:52,998 - util.py[DEBUG]: Reading from /etc/ssh/ssh_host_ecdsa_key.pub (quiet=False)
2022-02-10 18:22:52,998 - util.py[DEBUG]: Read 178 bytes from /etc/ssh/ssh_host_ecdsa_key.pub
2022-02-10 18:22:52,998 - util.py[DEBUG]: Reading from /etc/ssh/ssh_host_ed25519_key.pub (quiet=False)
2022-02-10 18:22:52,998 - util.py[DEBUG]: Read 98 bytes from /etc/ssh/ssh_host_ed25519_key.pub
2022-02-10 18:22:52,999 - util.py[DEBUG]: Reading from /etc/ssh/sshd_config (quiet=False)
2022-02-10 18:22:52,999 - util.py[DEBUG]: Read 3301 bytes from /etc/ssh/sshd_config
2022-02-10 18:22:53,000 - util.py[DEBUG]: Reading from /root/.ssh/authorized_keys (quiet=False)
2022-02-10 18:22:53,000 - util.py[DEBUG]: Read 0 bytes from /root/.ssh/authorized_keys
2022-02-10 18:22:53,001 - util.py[DEBUG]: Writing to /root/.ssh/authorized_keys - wb: [600] 405 bytes
2022-02-10 18:22:53,001 - util.py[DEBUG]: Reading from /etc/ssh/sshd_config (quiet=False)
2022-02-10 18:22:53,001 - util.py[DEBUG]: Read 3301 bytes from /etc/ssh/sshd_config
2022-02-10 18:22:53,002 - util.py[DEBUG]: Reading from /root/.ssh/authorized_keys (quiet=False)
2022-02-10 18:22:53,002 - util.py[DEBUG]: Read 405 bytes from /root/.ssh/authorized_keys
2022-02-10 18:22:53,002 - util.py[DEBUG]: Writing to /root/.ssh/authorized_keys - wb: [600] 405 bytes
2022-02-10 18:22:53,002 - handlers.py[DEBUG]: finish: init-network/config-ssh: SUCCESS: config-ssh ran successfully
2022-02-10 18:22:53,003 - main.py[DEBUG]: Ran 15 modules with 0 failures
2022-02-10 18:22:53,003 - atomic_helper.py[DEBUG]: Atomically writing to file /var/lib/cloud/data/status.json (via temporary file /var/lib/cloud/data/tmp_olwl6u3) - w: [644] 518 bytes/chars
2022-02-10 18:22:53,003 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
2022-02-10 18:22:53,003 - util.py[DEBUG]: Read 11 bytes from /proc/uptime
2022-02-10 18:22:53,003 - util.py[DEBUG]: cloud-init mode 'init' took 1.039 seconds (1.04)
2022-02-10 18:22:53,003 - handlers.py[DEBUG]: finish: init-network: SUCCESS: searching for network datasources
2022-02-10 18:22:53,509 - util.py[DEBUG]: Cloud-init v. 21.4-0ubuntu1~20.04.1 running 'modules:config' at Thu, 10 Feb 2022 18:22:53 +0000. Up 10.22 seconds.
2022-02-10 18:22:53,529 - stages.py[DEBUG]: Using distro class <class 'cloudinit.distros.ubuntu.Distro'>
2022-02-10 18:22:53,530 - stages.py[DEBUG]: Running module emit_upstart (<module 'cloudinit.config.cc_emit_upstart' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_emit_upstart.py'>) with frequency always
2022-02-10 18:22:53,530 - handlers.py[DEBUG]: start: modules-config/config-emit_upstart: running config-emit_upstart with frequency always
2022-02-10 18:22:53,530 - helpers.py[DEBUG]: Running config-emit_upstart using lock (<cloudinit.helpers.DummyLock object at 0x7f7c6c0c40d0>)
2022-02-10 18:22:53,531 - cc_emit_upstart.py[DEBUG]: no /sbin/initctl located
2022-02-10 18:22:53,531 - cc_emit_upstart.py[DEBUG]: not upstart system, 'emit_upstart' disabled
2022-02-10 18:22:53,531 - handlers.py[DEBUG]: finish: modules-config/config-emit_upstart: SUCCESS: config-emit_upstart ran successfully
2022-02-10 18:22:53,531 - stages.py[DEBUG]: Running module snap (<module 'cloudinit.config.cc_snap' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_snap.py'>) with frequency once-per-instance
2022-02-10 18:22:53,531 - handlers.py[DEBUG]: start: modules-config/config-snap: running config-snap with frequency once-per-instance
2022-02-10 18:22:53,531 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_snap - wb: [644] 24 bytes
2022-02-10 18:22:53,532 - helpers.py[DEBUG]: Running config-snap using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_snap'>)
2022-02-10 18:22:53,532 - cc_snap.py[DEBUG]: Skipping module named snap, no 'snap' key in configuration
2022-02-10 18:22:53,532 - handlers.py[DEBUG]: finish: modules-config/config-snap: SUCCESS: config-snap ran successfully
2022-02-10 18:22:53,532 - stages.py[DEBUG]: Running module ssh-import-id (<module 'cloudinit.config.cc_ssh_import_id' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_ssh_import_id.py'>) with frequency once-per-instance
2022-02-10 18:22:53,532 - handlers.py[DEBUG]: start: modules-config/config-ssh-import-id: running config-ssh-import-id with frequency once-per-instance
2022-02-10 18:22:53,532 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_ssh_import_id - wb: [644] 24 bytes
2022-02-10 18:22:53,533 - helpers.py[DEBUG]: Running config-ssh-import-id using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_ssh_import_id'>)
2022-02-10 18:22:53,533 - handlers.py[DEBUG]: finish: modules-config/config-ssh-import-id: SUCCESS: config-ssh-import-id ran successfully
2022-02-10 18:22:53,533 - stages.py[DEBUG]: Running module locale (<module 'cloudinit.config.cc_locale' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_locale.py'>) with frequency once-per-instance
2022-02-10 18:22:53,533 - handlers.py[DEBUG]: start: modules-config/config-locale: running config-locale with frequency once-per-instance
2022-02-10 18:22:53,533 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_locale - wb: [644] 23 bytes
2022-02-10 18:22:53,534 - helpers.py[DEBUG]: Running config-locale using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_locale'>)
2022-02-10 18:22:53,534 - util.py[DEBUG]: Reading from /etc/default/locale (quiet=False)
2022-02-10 18:22:53,534 - util.py[DEBUG]: Read 17 bytes from /etc/default/locale
2022-02-10 18:22:53,551 - cc_locale.py[DEBUG]: Setting locale to en_US.UTF-8
2022-02-10 18:22:53,551 - debian.py[DEBUG]: System has 'LANG=en_US.UTF-8' requested 'en_US.UTF-8', skipping regeneration.
2022-02-10 18:22:53,551 - handlers.py[DEBUG]: finish: modules-config/config-locale: SUCCESS: config-locale ran successfully
2022-02-10 18:22:53,551 - stages.py[DEBUG]: Running module set-passwords (<module 'cloudinit.config.cc_set_passwords' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_set_passwords.py'>) with frequency once-per-instance
2022-02-10 18:22:53,552 - handlers.py[DEBUG]: start: modules-config/config-set-passwords: running config-set-passwords with frequency once-per-instance
2022-02-10 18:22:53,552 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_set_passwords - wb: [644] 24 bytes
2022-02-10 18:22:53,552 - helpers.py[DEBUG]: Running config-set-passwords using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_set_passwords'>)
2022-02-10 18:22:53,552 - cc_set_passwords.py[DEBUG]: Leaving SSH config 'PasswordAuthentication' unchanged. ssh_pwauth=None
2022-02-10 18:22:53,552 - handlers.py[DEBUG]: finish: modules-config/config-set-passwords: SUCCESS: config-set-passwords ran successfully
2022-02-10 18:22:53,552 - stages.py[DEBUG]: Running module grub-dpkg (<module 'cloudinit.config.cc_grub_dpkg' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_grub_dpkg.py'>) with frequency once-per-instance
2022-02-10 18:22:53,552 - handlers.py[DEBUG]: start: modules-config/config-grub-dpkg: running config-grub-dpkg with frequency once-per-instance
2022-02-10 18:22:53,552 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_grub_dpkg - wb: [644] 24 bytes
2022-02-10 18:22:53,553 - helpers.py[DEBUG]: Running config-grub-dpkg using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_grub_dpkg'>)
2022-02-10 18:22:53,553 - subp.py[DEBUG]: Running command ['grub-probe', '-t', 'disk', '/boot'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:53,560 - subp.py[DEBUG]: Running command ['udevadm', 'info', '--root', '--query=symlink', '/dev/sda'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:53,563 - cc_grub_dpkg.py[DEBUG]: considering these device symlinks: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0,/dev/disk/by-path/pci-0000:06:00.0-scsi-0:0:0:0
2022-02-10 18:22:53,564 - cc_grub_dpkg.py[DEBUG]: filtered to these disk/by-id symlinks: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0
2022-02-10 18:22:53,564 - cc_grub_dpkg.py[DEBUG]: selected /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0
2022-02-10 18:22:53,564 - cc_grub_dpkg.py[DEBUG]: Setting grub debconf-set-selections with '/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0','false'
2022-02-10 18:22:53,564 - subp.py[DEBUG]: Running command ['debconf-set-selections'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:53,721 - handlers.py[DEBUG]: finish: modules-config/config-grub-dpkg: SUCCESS: config-grub-dpkg ran successfully
2022-02-10 18:22:53,721 - stages.py[DEBUG]: Running module apt-pipelining (<module 'cloudinit.config.cc_apt_pipelining' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_pipelining.py'>) with frequency once-per-instance
2022-02-10 18:22:53,722 - handlers.py[DEBUG]: start: modules-config/config-apt-pipelining: running config-apt-pipelining with frequency once-per-instance
2022-02-10 18:22:53,722 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_apt_pipelining - wb: [644] 23 bytes
2022-02-10 18:22:53,723 - helpers.py[DEBUG]: Running config-apt-pipelining using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_apt_pipelining'>)
2022-02-10 18:22:53,723 - handlers.py[DEBUG]: finish: modules-config/config-apt-pipelining: SUCCESS: config-apt-pipelining ran successfully
2022-02-10 18:22:53,723 - stages.py[DEBUG]: Running module apt-configure (<module 'cloudinit.config.cc_apt_configure' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py'>) with frequency once-per-instance
2022-02-10 18:22:53,723 - handlers.py[DEBUG]: start: modules-config/config-apt-configure: running config-apt-configure with frequency once-per-instance
2022-02-10 18:22:53,723 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_apt_configure - wb: [644] 24 bytes
2022-02-10 18:22:53,724 - helpers.py[DEBUG]: Running config-apt-configure using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_apt_configure'>)
2022-02-10 18:22:53,725 - cc_apt_configure.py[DEBUG]: debconf_selections was not set in config
2022-02-10 18:22:53,725 - util.py[DEBUG]: Reading from /etc/os-release (quiet=True)
2022-02-10 18:22:53,725 - util.py[DEBUG]: Read 382 bytes from /etc/os-release
2022-02-10 18:22:53,726 - util.py[DEBUG]: Reading from /etc/system-image/channel.ini (quiet=True)
2022-02-10 18:22:53,726 - util.py[DEBUG]: Read 0 bytes from /etc/system-image/channel.ini
2022-02-10 18:22:53,726 - cc_apt_configure.py[DEBUG]: handling apt config: {}
2022-02-10 18:22:53,726 - subp.py[DEBUG]: Running command ['lsb_release', '--all'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:53,782 - subp.py[DEBUG]: Running command ['dpkg', '--print-architecture'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:53,787 - cc_apt_configure.py[DEBUG]: got primary mirror: None
2022-02-10 18:22:53,787 - cc_apt_configure.py[DEBUG]: got security mirror: None
2022-02-10 18:22:53,787 - subp.py[DEBUG]: Running command ['dpkg', '--print-architecture'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:53,790 - util.py[DEBUG]: search for mirror in candidates: '['https://mirror.hetzner.com/ubuntu/packages']'
2022-02-10 18:22:53,794 - util.py[DEBUG]: Resolving URL: https://mirror.hetzner.com/ubuntu/packages took 0.004 seconds
2022-02-10 18:22:53,794 - util.py[DEBUG]: found working mirror: 'https://mirror.hetzner.com/ubuntu/packages'
2022-02-10 18:22:53,794 - util.py[DEBUG]: search for mirror in candidates: '['https://mirror.hetzner.com/ubuntu/security']'
2022-02-10 18:22:53,795 - util.py[DEBUG]: Resolving URL: https://mirror.hetzner.com/ubuntu/security took 0.000 seconds
2022-02-10 18:22:53,795 - util.py[DEBUG]: found working mirror: 'https://mirror.hetzner.com/ubuntu/security'
2022-02-10 18:22:53,795 - __init__.py[DEBUG]: filtered distro mirror info: {'primary': 'https://mirror.hetzner.com/ubuntu/packages', 'security': 'https://mirror.hetzner.com/ubuntu/security'}
2022-02-10 18:22:53,795 - cc_apt_configure.py[DEBUG]: Apt Mirror info: {'primary': 'https://mirror.hetzner.com/ubuntu/packages', 'security': 'https://mirror.hetzner.com/ubuntu/security', 'PRIMARY': 'https://mirror.hetzner.com/ubuntu/packages', 'SECURITY': 'https://mirror.hetzner.com/ubuntu/security', 'MIRROR': 'https://mirror.hetzner.com/ubuntu/packages'}
2022-02-10 18:22:53,795 - cc_apt_configure.py[INFO]: No custom template provided, fall back to builtin
2022-02-10 18:22:53,795 - util.py[DEBUG]: Reading from /etc/cloud/templates/sources.list.ubuntu.tmpl (quiet=False)
2022-02-10 18:22:53,795 - util.py[DEBUG]: Read 2858 bytes from /etc/cloud/templates/sources.list.ubuntu.tmpl
2022-02-10 18:22:53,804 - util.py[DEBUG]: Writing to /etc/apt/sources.list - wb: [644] 3314 bytes
2022-02-10 18:22:53,808 - handlers.py[DEBUG]: finish: modules-config/config-apt-configure: SUCCESS: config-apt-configure ran successfully
2022-02-10 18:22:53,808 - stages.py[DEBUG]: Running module ubuntu-advantage (<module 'cloudinit.config.cc_ubuntu_advantage' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_ubuntu_advantage.py'>) with frequency once-per-instance
2022-02-10 18:22:53,808 - handlers.py[DEBUG]: start: modules-config/config-ubuntu-advantage: running config-ubuntu-advantage with frequency once-per-instance
2022-02-10 18:22:53,808 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_ubuntu_advantage - wb: [644] 24 bytes
2022-02-10 18:22:53,809 - helpers.py[DEBUG]: Running config-ubuntu-advantage using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_ubuntu_advantage'>)
2022-02-10 18:22:53,809 - cc_ubuntu_advantage.py[DEBUG]: Skipping module named ubuntu-advantage, no 'ubuntu_advantage' configuration found
2022-02-10 18:22:53,809 - handlers.py[DEBUG]: finish: modules-config/config-ubuntu-advantage: SUCCESS: config-ubuntu-advantage ran successfully
2022-02-10 18:22:53,809 - stages.py[DEBUG]: Running module ntp (<module 'cloudinit.config.cc_ntp' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_ntp.py'>) with frequency once-per-instance
2022-02-10 18:22:53,810 - handlers.py[DEBUG]: start: modules-config/config-ntp: running config-ntp with frequency once-per-instance
2022-02-10 18:22:53,810 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_ntp - wb: [644] 24 bytes
2022-02-10 18:22:53,810 - helpers.py[DEBUG]: Running config-ntp using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_ntp'>)
2022-02-10 18:22:53,810 - cc_ntp.py[DEBUG]: Skipping module named ntp, not present or disabled by cfg
2022-02-10 18:22:53,811 - handlers.py[DEBUG]: finish: modules-config/config-ntp: SUCCESS: config-ntp ran successfully
2022-02-10 18:22:53,811 - stages.py[DEBUG]: Running module timezone (<module 'cloudinit.config.cc_timezone' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_timezone.py'>) with frequency once-per-instance
2022-02-10 18:22:53,811 - handlers.py[DEBUG]: start: modules-config/config-timezone: running config-timezone with frequency once-per-instance
2022-02-10 18:22:53,811 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_timezone - wb: [644] 24 bytes
2022-02-10 18:22:53,812 - helpers.py[DEBUG]: Running config-timezone using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_timezone'>)
2022-02-10 18:22:53,812 - cc_timezone.py[DEBUG]: Skipping module named timezone, no 'timezone' specified
2022-02-10 18:22:53,812 - handlers.py[DEBUG]: finish: modules-config/config-timezone: SUCCESS: config-timezone ran successfully
2022-02-10 18:22:53,812 - stages.py[DEBUG]: Running module disable-ec2-metadata (<module 'cloudinit.config.cc_disable_ec2_metadata' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_disable_ec2_metadata.py'>) with frequency always
2022-02-10 18:22:53,812 - handlers.py[DEBUG]: start: modules-config/config-disable-ec2-metadata: running config-disable-ec2-metadata with frequency always
2022-02-10 18:22:53,812 - helpers.py[DEBUG]: Running config-disable-ec2-metadata using lock (<cloudinit.helpers.DummyLock object at 0x7f7c6be815b0>)
2022-02-10 18:22:53,812 - cc_disable_ec2_metadata.py[DEBUG]: Skipping module named disable-ec2-metadata, disabling the ec2 route not enabled
2022-02-10 18:22:53,813 - handlers.py[DEBUG]: finish: modules-config/config-disable-ec2-metadata: SUCCESS: config-disable-ec2-metadata ran successfully
2022-02-10 18:22:53,813 - stages.py[DEBUG]: Running module runcmd (<module 'cloudinit.config.cc_runcmd' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_runcmd.py'>) with frequency once-per-instance
2022-02-10 18:22:53,813 - handlers.py[DEBUG]: start: modules-config/config-runcmd: running config-runcmd with frequency once-per-instance
2022-02-10 18:22:53,813 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_runcmd - wb: [644] 24 bytes
2022-02-10 18:22:53,814 - helpers.py[DEBUG]: Running config-runcmd using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_runcmd'>)
2022-02-10 18:22:53,815 - util.py[DEBUG]: Shellified 1 commands.
2022-02-10 18:22:53,815 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/scripts/runcmd - wb: [700] 87 bytes
2022-02-10 18:22:53,816 - handlers.py[DEBUG]: finish: modules-config/config-runcmd: SUCCESS: config-runcmd ran successfully
2022-02-10 18:22:53,816 - stages.py[DEBUG]: Running module byobu (<module 'cloudinit.config.cc_byobu' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_byobu.py'>) with frequency once-per-instance
2022-02-10 18:22:53,816 - handlers.py[DEBUG]: start: modules-config/config-byobu: running config-byobu with frequency once-per-instance
2022-02-10 18:22:53,816 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_byobu - wb: [644] 24 bytes
2022-02-10 18:22:53,817 - helpers.py[DEBUG]: Running config-byobu using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_byobu'>)
2022-02-10 18:22:53,817 - cc_byobu.py[DEBUG]: Skipping module named byobu, no 'byobu' values found
2022-02-10 18:22:53,817 - handlers.py[DEBUG]: finish: modules-config/config-byobu: SUCCESS: config-byobu ran successfully
2022-02-10 18:22:53,817 - main.py[DEBUG]: Ran 14 modules with 0 failures
2022-02-10 18:22:53,817 - atomic_helper.py[DEBUG]: Atomically writing to file /var/lib/cloud/data/status.json (via temporary file /var/lib/cloud/data/tmp4x24mqwu) - w: [644] 546 bytes/chars
2022-02-10 18:22:53,818 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
2022-02-10 18:22:53,818 - util.py[DEBUG]: Read 12 bytes from /proc/uptime
2022-02-10 18:22:53,818 - util.py[DEBUG]: cloud-init mode 'modules' took 0.406 seconds (0.41)
2022-02-10 18:22:53,818 - handlers.py[DEBUG]: finish: modules-config: SUCCESS: running modules for config
2022-02-10 18:22:54,235 - util.py[DEBUG]: Cloud-init v. 21.4-0ubuntu1~20.04.1 running 'modules:final' at Thu, 10 Feb 2022 18:22:54 +0000. Up 10.95 seconds.
2022-02-10 18:22:54,253 - stages.py[DEBUG]: Using distro class <class 'cloudinit.distros.ubuntu.Distro'>
2022-02-10 18:22:54,254 - stages.py[DEBUG]: Running module package-update-upgrade-install (<module 'cloudinit.config.cc_package_update_upgrade_install' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_package_update_upgrade_install.py'>) with frequency once-per-instance
2022-02-10 18:22:54,254 - handlers.py[DEBUG]: start: modules-final/config-package-update-upgrade-install: running config-package-update-upgrade-install with frequency once-per-instance
2022-02-10 18:22:54,254 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_package_update_upgrade_install - wb: [644] 24 bytes
2022-02-10 18:22:54,255 - helpers.py[DEBUG]: Running config-package-update-upgrade-install using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_package_update_upgrade_install'>)
2022-02-10 18:22:54,255 - handlers.py[DEBUG]: finish: modules-final/config-package-update-upgrade-install: SUCCESS: config-package-update-upgrade-install ran successfully
2022-02-10 18:22:54,255 - stages.py[DEBUG]: Running module fan (<module 'cloudinit.config.cc_fan' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_fan.py'>) with frequency once-per-instance
2022-02-10 18:22:54,255 - handlers.py[DEBUG]: start: modules-final/config-fan: running config-fan with frequency once-per-instance
2022-02-10 18:22:54,255 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_fan - wb: [644] 23 bytes
2022-02-10 18:22:54,255 - helpers.py[DEBUG]: Running config-fan using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_fan'>)
2022-02-10 18:22:54,255 - cc_fan.py[DEBUG]: fan: no 'fan' config entry. disabling
2022-02-10 18:22:54,255 - handlers.py[DEBUG]: finish: modules-final/config-fan: SUCCESS: config-fan ran successfully
2022-02-10 18:22:54,255 - stages.py[DEBUG]: Running module landscape (<module 'cloudinit.config.cc_landscape' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_landscape.py'>) with frequency once-per-instance
2022-02-10 18:22:54,256 - handlers.py[DEBUG]: start: modules-final/config-landscape: running config-landscape with frequency once-per-instance
2022-02-10 18:22:54,256 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_landscape - wb: [644] 24 bytes
2022-02-10 18:22:54,256 - helpers.py[DEBUG]: Running config-landscape using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_landscape'>)
2022-02-10 18:22:54,256 - handlers.py[DEBUG]: finish: modules-final/config-landscape: SUCCESS: config-landscape ran successfully
2022-02-10 18:22:54,256 - stages.py[DEBUG]: Running module lxd (<module 'cloudinit.config.cc_lxd' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_lxd.py'>) with frequency once-per-instance
2022-02-10 18:22:54,256 - handlers.py[DEBUG]: start: modules-final/config-lxd: running config-lxd with frequency once-per-instance
2022-02-10 18:22:54,256 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_lxd - wb: [644] 24 bytes
2022-02-10 18:22:54,256 - helpers.py[DEBUG]: Running config-lxd using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_lxd'>)
2022-02-10 18:22:54,256 - cc_lxd.py[DEBUG]: Skipping module named lxd, not present or disabled by cfg
2022-02-10 18:22:54,256 - handlers.py[DEBUG]: finish: modules-final/config-lxd: SUCCESS: config-lxd ran successfully
2022-02-10 18:22:54,257 - stages.py[DEBUG]: Running module ubuntu-drivers (<module 'cloudinit.config.cc_ubuntu_drivers' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_ubuntu_drivers.py'>) with frequency once-per-instance
2022-02-10 18:22:54,257 - handlers.py[DEBUG]: start: modules-final/config-ubuntu-drivers: running config-ubuntu-drivers with frequency once-per-instance
2022-02-10 18:22:54,257 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_ubuntu_drivers - wb: [644] 24 bytes
2022-02-10 18:22:54,257 - helpers.py[DEBUG]: Running config-ubuntu-drivers using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_ubuntu_drivers'>)
2022-02-10 18:22:54,257 - cc_ubuntu_drivers.py[DEBUG]: Skipping module named ubuntu-drivers, no 'drivers' key in config
2022-02-10 18:22:54,257 - handlers.py[DEBUG]: finish: modules-final/config-ubuntu-drivers: SUCCESS: config-ubuntu-drivers ran successfully
2022-02-10 18:22:54,257 - stages.py[DEBUG]: Running module puppet (<module 'cloudinit.config.cc_puppet' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_puppet.py'>) with frequency once-per-instance
2022-02-10 18:22:54,257 - handlers.py[DEBUG]: start: modules-final/config-puppet: running config-puppet with frequency once-per-instance
2022-02-10 18:22:54,257 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_puppet - wb: [644] 22 bytes
2022-02-10 18:22:54,257 - helpers.py[DEBUG]: Running config-puppet using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_puppet'>)
2022-02-10 18:22:54,258 - cc_puppet.py[DEBUG]: Skipping module named puppet, no 'puppet' configuration found
2022-02-10 18:22:54,258 - handlers.py[DEBUG]: finish: modules-final/config-puppet: SUCCESS: config-puppet ran successfully
2022-02-10 18:22:54,258 - stages.py[DEBUG]: Running module chef (<module 'cloudinit.config.cc_chef' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_chef.py'>) with frequency always
2022-02-10 18:22:54,258 - handlers.py[DEBUG]: start: modules-final/config-chef: running config-chef with frequency always
2022-02-10 18:22:54,258 - helpers.py[DEBUG]: Running config-chef using lock (<cloudinit.helpers.DummyLock object at 0x7f025a743280>)
2022-02-10 18:22:54,258 - cc_chef.py[DEBUG]: Skipping module named chef, no 'chef' key in configuration
2022-02-10 18:22:54,258 - handlers.py[DEBUG]: finish: modules-final/config-chef: SUCCESS: config-chef ran successfully
2022-02-10 18:22:54,258 - stages.py[DEBUG]: Running module mcollective (<module 'cloudinit.config.cc_mcollective' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_mcollective.py'>) with frequency once-per-instance
2022-02-10 18:22:54,258 - handlers.py[DEBUG]: start: modules-final/config-mcollective: running config-mcollective with frequency once-per-instance
2022-02-10 18:22:54,258 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_mcollective - wb: [644] 24 bytes
2022-02-10 18:22:54,258 - helpers.py[DEBUG]: Running config-mcollective using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_mcollective'>)
2022-02-10 18:22:54,258 - cc_mcollective.py[DEBUG]: Skipping module named mcollective, no 'mcollective' key in configuration
2022-02-10 18:22:54,258 - handlers.py[DEBUG]: finish: modules-final/config-mcollective: SUCCESS: config-mcollective ran successfully
2022-02-10 18:22:54,258 - stages.py[DEBUG]: Running module salt-minion (<module 'cloudinit.config.cc_salt_minion' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_salt_minion.py'>) with frequency once-per-instance
2022-02-10 18:22:54,259 - handlers.py[DEBUG]: start: modules-final/config-salt-minion: running config-salt-minion with frequency once-per-instance
2022-02-10 18:22:54,259 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_salt_minion - wb: [644] 23 bytes
2022-02-10 18:22:54,259 - helpers.py[DEBUG]: Running config-salt-minion using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_salt_minion'>)
2022-02-10 18:22:54,259 - cc_salt_minion.py[DEBUG]: Skipping module named salt-minion, no 'salt_minion' key in configuration
2022-02-10 18:22:54,259 - handlers.py[DEBUG]: finish: modules-final/config-salt-minion: SUCCESS: config-salt-minion ran successfully
2022-02-10 18:22:54,259 - stages.py[DEBUG]: Running module reset_rmc (<module 'cloudinit.config.cc_reset_rmc' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_reset_rmc.py'>) with frequency once-per-instance
2022-02-10 18:22:54,259 - handlers.py[DEBUG]: start: modules-final/config-reset_rmc: running config-reset_rmc with frequency once-per-instance
2022-02-10 18:22:54,259 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_reset_rmc - wb: [644] 21 bytes
2022-02-10 18:22:54,259 - helpers.py[DEBUG]: Running config-reset_rmc using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_reset_rmc'>)
2022-02-10 18:22:54,260 - cc_reset_rmc.py[DEBUG]: module disabled, RSCT_PATH not present
2022-02-10 18:22:54,260 - handlers.py[DEBUG]: finish: modules-final/config-reset_rmc: SUCCESS: config-reset_rmc ran successfully
2022-02-10 18:22:54,260 - stages.py[DEBUG]: Running module refresh_rmc_and_interface (<module 'cloudinit.config.cc_refresh_rmc_and_interface' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_refresh_rmc_and_interface.py'>) with frequency always
2022-02-10 18:22:54,260 - handlers.py[DEBUG]: start: modules-final/config-refresh_rmc_and_interface: running config-refresh_rmc_and_interface with frequency always
2022-02-10 18:22:54,260 - helpers.py[DEBUG]: Running config-refresh_rmc_and_interface using lock (<cloudinit.helpers.DummyLock object at 0x7f025a743400>)
2022-02-10 18:22:54,260 - cc_refresh_rmc_and_interface.py[DEBUG]: No 'rmcctrl' in path, disabled
2022-02-10 18:22:54,260 - handlers.py[DEBUG]: finish: modules-final/config-refresh_rmc_and_interface: SUCCESS: config-refresh_rmc_and_interface ran successfully
2022-02-10 18:22:54,260 - stages.py[DEBUG]: Running module rightscale_userdata (<module 'cloudinit.config.cc_rightscale_userdata' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_rightscale_userdata.py'>) with frequency once-per-instance
2022-02-10 18:22:54,260 - handlers.py[DEBUG]: start: modules-final/config-rightscale_userdata: running config-rightscale_userdata with frequency once-per-instance
2022-02-10 18:22:54,261 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_rightscale_userdata - wb: [644] 24 bytes
2022-02-10 18:22:54,261 - helpers.py[DEBUG]: Running config-rightscale_userdata using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_rightscale_userdata'>)
2022-02-10 18:22:54,261 - cc_rightscale_userdata.py[DEBUG]: Failed to get raw userdata in module rightscale_userdata
2022-02-10 18:22:54,261 - handlers.py[DEBUG]: finish: modules-final/config-rightscale_userdata: SUCCESS: config-rightscale_userdata ran successfully
2022-02-10 18:22:54,261 - stages.py[DEBUG]: Running module scripts-vendor (<module 'cloudinit.config.cc_scripts_vendor' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_scripts_vendor.py'>) with frequency once-per-instance
2022-02-10 18:22:54,261 - handlers.py[DEBUG]: start: modules-final/config-scripts-vendor: running config-scripts-vendor with frequency once-per-instance
2022-02-10 18:22:54,261 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_scripts_vendor - wb: [644] 24 bytes
2022-02-10 18:22:54,261 - helpers.py[DEBUG]: Running config-scripts-vendor using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_scripts_vendor'>)
2022-02-10 18:22:54,261 - handlers.py[DEBUG]: finish: modules-final/config-scripts-vendor: SUCCESS: config-scripts-vendor ran successfully
2022-02-10 18:22:54,261 - stages.py[DEBUG]: Running module scripts-per-once (<module 'cloudinit.config.cc_scripts_per_once' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_scripts_per_once.py'>) with frequency once
2022-02-10 18:22:54,262 - handlers.py[DEBUG]: start: modules-final/config-scripts-per-once: running config-scripts-per-once with frequency once
2022-02-10 18:22:54,262 - helpers.py[DEBUG]: config-scripts-per-once already ran (freq=once)
2022-02-10 18:22:54,262 - handlers.py[DEBUG]: finish: modules-final/config-scripts-per-once: SUCCESS: config-scripts-per-once previously ran
2022-02-10 18:22:54,262 - stages.py[DEBUG]: Running module scripts-per-boot (<module 'cloudinit.config.cc_scripts_per_boot' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_scripts_per_boot.py'>) with frequency always
2022-02-10 18:22:54,262 - handlers.py[DEBUG]: start: modules-final/config-scripts-per-boot: running config-scripts-per-boot with frequency always
2022-02-10 18:22:54,262 - helpers.py[DEBUG]: Running config-scripts-per-boot using lock (<cloudinit.helpers.DummyLock object at 0x7f025a6a02e0>)
2022-02-10 18:22:54,262 - handlers.py[DEBUG]: finish: modules-final/config-scripts-per-boot: SUCCESS: config-scripts-per-boot ran successfully
2022-02-10 18:22:54,262 - stages.py[DEBUG]: Running module scripts-per-instance (<module 'cloudinit.config.cc_scripts_per_instance' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_scripts_per_instance.py'>) with frequency once-per-instance
2022-02-10 18:22:54,262 - handlers.py[DEBUG]: start: modules-final/config-scripts-per-instance: running config-scripts-per-instance with frequency once-per-instance
2022-02-10 18:22:54,262 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_scripts_per_instance - wb: [644] 24 bytes
2022-02-10 18:22:54,263 - helpers.py[DEBUG]: Running config-scripts-per-instance using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_scripts_per_instance'>)
2022-02-10 18:22:54,263 - handlers.py[DEBUG]: finish: modules-final/config-scripts-per-instance: SUCCESS: config-scripts-per-instance ran successfully
2022-02-10 18:22:54,263 - stages.py[DEBUG]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_scripts_user.py'>) with frequency once-per-instance
2022-02-10 18:22:54,263 - handlers.py[DEBUG]: start: modules-final/config-scripts-user: running config-scripts-user with frequency once-per-instance
2022-02-10 18:22:54,263 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_scripts_user - wb: [644] 24 bytes
2022-02-10 18:22:54,263 - helpers.py[DEBUG]: Running config-scripts-user using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_scripts_user'>)
2022-02-10 18:22:54,264 - subp.py[DEBUG]: Running command ['/var/lib/cloud/instance/scripts/runcmd'] with allowed return codes [0] (shell=False, capture=False)
2022-02-10 18:22:54,271 - handlers.py[DEBUG]: finish: modules-final/config-scripts-user: SUCCESS: config-scripts-user ran successfully
2022-02-10 18:22:54,271 - stages.py[DEBUG]: Running module ssh-authkey-fingerprints (<module 'cloudinit.config.cc_ssh_authkey_fingerprints' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_ssh_authkey_fingerprints.py'>) with frequency once-per-instance
2022-02-10 18:22:54,272 - handlers.py[DEBUG]: start: modules-final/config-ssh-authkey-fingerprints: running config-ssh-authkey-fingerprints with frequency once-per-instance
2022-02-10 18:22:54,272 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_ssh_authkey_fingerprints - wb: [644] 24 bytes
2022-02-10 18:22:54,272 - helpers.py[DEBUG]: Running config-ssh-authkey-fingerprints using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_ssh_authkey_fingerprints'>)
2022-02-10 18:22:54,273 - util.py[DEBUG]: Reading from /etc/ssh/sshd_config (quiet=False)
2022-02-10 18:22:54,273 - util.py[DEBUG]: Read 3301 bytes from /etc/ssh/sshd_config
2022-02-10 18:22:54,273 - util.py[DEBUG]: Reading from /root/.ssh/authorized_keys (quiet=False)
2022-02-10 18:22:54,273 - util.py[DEBUG]: Read 405 bytes from /root/.ssh/authorized_keys
2022-02-10 18:22:54,283 - handlers.py[DEBUG]: finish: modules-final/config-ssh-authkey-fingerprints: SUCCESS: config-ssh-authkey-fingerprints ran successfully
2022-02-10 18:22:54,284 - stages.py[DEBUG]: Running module keys-to-console (<module 'cloudinit.config.cc_keys_to_console' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_keys_to_console.py'>) with frequency once-per-instance
2022-02-10 18:22:54,284 - handlers.py[DEBUG]: start: modules-final/config-keys-to-console: running config-keys-to-console with frequency once-per-instance
2022-02-10 18:22:54,284 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_keys_to_console - wb: [644] 21 bytes
2022-02-10 18:22:54,284 - helpers.py[DEBUG]: Running config-keys-to-console using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_keys_to_console'>)
2022-02-10 18:22:54,284 - subp.py[DEBUG]: Running command ['/usr/lib/cloud-init/write-ssh-key-fingerprints', '', 'ssh-dss'] with allowed return codes [0] (shell=False, capture=True)
2022-02-10 18:22:54,327 - handlers.py[DEBUG]: finish: modules-final/config-keys-to-console: SUCCESS: config-keys-to-console ran successfully
2022-02-10 18:22:54,327 - stages.py[DEBUG]: Running module phone-home (<module 'cloudinit.config.cc_phone_home' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_phone_home.py'>) with frequency once-per-instance
2022-02-10 18:22:54,328 - handlers.py[DEBUG]: start: modules-final/config-phone-home: running config-phone-home with frequency once-per-instance
2022-02-10 18:22:54,328 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_phone_home - wb: [644] 23 bytes
2022-02-10 18:22:54,329 - helpers.py[DEBUG]: Running config-phone-home using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_phone_home'>)
2022-02-10 18:22:54,329 - cc_phone_home.py[DEBUG]: Skipping module named phone-home, no 'phone_home' configuration found
2022-02-10 18:22:54,329 - handlers.py[DEBUG]: finish: modules-final/config-phone-home: SUCCESS: config-phone-home ran successfully
2022-02-10 18:22:54,329 - stages.py[DEBUG]: Running module final-message (<module 'cloudinit.config.cc_final_message' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_final_message.py'>) with frequency always
2022-02-10 18:22:54,330 - handlers.py[DEBUG]: start: modules-final/config-final-message: running config-final-message with frequency always
2022-02-10 18:22:54,330 - helpers.py[DEBUG]: Running config-final-message using lock (<cloudinit.helpers.DummyLock object at 0x7f025a6ced90>)
2022-02-10 18:22:54,330 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
2022-02-10 18:22:54,330 - util.py[DEBUG]: Read 12 bytes from /proc/uptime
2022-02-10 18:22:54,334 - util.py[DEBUG]: Cloud-init v. 21.4-0ubuntu1~20.04.1 finished at Thu, 10 Feb 2022 18:22:54 +0000. Datasource DataSourceHetzner.  Up 11.14 seconds
2022-02-10 18:22:54,334 - util.py[DEBUG]: Writing to /var/lib/cloud/instance/boot-finished - wb: [644] 67 bytes
2022-02-10 18:22:54,335 - handlers.py[DEBUG]: finish: modules-final/config-final-message: SUCCESS: config-final-message ran successfully
2022-02-10 18:22:54,335 - stages.py[DEBUG]: Running module power-state-change (<module 'cloudinit.config.cc_power_state_change' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_power_state_change.py'>) with frequency once-per-instance
2022-02-10 18:22:54,335 - handlers.py[DEBUG]: start: modules-final/config-power-state-change: running config-power-state-change with frequency once-per-instance
2022-02-10 18:22:54,335 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/17883645/sem/config_power_state_change - wb: [644] 24 bytes
2022-02-10 18:22:54,336 - helpers.py[DEBUG]: Running config-power-state-change using lock (<FileLock using file '/var/lib/cloud/instances/17883645/sem/config_power_state_change'>)
2022-02-10 18:22:54,336 - cc_power_state_change.py[DEBUG]: no power_state provided. doing nothing
2022-02-10 18:22:54,336 - handlers.py[DEBUG]: finish: modules-final/config-power-state-change: SUCCESS: config-power-state-change ran successfully
2022-02-10 18:22:54,336 - main.py[DEBUG]: Ran 22 modules with 0 failures
2022-02-10 18:22:54,336 - atomic_helper.py[DEBUG]: Atomically writing to file /var/lib/cloud/data/status.json (via temporary file /var/lib/cloud/data/tmpd1juxowf) - w: [644] 574 bytes/chars
2022-02-10 18:22:54,337 - atomic_helper.py[DEBUG]: Atomically writing to file /var/lib/cloud/data/result.json (via temporary file /var/lib/cloud/data/tmpxcmop6op) - w: [644] 68 bytes/chars
2022-02-10 18:22:54,337 - util.py[DEBUG]: Creating symbolic link from '/run/cloud-init/result.json' => '../../var/lib/cloud/data/result.json'
2022-02-10 18:22:54,337 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
2022-02-10 18:22:54,337 - util.py[DEBUG]: Read 12 bytes from /proc/uptime
2022-02-10 18:22:54,337 - util.py[DEBUG]: cloud-init mode 'modules' took 0.199 seconds (0.19)
2022-02-10 18:22:54,337 - handlers.py[DEBUG]: finish: modules-final: SUCCESS: running modules for final
root@k3s-agent-3:/var/log#

from terraform-hcloud-kube-hetzner.

mysticaltech commented on May 25, 2024

Great debugging! What I think is happening is that you must have touched agent-3 manually in the UI in a previous iteration. For Terraform, that means it no longer knows what to do with it.

So please, downscale to 3 agents again, delete agent-3 if it somehow survives on Hetzner, and retry the upscale.

And one piece of advice: leave the UI alone; only observe, or delete when needed (to fix things), through the hcloud CLI. In the UI we are too used to modifying things, and that screws up everything for Terraform.

mysticaltech commented on May 25, 2024

Or, if you are sure you did not touch anything manually, then it's the IP; something is wrong with it. Such weird things have happened in the past, and I had to create another Hetzner project, as that reinitializes everything! It would also give you the opportunity to try our new system based on MicroOS. You need to git pull master; it changes everything!

exocode commented on May 25, 2024

Yes, I deleted it. I was testing how Kube-Hetzner performs when a node dies (which could happen in the real world).

I am on the git master branch (I hadn't realized that there is a k3os branch).

servers_num               = 3
agents_num                = 3

Then running terraform plan and terraform apply --auto-approve:


❯ tf apply --auto-approve
local_file.traefik_config: Refreshing state... [id=25ba84696ee16d68f5b98f6ea6b70bb14c3c530c]
hcloud_ssh_key.default: Refreshing state... [id=5492430]
hcloud_placement_group.k3s_placement_group: Refreshing state... [id=19653]
hcloud_network.k3s: Refreshing state... [id=1352333]
random_password.k3s_token: Refreshing state... [id=none]
hcloud_firewall.k3s: Refreshing state... [id=290151]
local_file.hetzner_ccm_config: Refreshing state... [id=f5ec6cb5689cb5830d04857365d567edae562174]
local_file.hetzner_csi_config: Refreshing state... [id=aa232912bcf86722e32b698e1e077522c7f02a9d]
hcloud_network_subnet.k3s: Refreshing state... [id=1352333-10.0.0.0/16]
hcloud_server.first_control_plane: Refreshing state... [id=17736249]
hcloud_server.agents[0]: Refreshing state... [id=17736379]
hcloud_server.agents[1]: Refreshing state... [id=17736385]
hcloud_server.agents[2]: Refreshing state... [id=17736383]
hcloud_server.control_planes[1]: Refreshing state... [id=17736378]
hcloud_server.control_planes[0]: Refreshing state... [id=17736377]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are
needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

agents_public_ip = [
  "142.132.184.xxx",
  "138.201.116.xxx",
  "138.201.246.xxx",
]
controlplanes_public_ip = [
  "49.12.221.xxx",
  "78.46.165.xxx",
  "78.47.101.xxx",
]

Bildschirmfoto 2022-02-11 um 10 34 29


❯ hcloud server list
ID         NAME                  STATUS    IPV4              IPV6                      DATACENTER
17736249   k3s-control-plane-0   running   49.12.221.xxx    2a01:4f8:1c17:xxx::/64   fsn1-dc14
17736377   k3s-control-plane-1   running   78.46.165.xxx     2a01:4f8:1c17:4xx4::/64   fsn1-dc14
17736378   k3s-control-plane-2   running   78.47.101.xxx     2a01:4f8:1c17:xxxx::/64   fsn1-dc14
17736379   k3s-agent-0           running   142.132.184.xxx    2a01:4f8:1c17:xxxx::/64   fsn1-dc14
17736383   k3s-agent-2           running   138.201.246.xxx   2a01:4f8:1c17:f7b6::/64   fsn1-dc14
17736385   k3s-agent-1           running   138.201.116.xxx   2a01:4f8:1c17:xxx::/64   fsn1-dc14

Starting the scale-up:

servers_num               = 3
agents_num                = 6

tf apply --auto-approve

❯ terraform apply -auto-approve
hcloud_network.k3s: Refreshing state... [id=1352333]
hcloud_ssh_key.default: Refreshing state... [id=5492430]
hcloud_placement_group.k3s_placement_group: Refreshing state... [id=19653]
local_file.traefik_config: Refreshing state... [id=25ba84696ee16d68f5b98f6ea6b70bb14c3c530c]
random_password.k3s_token: Refreshing state... [id=none]
hcloud_firewall.k3s: Refreshing state... [id=290151]
local_file.hetzner_ccm_config: Refreshing state... [id=f5ec6cb5689cb5830d04857365d567edae562174]
hcloud_network_subnet.k3s: Refreshing state... [id=1352333-10.0.0.0/16]
local_file.hetzner_csi_config: Refreshing state... [id=aa232912bcf86722e32b698e1e077522c7f02a9d]
hcloud_server.first_control_plane: Refreshing state... [id=17736249]
hcloud_server.agents[0]: Refreshing state... [id=17736379]
hcloud_server.control_planes[0]: Refreshing state... [id=17736377]
hcloud_server.agents[2]: Refreshing state... [id=17736383]
hcloud_server.agents[1]: Refreshing state... [id=17736385]
hcloud_server.control_planes[1]: Refreshing state... [id=17736378]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:

  + create

Terraform will perform the following actions:

  # hcloud_server.agents[3] will be created
  + resource "hcloud_server" "agents" {
      + backup_window      = (known after apply)
      + backups            = false
      + datacenter         = (known after apply)
      + delete_protection  = false
      + firewall_ids       = [
          + 290151,
        ]
      + id                 = (known after apply)
      + image              = "ubuntu-20.04"
      + ipv4_address       = (known after apply)
      + ipv6_address       = (known after apply)
      + ipv6_network       = (known after apply)
      + keep_disk          = false
      + labels             = {
          + "engine"      = "k3s"
          + "k3s_upgrade" = "true"
          + "provisioner" = "terraform"
        }
      + location           = "fsn1"
      + name               = "k3s-agent-3"
      + placement_group_id = 19653
      + rebuild_protection = false
      + rescue             = "linux64"
      + server_type        = "cpx21"
      + ssh_keys           = [
          + "5492430",
        ]
      + status             = (known after apply)

      + network {
          + alias_ips   = []
          + ip          = "10.0.0.8"
          + mac_address = (known after apply)
          + network_id  = 1352333
        }
    }

  # hcloud_server.agents[4] will be created
  + resource "hcloud_server" "agents" {
      + backup_window      = (known after apply)
      + backups            = false
      + datacenter         = (known after apply)
      + delete_protection  = false
      + firewall_ids       = [
          + 290151,
        ]
      + id                 = (known after apply)
      + image              = "ubuntu-20.04"
      + ipv4_address       = (known after apply)
      + ipv6_address       = (known after apply)
      + ipv6_network       = (known after apply)
      + keep_disk          = false
      + labels             = {
          + "engine"      = "k3s"
          + "k3s_upgrade" = "true"
          + "provisioner" = "terraform"
        }
      + location           = "fsn1"
      + name               = "k3s-agent-4"
      + placement_group_id = 19653
      + rebuild_protection = false
      + rescue             = "linux64"
      + server_type        = "cpx21"
      + ssh_keys           = [
          + "5492430",
        ]
      + status             = (known after apply)

      + network {
          + alias_ips   = []
          + ip          = "10.0.0.9"
          + mac_address = (known after apply)
          + network_id  = 1352333
        }
    }

  # hcloud_server.agents[5] will be created
  + resource "hcloud_server" "agents" {
      + backup_window      = (known after apply)
      + backups            = false
      + datacenter         = (known after apply)
      + delete_protection  = false
      + firewall_ids       = [
          + 290151,
        ]
      + id                 = (known after apply)
      + image              = "ubuntu-20.04"
      + ipv4_address       = (known after apply)
      + ipv6_address       = (known after apply)
      + ipv6_network       = (known after apply)
      + keep_disk          = false
      + labels             = {
          + "engine"      = "k3s"
          + "k3s_upgrade" = "true"
          + "provisioner" = "terraform"
        }
      + location           = "fsn1"
      + name               = "k3s-agent-5"
      + placement_group_id = 19653
      + rebuild_protection = false
      + rescue             = "linux64"
      + server_type        = "cpx21"
      + ssh_keys           = [
          + "5492430",
        ]
      + status             = (known after apply)

      + network {
          + alias_ips   = []
          + ip          = "10.0.0.10"
          + mac_address = (known after apply)
          + network_id  = 1352333
        }
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  ~ agents_public_ip = [
        # (2 unchanged elements hidden)
        "138.201.246.186",
      + (known after apply),
      + (known after apply),
      + (known after apply),
]
hcloud_server.agents[4]: Creating...
hcloud_server.agents[5]: Creating...
hcloud_server.agents[3]: Creating...
hcloud_server.agents[4]: Still creating... [10s elapsed]
hcloud_server.agents[3]: Still creating... [10s elapsed]
hcloud_server.agents[5]: Still creating... [10s elapsed]
hcloud_server.agents[4]: Provisioning with 'file'...
hcloud_server.agents[5]: Provisioning with 'file'...
hcloud_server.agents[5]: Still creating... [20s elapsed]
hcloud_server.agents[4]: Still creating... [20s elapsed]
hcloud_server.agents[5]: Still creating... [30s elapsed]
hcloud_server.agents[4]: Still creating... [30s elapsed]

Bildschirmfoto 2022-02-11 um 10 55 16

Bildschirmfoto 2022-02-11 um 10 52 21

Again, like in my latest post, Terraform is installing k3os on all agents except "agent-3".
Somewhere in the provisioning process, Terraform or k3os must know which agents exist. I may have to delete that "orphaned" entry. This is why I created an issue at k3os; maybe the problem lies in k3os?

(symbol photo)
Bildschirmfoto 2022-02-10 um 23 04 25

mysticaltech commented on May 25, 2024

@exocode Please, could you scale back down, make sure no agent-3 survives in your Hetzner project, and scale back up, BUT with the output redirected to a file? And post that file here, please?

I think this should work:

terraform apply -auto-approve > scale_up.log
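One caveat with that command: Terraform writes its error diagnostics to stderr, so a plain `>` redirect captures only stdout and any errors still land on the terminal instead of in the log. A minimal sketch of the difference, using a stand-in function instead of a real terraform run:

```shell
# Stand-in for a command that, like terraform, writes to both streams.
run() { echo "normal output"; echo "an error" >&2; }

run > only_stdout.log 2>/dev/null   # stderr is lost
run > both.log 2>&1                 # stderr follows stdout into the file

wc -l < only_stdout.log   # 1
wc -l < both.log          # 2
```

For the real case, `terraform apply -auto-approve > scale_up.log 2>&1` would capture both streams in the file.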

mysticaltech commented on May 25, 2024

It seems it somehow fails to enter rescue mode and install the system.

exocode commented on May 25, 2024

That is the exact output after running the command you provided:

❯ terraform apply -auto-approve > scale_up.log


╷
│ Error: hcloud/inlineAttachServerToNetwork: attach server to network: provided IP is not available (ip_not_available)
│
│   with hcloud_server.agents[1],
│   on agents.tf line 1, in resource "hcloud_server" "agents":
│    1: resource "hcloud_server" "agents" {
│
╵
╷
│ Error: hcloud/inlineAttachServerToNetwork: attach server to network: provided IP is not available (ip_not_available)
│
│   with hcloud_server.agents[3],
│   on agents.tf line 1, in resource "hcloud_server" "agents":
│    1: resource "hcloud_server" "agents" {
│
╵

and the log is attached here:
scale_up.log

mysticaltech commented on May 25, 2024

OK, this gets us the logs we want. With cat scale_up.log | grep 'hcloud_server.agents\[3\]' | less we see that agent-3 never gets created.

ksnip_20220211-115504
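The filtering step above can be reproduced on any apply log. Here is a self-contained sketch with a few sample lines standing in for the real scale_up.log (note that the brackets in the resource index must be escaped, or grep reads them as a character class):

```shell
# Create a tiny stand-in log (the real one comes from terraform apply).
printf '%s\n' \
  'hcloud_server.agents[2]: Creating...' \
  'hcloud_server.agents[3]: Creating...' \
  'hcloud_server.agents[3]: Still creating... [10s elapsed]' \
  'hcloud_server.agents[4]: Provisioning with '\''file'\''...' > scale_up.log

# Escape [ ] and . so grep matches them literally.
grep 'hcloud_server\.agents\[3\]' scale_up.log
```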

Basically, something is up with the IPs: they appear to be registered already in the network, the placement group, or the firewall.

This is screwed up! You'll probably have to redo your cluster, and as I said yesterday, it's better now with the new system, based not on k3os (which is defunct) but on MicroOS.

I will also put a big warning in the scale-up / scale-down section: NEVER TOUCH the nodes in Hetzner manually if you want this to stay possible. Good lessons learned for both of us here.

And about high CPU load: it takes the node down, like a shutdown; it does not remove it, so that should not be an issue.

Last but not least, it would be nice to have a separate Terraform script here in kube-hetzner that creates individual nodes, either agents or control planes, so that they can be manually joined into a cluster, even if the rest was touched manually. I will think about such an implementation; it should be fairly easy, but it will come after #60.

exocode commented on May 25, 2024

@mysticaltech I am on master now; I added an hcloud_token, increased the agent count to 6, and added this firewall rule in main.tf:

  # Custom Postgres
  rule {
    direction = "out"
    protocol  = "tcp"
    port      = "43044"
    destination_ips = [
      "0.0.0.0/0"
    ]
  }
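As a side note on that rule: `0.0.0.0/0` allows outbound traffic on that port to any IPv4 host. A hedged variation, assuming the Postgres server sits at a single known address (203.0.113.10 is a documentation placeholder, not a real host), would narrow the destination:

```terraform
  # Same custom Postgres rule, restricted to one destination host.
  rule {
    direction = "out"
    protocol  = "tcp"
    port      = "43044"
    destination_ips = [
      "203.0.113.10/32" # placeholder; replace with your database host
    ]
  }
```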

This is my first attempt, and it fails at the very first control plane (a second run fails with the same error):


terraform init
terraform plan
terraform apply --auto-approve

...
..

hcloud_server.first_control_plane (remote-exec): Reading state information... 0%
hcloud_server.first_control_plane (remote-exec): Reading state information... Done
hcloud_server.first_control_plane (remote-exec): The following additional packages will be installed:
hcloud_server.first_control_plane (remote-exec):   libaria2-0 libc-ares2
hcloud_server.first_control_plane (remote-exec): The following NEW packages will be installed:
hcloud_server.first_control_plane (remote-exec):   aria2 libaria2-0 libc-ares2
hcloud_server.first_control_plane (remote-exec): 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
hcloud_server.first_control_plane (remote-exec): Need to get 1,571 kB of archives.
hcloud_server.first_control_plane (remote-exec): After this operation, 6,225 kB of additional disk space will be used.
hcloud_server.first_control_plane (remote-exec): 0% [Working]
hcloud_server.first_control_plane (remote-exec): Get:1 http://mirror.hetzner.com/debian/packages bullseye/main amd64 libc-ares2 amd64 1.17.1-1+deb11u1 [102 kB]
hcloud_server.first_control_plane (remote-exec): 0% [1 libc-ares2 1,197 B/102 kB 1%]
hcloud_server.first_control_plane (remote-exec): 12% [Working]
hcloud_server.first_control_plane (remote-exec): Get:2 http://mirror.hetzner.com/debian/packages bullseye/main amd64 libaria2-0 amd64 1.35.0-3 [1,107 kB]
hcloud_server.first_control_plane (remote-exec): 12% [2 libaria2-0 12.3 kB/1,107 kB 1%]
hcloud_server.first_control_plane (remote-exec): 75% [Waiting for headers]
hcloud_server.first_control_plane (remote-exec): Get:3 http://mirror.hetzner.com/debian/packages bullseye/main amd64 aria2 amd64 1.35.0-3 [362 kB]
hcloud_server.first_control_plane (remote-exec): 80% [3 aria2 101 kB/362 kB 28%]
hcloud_server.first_control_plane (remote-exec): 100% [Working]
hcloud_server.first_control_plane (remote-exec): Fetched 1,571 kB in 0s (5,632 kB/s)
hcloud_server.first_control_plane (remote-exec): Selecting previously unselected package libc-ares2:amd64.
hcloud_server.first_control_plane (remote-exec): (Reading database ...
hcloud_server.first_control_plane (remote-exec): (Reading database ... 5%
hcloud_server.first_control_plane (remote-exec): (Reading database ... 10%
...
hcloud_server.first_control_plane (remote-exec): (Reading database ... 95%
hcloud_server.first_control_plane (remote-exec): (Reading database ... 100%
hcloud_server.first_control_plane (remote-exec): (Reading database ... 62163 files and directories currently installed.)
hcloud_server.first_control_plane (remote-exec): Preparing to unpack .../libc-ares2_1.17.1-1+deb11u1_amd64.deb ...
hcloud_server.first_control_plane (remote-exec): Unpacking libc-ares2:amd64 (1.17.1-1+deb11u1) ...
hcloud_server.first_control_plane (remote-exec): Selecting previously unselected package libaria2-0:amd64.
hcloud_server.first_control_plane (remote-exec): Preparing to unpack .../libaria2-0_1.35.0-3_amd64.deb ...
hcloud_server.first_control_plane (remote-exec): Unpacking libaria2-0:amd64 (1.35.0-3) ...
hcloud_server.first_control_plane (remote-exec): Selecting previously unselected package aria2.
hcloud_server.first_control_plane (remote-exec): Preparing to unpack .../aria2_1.35.0-3_amd64.deb ...
hcloud_server.first_control_plane (remote-exec): Unpacking aria2 (1.35.0-3) ...
hcloud_server.first_control_plane (remote-exec): Setting up libc-ares2:amd64 (1.17.1-1+deb11u1) ...
hcloud_server.first_control_plane (remote-exec): Setting up libaria2-0:amd64 (1.35.0-3) ...
hcloud_server.first_control_plane (remote-exec): Setting up aria2 (1.35.0-3) ...
hcloud_server.first_control_plane (remote-exec): Processing triggers for man-db (2.9.4-2) ...
hcloud_server.first_control_plane (remote-exec): Processing triggers for libc-bin (2.31-13+deb11u2) ...
hcloud_server.first_control_plane (remote-exec): + aria2c --follow-metalink=mem https://raw.githubusercontent.com/kube-hetzner/kube-hetzner/master/.files/openSUSE-MicroOS.x86_64-k3s-kvm-and-xen.qcow2.meta4

hcloud_server.first_control_plane (remote-exec): 02/11 12:50:22 [NOTICE] Downloading 1 item(s)
hcloud_server.first_control_plane (remote-exec): [#64af2c 0B/0B CN:1 DL:0B]

hcloud_server.first_control_plane (remote-exec): 02/11 12:50:23 [NOTICE] Download complete: [MEMORY]openSUSE-MicroOS.x86_64-k3s-kvm-and-xen.qcow2.meta4
hcloud_server.first_control_plane (remote-exec): [#47a61a 57MiB/600MiB(9%) CN:5 DL:87MiB
hcloud_server.first_control_plane (remote-exec): [#47a61a 292MiB/600MiB(48%) CN:5 DL:177
hcloud_server.first_control_plane: Still creating... [50s elapsed]
hcloud_server.first_control_plane (remote-exec): [#47a61a 560MiB/600MiB(93%) CN:3 DL:212

hcloud_server.first_control_plane (remote-exec): 02/11 12:50:27 [NOTICE] Download complete: /root/openSUSE-MicroOS.x86_64-16.0.0-k3s-kvm-and-xen-Snapshot20220207.qcow2

hcloud_server.first_control_plane (remote-exec): Download Results:
hcloud_server.first_control_plane (remote-exec): gid   |stat|avg speed  |path/URI
hcloud_server.first_control_plane (remote-exec): ======+====+===========+=======================================================
hcloud_server.first_control_plane (remote-exec): 64af2c|OK  |    13MiB/s|[MEMORY]openSUSE-MicroOS.x86_64-k3s-kvm-and-xen.qcow2.meta4
hcloud_server.first_control_plane (remote-exec): 47a61a|OK  |   180MiB/s|/root/openSUSE-MicroOS.x86_64-16.0.0-k3s-kvm-and-xen-Snapshot20220207.qcow2

hcloud_server.first_control_plane (remote-exec): Status Legend:
hcloud_server.first_control_plane (remote-exec): (OK):download completed.
hcloud_server.first_control_plane (remote-exec): + ls+ grep -ie ^opensuse.*microos.*k3s.*qcow2$
hcloud_server.first_control_plane (remote-exec):  -a
hcloud_server.first_control_plane (remote-exec): + qemu-img convert -p -f qcow2 -O host_device openSUSE-MicroOS.x86_64-16.0.0-k3s-kvm-and-xen-Snapshot20220207.qcow2 /dev/sda
hcloud_server.first_control_plane (remote-exec):     (0.00/100%)
hcloud_server.first_control_plane (remote-exec):     (1.00/100%)
hcloud_server.first_control_plane (remote-exec):     (2.01/100%)
....
hcloud_server.first_control_plane (remote-exec):     (91.40/100%)
hcloud_server.first_control_plane (remote-exec):     (92.41/100%)
hcloud_server.first_control_plane (remote-exec):     (93.41/100%)
hcloud_server.first_control_plane (remote-exec):     (94.41/100%)
hcloud_server.first_control_plane (remote-exec):     (95.42/100%)
hcloud_server.first_control_plane (remote-exec):     (96.42/100%)
hcloud_server.first_control_plane (remote-exec):     (97.42/100%)
hcloud_server.first_control_plane: Still creating... [1m0s elapsed]
hcloud_server.first_control_plane (remote-exec):     (98.43/100%)
hcloud_server.first_control_plane (remote-exec):     (99.43/100%)
hcloud_server.first_control_plane (remote-exec):     (100.00/100%)
hcloud_server.first_control_plane (remote-exec):     (100.00/100%)
hcloud_server.first_control_plane (remote-exec): + sgdisk -e /dev/sda
hcloud_server.first_control_plane (remote-exec): The operation has completed successfully.
hcloud_server.first_control_plane (remote-exec): + parted -s /dev/sda resizepart 4 99%
hcloud_server.first_control_plane (remote-exec): + parted -s /dev/sda mkpart primary ext2 99% 100%
hcloud_server.first_control_plane (remote-exec): + partprobe /dev/sda
hcloud_server.first_control_plane (remote-exec): + udevadm settle
hcloud_server.first_control_plane (remote-exec): + fdisk -l /dev/sda
hcloud_server.first_control_plane (remote-exec): Disk /dev/sda: 38.15 GiB, 40961572864 bytes, 80003072 sectors
hcloud_server.first_control_plane (remote-exec): Disk model: QEMU HARDDISK
hcloud_server.first_control_plane (remote-exec): Units: sectors of 1 * 512 = 512 bytes
hcloud_server.first_control_plane (remote-exec): Sector size (logical/physical): 512 bytes / 512 bytes
hcloud_server.first_control_plane (remote-exec): I/O size (minimum/optimal): 512 bytes / 512 bytes
hcloud_server.first_control_plane (remote-exec): Disklabel type: gpt
hcloud_server.first_control_plane (remote-exec): Disk identifier: E5179698-B417-4D07-BD17-C6AF2B022D8B

hcloud_server.first_control_plane (remote-exec): Device        Start      End  Sectors  Size Type
hcloud_server.first_control_plane (remote-exec): /dev/sda1      2048     6143     4096    2M BIOS
hcloud_server.first_control_plane (remote-exec): /dev/sda2      6144    47103    40960   20M EFI
hcloud_server.first_control_plane (remote-exec): /dev/sda3     47104 31438847 31391744   15G Linu
hcloud_server.first_control_plane (remote-exec): /dev/sda4  31438848 79203041 47764194 22.8G Linu
hcloud_server.first_control_plane (remote-exec): /dev/sda5  79204352 80001023   796672  389M Linu
hcloud_server.first_control_plane (remote-exec): + mount /dev/sda4 /mnt/
hcloud_server.first_control_plane (remote-exec): + btrfs filesystem resize max /mnt
hcloud_server.first_control_plane (remote-exec): Resize '/mnt' of 'max'
hcloud_server.first_control_plane (remote-exec): + umount /mnt
hcloud_server.first_control_plane (remote-exec): + mke2fs -L ignition /dev/sda5
hcloud_server.first_control_plane (remote-exec): mke2fs 1.46.2 (28-Feb-2021)
hcloud_server.first_control_plane (remote-exec): Discarding device blocks: done
hcloud_server.first_control_plane (remote-exec): Creating filesystem with 398336 1k blocks and 99960 inodes
hcloud_server.first_control_plane (remote-exec): Filesystem UUID: 5a252c6a-acfe-45b1-b8b1-0d5fafc0b749
hcloud_server.first_control_plane (remote-exec): Superblock backups stored on blocks:
hcloud_server.first_control_plane (remote-exec): 	8193, 24577, 40961, 57345, 73729, 204801, 221185

hcloud_server.first_control_plane (remote-exec): Allocating group tables: done
hcloud_server.first_control_plane (remote-exec): Writing inode tables: done
hcloud_server.first_control_plane (remote-exec): Writing superblocks and filesystem accounting information: done

hcloud_server.first_control_plane (remote-exec): + mount /dev/sda5 /mnt
hcloud_server.first_control_plane (remote-exec): + mkdir /mnt/ignition
hcloud_server.first_control_plane (remote-exec): + cp /root/config.ign /mnt/ignition/config.ign
hcloud_server.first_control_plane (remote-exec): + umount /mnt
hcloud_server.first_control_plane: Provisioning with 'local-exec'...
hcloud_server.first_control_plane (local-exec): Executing: ["/bin/sh" "-c" "ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa [email protected] '(sleep 2; reboot)&'; sleep 3"]
hcloud_server.first_control_plane (local-exec): Warning: Permanently added '23.88.37.116' (ECDSA) to the list of known hosts.
hcloud_server.first_control_plane (local-exec): Connection to 23.88.37.116 closed by remote host.
hcloud_server.first_control_plane: Still creating... [1m10s elapsed]
hcloud_server.first_control_plane: Provisioning with 'local-exec'...
hcloud_server.first_control_plane (local-exec): Executing: ["/bin/sh" "-c" "until ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa -o ConnectTimeout=2 [email protected] true 2> /dev/null\ndo\n  echo \"Waiting for MicroOS to reboot and become available...\"\n  sleep 2\ndone\n"]
hcloud_server.first_control_plane (local-exec): Waiting for MicroOS to reboot and become available...
hcloud_server.first_control_plane (local-exec): Waiting for MicroOS to reboot and become available...
hcloud_server.first_control_plane: Still creating... [1m20s elapsed]
hcloud_server.first_control_plane (local-exec): Waiting for MicroOS to reboot and become available...
hcloud_server.first_control_plane (local-exec): Waiting for MicroOS to reboot and become available...
hcloud_server.first_control_plane (local-exec): Waiting for MicroOS to reboot and beco
...
hcloud_server.first_control_plane: Still creating... [4m40s elapsed]
hcloud_server.first_control_plane: Still creating... [4m50s elapsed]
hcloud_server.first_control_plane: Still creating... [5m0s elapsed]
hcloud_server.first_control_plane: Still creating... [5m10s elapsed]
hcloud_server.first_control_plane: Still creating... [5m20s elapsed]
hcloud_server.first_control_plane: Still creating... [5m30s elapsed]
hcloud_server.first_control_plane: Still creating... [5m40s elapsed]
hcloud_server.first_control_plane: Still creating... [5m50s elapsed]
hcloud_server.first_control_plane: Still creating... [6m0s elapsed]
hcloud_server.first_control_plane: Still creating... [6m10s elapsed]
hcloud_server.first_control_plane: Still creating... [6m20s elapsed]
hcloud_server.first_control_plane: Still creating... [6m30s elapsed]
hcloud_server.first_control_plane: Still creating... [6m40s elapsed]
╷
│ Error: file provisioner error
│
│   with hcloud_server.first_control_plane,
│   on master.tf line 61, in resource "hcloud_server" "first_control_plane":
│   61:   provisioner "file" {
│
│ timeout - last error: dial tcp 23.88.37.116:22: connect: operation timed out
╵
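For reference, the "Waiting for MicroOS..." lines in the log above come from a simple poll-until-SSH-succeeds loop in the local-exec provisioner. A minimal sketch of that pattern, with a counter standing in for the real `ssh ... true` readiness probe so it terminates without a live server:

```shell
tries=0
# Real condition: until ssh -o ConnectTimeout=2 root@<ip> true 2>/dev/null
until [ "$tries" -ge 3 ]; do
  echo "Waiting for MicroOS to reboot and become available..."
  tries=$((tries + 1))
  sleep 0   # the real provisioner sleeps 2 seconds between attempts
done
echo "node is reachable"
```

The timeout error above means this loop's SSH probe never succeeded within the provisioner's time budget.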

mysticaltech commented on May 25, 2024

@exocode Yes, the code previously in the staging branch was merged into master yesterday.

Now, that error just means that something is wrong with your Hetzner project. Just create another one; it's always good to start with a clean slate.

And before that, just destroy and retry! See https://github.com/kube-hetzner/kube-hetzner#takedown

ksnip_20220211-135355

mysticaltech commented on May 25, 2024

Remember, you'll just need to create a new token in your new project for it to be used.

exocode commented on May 25, 2024

Sorry, I don't get it. Please tell me exactly what you are doing; I did the same three days ago without a hassle, and today it's not working anymore (working with git SHA 4497a7f...).

  • New project
  • new token,
  • added my public id_rsa.pub and private id_rsa ... in

terraform.tfvars

# You need to replace these
hcloud_token = "MYTOKENISHERE"
public_key   = "/Users/jan/.ssh/id_rsa.pub"
private_key = "/Users/jan/.ssh/id_rsa"

then running:

tf init
tf apply

 ī…š  īŧ ~/Coding/RubymineProjects/metashop/kube-hetzner  ī‡“ ī„Ļ custom_firewall_rules !1 ?1 ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
❯ tf apply --auto-approve
random_password.k3s_token: Refreshing state... [id=none]
hcloud_ssh_key.k3s: Refreshing state... [id=5546260]
local_file.traefik_config: Refreshing state... [id=25ba84696ee16d68f5b98f6ea6b70bb14c3c530c]
hcloud_placement_group.k3s: Refreshing state... [id=21327]
hcloud_network.k3s: Refreshing state... [id=1366303]
hcloud_firewall.k3s: Refreshing state... [id=299258]
hcloud_network_subnet.k3s: Refreshing state... [id=1366303-10.0.0.0/16]
local_file.kured_config: Refreshing state... [id=40fc9944ff7c7fd7faa108c8ffaa8c13042b0ebf]
local_file.hetzner_ccm_config: Refreshing state... [id=f5ec6cb5689cb5830d04857365d567edae562174]
local_file.hetzner_csi_config: Refreshing state... [id=aa232912bcf86722e32b698e1e077522c7f02a9d]
hcloud_server.first_control_plane: Refreshing state... [id=17907745]

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply":

  # hcloud_network.k3s has been changed
  ~ resource "hcloud_network" "k3s" {
        id                = "1366303"
      + labels            = {}
        name              = "k3s"
        # (2 unchanged attributes hidden)
    }
  # hcloud_ssh_key.k3s has been changed
  ~ resource "hcloud_ssh_key" "k3s" {
        id          = "5546260"
      + labels      = {}
        name        = "k3s"
        # (2 unchanged attributes hidden)
    }
  # hcloud_firewall.k3s has been changed
  ~ resource "hcloud_firewall" "k3s" {
        id     = "299258"
        name   = "k3s"
        # (1 unchanged attribute hidden)

      + apply_to {
          + server = 17907745
        }

        # (13 unchanged blocks hidden)
    }
  # hcloud_placement_group.k3s has been changed
  ~ resource "hcloud_placement_group" "k3s" {
        id      = "21327"
        name    = "k3s"
      ~ servers = [
          + 17907745,
        ]
        # (2 unchanged attributes hidden)
    }

Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these changes.

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # hcloud_server.agents[0] will be created
  + resource "hcloud_server" "agents" {
      + backup_window      = (known after apply)
      + backups            = false
      + datacenter         = (known after apply)
      + delete_protection  = false
      + firewall_ids       = [
          + 299258,
        ]
      + id                 = (known after apply)
      + image              = "ubuntu-20.04"
      + ipv4_address       = (known after apply)
      + ipv6_address       = (known after apply)
      + ipv6_network       = (known after apply)
      + keep_disk          = false
      + labels             = {
          + "engine"      = "k3s"
          + "provisioner" = "terraform"
        }
      + location           = "fsn1"
      + name               = "k3s-agent-0"
      + placement_group_id = 21327
      + rebuild_protection = false
      + rescue             = "linux64"
      + server_type        = "cpx21"
      + ssh_keys           = [
          + "5546260",
        ]
      + status             = (known after apply)

      + network {
          + alias_ips   = []
          + ip          = "10.0.1.1"
          + mac_address = (known after apply)
          + network_id  = 1366303
        }
    }

  # hcloud_server.agents[1] will be created
  + resource "hcloud_server" "agents" {
      + backup_window      = (known after apply)
      + backups            = false
      + datacenter         = (known after apply)
      + delete_protection  = false
      + firewall_ids       = [
          + 299258,
        ]
      + id                 = (known after apply)
      + image              = "ubuntu-20.04"
      + ipv4_address       = (known after apply)
      + ipv6_address       = (known after apply)
      + ipv6_network       = (known after apply)
      + keep_disk          = false
      + labels             = {
          + "engine"      = "k3s"
          + "provisioner" = "terraform"
        }
      + location           = "fsn1"
      + name               = "k3s-agent-1"
      + placement_group_id = 21327
      + rebuild_protection = false
      + rescue             = "linux64"
      + server_type        = "cpx21"
      + ssh_keys           = [
          + "5546260",
        ]
      + status             = (known after apply)

      + network {
          + alias_ips   = []
          + ip          = "10.0.1.2"
          + mac_address = (known after apply)
          + network_id  = 1366303
        }
    }

  # hcloud_server.control_planes[0] will be created
  + resource "hcloud_server" "control_planes" {
      + backup_window      = (known after apply)
      + backups            = false
      + datacenter         = (known after apply)
      + delete_protection  = false
      + firewall_ids       = [
          + 299258,
        ]
      + id                 = (known after apply)
      + image              = "ubuntu-20.04"
      + ipv4_address       = (known after apply)
      + ipv6_address       = (known after apply)
      + ipv6_network       = (known after apply)
      + keep_disk          = false
      + labels             = {
          + "engine"      = "k3s"
          + "provisioner" = "terraform"
        }
      + location           = "fsn1"
      + name               = "k3s-control-plane-1"
      + placement_group_id = 21327
      + rebuild_protection = false
      + rescue             = "linux64"
      + server_type        = "cpx11"
      + ssh_keys           = [
          + "5546260",
        ]
      + status             = (known after apply)

      + network {
          + alias_ips   = []
          + ip          = "10.0.0.3"
          + mac_address = (known after apply)
          + network_id  = 1366303
        }
    }

  # hcloud_server.control_planes[1] will be created
  + resource "hcloud_server" "control_planes" {
      + backup_window      = (known after apply)
      + backups            = false
      + datacenter         = (known after apply)
      + delete_protection  = false
      + firewall_ids       = [
          + 299258,
        ]
      + id                 = (known after apply)
      + image              = "ubuntu-20.04"
      + ipv4_address       = (known after apply)
      + ipv6_address       = (known after apply)
      + ipv6_network       = (known after apply)
      + keep_disk          = false
      + labels             = {
          + "engine"      = "k3s"
          + "provisioner" = "terraform"
        }
      + location           = "fsn1"
      + name               = "k3s-control-plane-2"
      + placement_group_id = 21327
      + rebuild_protection = false
      + rescue             = "linux64"
      + server_type        = "cpx11"
      + ssh_keys           = [
          + "5546260",
        ]
      + status             = (known after apply)

      + network {
          + alias_ips   = []
          + ip          = "10.0.0.4"
          + mac_address = (known after apply)
          + network_id  = 1366303
        }
    }

  # hcloud_server.first_control_plane is tainted, so must be replaced
-/+ resource "hcloud_server" "first_control_plane" {
      + backup_window      = (known after apply)
      ~ datacenter         = "fsn1-dc14" -> (known after apply)
      ~ id                 = "17907745" -> (known after apply)
      ~ ipv4_address       = "49.12.11.173" -> (known after apply)
      ~ ipv6_address       = "2a01:4f8:c17:413d::1" -> (known after apply)
      ~ ipv6_network       = "2a01:4f8:c17:413d::/64" -> (known after apply)
        name               = "k3s-control-plane-0"
      ~ status             = "running" -> (known after apply)
        # (12 unchanged attributes hidden)

      - network {
          - alias_ips   = [] -> null
          - ip          = "10.0.0.2" -> null
          - mac_address = "86:00:00:03:ca:f8" -> null
          - network_id  = 1366303 -> null
        }
      + network {
          + alias_ips   = []
          + ip          = "10.0.0.2"
          + mac_address = (known after apply)
          + network_id  = 1366303
        }
    }

Plan: 5 to add, 0 to change, 1 to destroy.

Changes to Outputs:
  + agents_public_ip        = [
      + (known after apply),
      + (known after apply),
    ]
  + controlplanes_public_ip = [
      + (known after apply),
      + (known after apply),
      + (known after apply),
    ]
hcloud_server.first_control_plane: Destroying... [id=17907745]
hcloud_server.first_control_plane: Destruction complete after 1s
hcloud_server.first_control_plane: Creating...
hcloud_server.first_control_plane: Still creating... [10s elapsed]
hcloud_server.first_control_plane: Provisioning with 'file'...
hcloud_server.first_control_plane: Still creating... [20s elapsed]
hcloud_server.first_control_plane: Still creating... [30s elapsed]
hcloud_server.first_control_plane: Provisioning with 'remote-exec'...
hcloud_server.first_control_plane (remote-exec): Connecting to remote host via SSH...
hcloud_server.first_control_plane (remote-exec):   Host: 23.88.37.116
hcloud_server.first_control_plane (remote-exec):   User: root
hcloud_server.first_control_plane (remote-exec):   Password: false
hcloud_server.first_control_plane (remote-exec):   Private key: true
hcloud_server.first_control_plane (remote-exec):   Certificate: false
hcloud_server.first_control_plane (remote-exec):   SSH Agent: true
hcloud_server.first_control_plane (remote-exec):   Checking Host Key: false
hcloud_server.first_control_plane (remote-exec):   Target Platform: unix
hcloud_server.first_control_plane (remote-exec): Connected!
hcloud_server.first_control_plane (remote-exec): + apt-get install -y aria2
hcloud_server.first_control_plane (remote-exec): Reading package lists... Done
hcloud_server.first_control_plane: Still creating... [40s elapsed]
hcloud_server.first_control_plane (remote-exec): Building dependency tree... Done
hcloud_server.first_control_plane (remote-exec): Reading state information... Done
hcloud_server.first_control_plane (remote-exec): The following additional packages will be installed:
hcloud_server.first_control_plane (remote-exec):   libaria2-0 libc-ares2
hcloud_server.first_control_plane (remote-exec): The following NEW packages will be installed:
hcloud_server.first_control_plane (remote-exec):   aria2 libaria2-0 libc-ares2
hcloud_server.first_control_plane (remote-exec): 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
hcloud_server.first_control_plane (remote-exec): Need to get 1,571 kB of archives.
hcloud_server.first_control_plane (remote-exec): After this operation, 6,225 kB of additional disk space will be used.
hcloud_server.first_control_plane (remote-exec): Get:1 http://mirror.hetzner.com/debian/packages bullseye/main amd64 libc-ares2 amd64 1.17.1-1+deb11u1 [102 kB]
hcloud_server.first_control_plane (remote-exec): Get:2 http://mirror.hetzner.com/debian/packages bullseye/main amd64 libaria2-0 amd64 1.35.0-3 [1,107 kB]
hcloud_server.first_control_plane (remote-exec): Get:3 http://mirror.hetzner.com/debian/packages bullseye/main amd64 aria2 amd64 1.35.0-3 [362 kB]
hcloud_server.first_control_plane (remote-exec): Fetched 1,571 kB in 0s (5,010 kB/s)
hcloud_server.first_control_plane (remote-exec): Selecting previously unselected package libc-ares2:amd64.
hcloud_server.first_control_plane (remote-exec): (Reading database ... 62163 files and directories currently installed.)
hcloud_server.first_control_plane (remote-exec): Preparing to unpack .../libc-ares2_1.17.1-1+deb11u1_amd64.deb ...
hcloud_server.first_control_plane (remote-exec): Unpacking libc-ares2:amd64 (1.17.1-1+deb11u1) ...
hcloud_server.first_control_plane (remote-exec): Selecting previously unselected package libaria2-0:amd64.
hcloud_server.first_control_plane (remote-exec): Preparing to unpack .../libaria2-0_1.35.0-3_amd64.deb ...
hcloud_server.first_control_plane (remote-exec): Unpacking libaria2-0:amd64 (1.35.0-3) ...
hcloud_server.first_control_plane (remote-exec): Selecting previously unselected package aria2.
hcloud_server.first_control_plane (remote-exec): Preparing to unpack .../aria2_1.35.0-3_amd64.deb ...
hcloud_server.first_control_plane (remote-exec): Unpacking aria2 (1.35.0-3) ...
hcloud_server.first_control_plane (remote-exec): Setting up libc-ares2:amd64 (1.17.1-1+deb11u1) ...
hcloud_server.first_control_plane (remote-exec): Setting up libaria2-0:amd64 (1.35.0-3) ...
hcloud_server.first_control_plane (remote-exec): Setting up aria2 (1.35.0-3) ...
hcloud_server.first_control_plane (remote-exec): Processing triggers for man-db (2.9.4-2) ...
hcloud_server.first_control_plane (remote-exec): Processing triggers for libc-bin (2.31-13+deb11u2) ...
hcloud_server.first_control_plane (remote-exec): + aria2c --follow-metalink=mem https://raw.githubusercontent.com/kube-hetzner/kube-hetzner/master/.files/openSUSE-MicroOS.x86_64-k3s-kvm-and-xen.qcow2.meta4

hcloud_server.first_control_plane (remote-exec): 02/11 16:03:23 [NOTICE] Downloading 1 item(s)
hcloud_server.first_control_plane (remote-exec): [#eb520d 0B/0B CN:1 DL:0B]

hcloud_server.first_control_plane (remote-exec): 02/11 16:03:24 [NOTICE] Download complete: [MEMORY]openSUSE-MicroOS.x86_64-k3s-kvm-and-xen.qcow2.meta4
hcloud_server.first_control_plane (remote-exec): [#a37143 37MiB/600MiB(6%) CN:5 DL:59MiB
hcloud_server.first_control_plane (remote-exec): [#a37143 227MiB/600MiB(37%) CN:5 DL:140
hcloud_server.first_control_plane (remote-exec): [#a37143 473MiB/600MiB(78%) CN:5 DL:181
hcloud_server.first_control_plane (remote-exec): [#a37143 583MiB/600MiB(97%) CN:2 DL:162
hcloud_server.first_control_plane: Still creating... [50s elapsed]
hcloud_server.first_control_plane (remote-exec): [#a37143 589MiB/600MiB(98%) CN:1 DL:127
hcloud_server.first_control_plane (remote-exec): 02/11 16:03:29 [NOTICE] Download complete: /root/openSUSE-MicroOS.x86_64-16.0.0-k3s-kvm-and-xen-Snapshot20220207.qcow2

hcloud_server.first_control_plane (remote-exec): Download Results:
hcloud_server.first_control_plane (remote-exec): gid   |stat|avg speed  |path/URI
hcloud_server.first_control_plane (remote-exec): ======+====+===========+=======================================================
hcloud_server.first_control_plane (remote-exec): eb520d|OK  |    13MiB/s|[MEMORY]openSUSE-MicroOS.x86_64-k3s-kvm-and-xen.qcow2.meta4
hcloud_server.first_control_plane (remote-exec): a37143|OK  |   124MiB/s|/root/openSUSE-MicroOS.x86_64-16.0.0-k3s-kvm-and-xen-Snapshot20220207.qcow2

hcloud_server.first_control_plane (remote-exec): Status Legend:
hcloud_server.first_control_plane (remote-exec): (OK):download completed.
hcloud_server.first_control_plane (remote-exec): + ls -a
hcloud_server.first_control_plane (remote-exec): + grep -ie ^opensuse.*microos.*k3s.*qcow2$

hcloud_server.first_control_plane (remote-exec): + qemu-img convert -p -f qcow2 -O host_device openSUSE-MicroOS.x86_64-16.0.0-k3s-kvm-and-xen-Snapshot20220207.qcow2 /dev/sda
hcloud_server.first_control_plane (remote-exec):     (0.00/100%)
hcloud_server.first_control_plane (remote-exec):     (100.00/100%)
hcloud_server.first_control_plane (remote-exec):     (100.00/100%)
hcloud_server.first_control_plane (remote-exec): + sgdisk -e /dev/sda
hcloud_server.first_control_plane (remote-exec): The operation has completed successfully.
hcloud_server.first_control_plane (remote-exec): + parted -s /dev/sda resizepart 4 99%
hcloud_server.first_control_plane (remote-exec): + parted -s /dev/sda mkpart primary ext2 99% 100%
hcloud_server.first_control_plane (remote-exec): + partprobe /dev/sda
hcloud_server.first_control_plane (remote-exec): + udevadm settle
hcloud_server.first_control_plane (remote-exec): + fdisk -l /dev/sda
hcloud_server.first_control_plane (remote-exec): Disk /dev/sda: 38.15 GiB, 40961572864 bytes, 80003072 sectors
hcloud_server.first_control_plane (remote-exec): Disk model: QEMU HARDDISK
hcloud_server.first_control_plane (remote-exec): Units: sectors of 1 * 512 = 512 bytes
hcloud_server.first_control_plane (remote-exec): Sector size (logical/physical): 512 bytes / 512 bytes
hcloud_server.first_control_plane (remote-exec): I/O size (minimum/optimal): 512 bytes / 512 bytes
hcloud_server.first_control_plane (remote-exec): Disklabel type: gpt
hcloud_server.first_control_plane (remote-exec): Disk identifier: E5179698-B417-4D07-BD17-C6AF2B022D8B

hcloud_server.first_control_plane (remote-exec): Device        Start      End  Sectors  Size Type
hcloud_server.first_control_plane (remote-exec): /dev/sda1      2048     6143     4096    2M BIOS
hcloud_server.first_control_plane (remote-exec): /dev/sda2      6144    47103    40960   20M EFI
hcloud_server.first_control_plane (remote-exec): /dev/sda3     47104 31438847 31391744   15G Linu
hcloud_server.first_control_plane (remote-exec): /dev/sda4  31438848 79203041 47764194 22.8G Linu
hcloud_server.first_control_plane (remote-exec): /dev/sda5  79204352 80001023   796672  389M Linu
hcloud_server.first_control_plane (remote-exec): + mount /dev/sda4 /mnt/
hcloud_server.first_control_plane (remote-exec): + btrfs filesystem resize max /mnt
hcloud_server.first_control_plane (remote-exec): Resize '/mnt' of 'max'
hcloud_server.first_control_plane (remote-exec): + umount /mnt
hcloud_server.first_control_plane (remote-exec): + mke2fs -L ignition /dev/sda5
hcloud_server.first_control_plane (remote-exec): mke2fs 1.46.2 (28-Feb-2021)
hcloud_server.first_control_plane (remote-exec): Discarding device blocks: done
hcloud_server.first_control_plane (remote-exec): Creating filesystem with 398336 1k blocks and 99960 inodes
hcloud_server.first_control_plane (remote-exec): Filesystem UUID: 33642c40-e2ed-437f-bd1d-22e2c8fc509e
hcloud_server.first_control_plane (remote-exec): Superblock backups stored on blocks:
hcloud_server.first_control_plane (remote-exec): 	8193, 24577, 40961, 57345, 73729, 204801, 221185

hcloud_server.first_control_plane (remote-exec): Allocating group tables: done
hcloud_server.first_control_plane (remote-exec): Writing inode tables: done
hcloud_server.first_control_plane (remote-exec): Writing superblocks and filesystem accounting information: done

hcloud_server.first_control_plane (remote-exec): + mount /dev/sda5 /mnt
hcloud_server.first_control_plane (remote-exec): + mkdir /mnt/ignition
hcloud_server.first_control_plane (remote-exec): + cp /root/config.ign /mnt/ignition/config.ign
hcloud_server.first_control_plane (remote-exec): + umount /mnt
hcloud_server.first_control_plane: Provisioning with 'local-exec'...
hcloud_server.first_control_plane (local-exec): Executing: ["/bin/sh" "-c" "ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /Users/jan/.ssh/id_rsa [email protected] '(sleep 2; reboot)&'; sleep 3"]
hcloud_server.first_control_plane (local-exec): Warning: Permanently added '23.88.37.116' (ECDSA) to the list of known hosts.
hcloud_server.first_control_plane (local-exec): Connection to 23.88.37.116 closed by remote host.
hcloud_server.first_control_plane: Still creating... [1m10s elapsed]
hcloud_server.first_control_plane: Provisioning with 'local-exec'...
hcloud_server.first_control_plane (local-exec): Executing: ["/bin/sh" "-c" "until ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /Users/jan/.ssh/id_rsa -o ConnectTimeout=2 [email protected] true 2> /dev/null\ndo\n  echo \"Waiting for MicroOS to reboot and become available...\"\n  sleep 2\ndone\n"]
hcloud_server.first_control_plane (local-exec): Waiting for MicroOS to reboot and become available...
hcloud_server.first_control_plane (local-exec): Waiting for MicroOS to reboot and become available...
hcloud_server.first_control_plane: Provisioning with 'file'...
hcloud_server.first_control_plane: Still creating... [2m0s elapsed]
hcloud_server.first_control_plane: Still creating... [6m50s elapsed]
╷
│ Error: file provisioner error
│
│   with hcloud_server.first_control_plane,
│   on master.tf line 61, in resource "hcloud_server" "first_control_plane":
│   61:   provisioner "file" {
│
│ timeout - last error: dial tcp 23.88.37.116:22: connect: operation timed out
╵
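A side note on the "tainted, so must be replaced" line in the plan above: instead of a full destroy, Terraform can untaint or force-replace a single resource. This is only a sketch; the addresses are copied from the plan, and whether untainting is appropriate depends on why the resource was tainted in the first place:

```shell
# Sketch only: alternatives to a full destroy when one resource is tainted.
# Addresses copied from the plan above; run in the project directory.
#
#   terraform untaint hcloud_server.first_control_plane   # keep the existing server
#   terraform apply -replace='hcloud_server.agents[3]'    # force a clean rebuild of one agent
#
resource='hcloud_server.agents[3]'
echo "candidate for targeted replace: $resource"
```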

from terraform-hcloud-kube-hetzner.

mysticaltech commented on May 25, 2024

First, before anything else, run terraform destroy.

The key is to delete your old Hetzner project, which deletes all resources attached to it. Then create a new project with the + sign, and in that project go to Security and create a new API key. That is your token.

With the hcloud CLI, run hcloud context delete name-of-old-one, then hcloud context create name-of-new-one.

Last but not least, change the hcloud token in your terraform.tfvars to the new one from the new Hetzner project.

Then, terraform apply.
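Roughly, the whole reset flow looks like this (a sketch only; the project names are placeholders):

```shell
# Sketch of the project-reset flow; "old-project"/"new-project" are placeholders.
#
#   terraform destroy
#   hcloud context delete old-project
#   hcloud context create new-project
#   # paste the new API token into terraform.tfvars, then:
#   terraform apply
#
steps="destroy -> new hcloud context -> new token in terraform.tfvars -> apply"
echo "$steps"
```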


mysticaltech commented on May 25, 2024

Also @exocode, the terraform.tfvars format has changed as well; make sure yours aligns with the new format.


exocode commented on May 25, 2024

So the key is to delete your old Hetzner project, which will delete all resources attached to it. Then create a new project, with the + sign, and in that project, go to security and create a new API key. That is your token.

Exactly what I did... except the hcloud context.

There is a new error:

hcloud_server.first_control_plane (remote-exec): Setting up libc-ares2:amd64 (1.17.1-1+deb11u1) ...
hcloud_server.first_control_plane (remote-exec): Setting up libaria2-0:amd64 (1.35.0-3) ...
hcloud_server.first_control_plane (remote-exec): Setting up aria2 (1.35.0-3) ...
hcloud_server.first_control_plane (remote-exec): Processing triggers for man-db (2.9.4-2) ...
hcloud_server.first_control_plane (remote-exec): Processing triggers for libc-bin (2.31-13+deb11u2) ...
hcloud_server.first_control_plane: Still creating... [40s elapsed]
hcloud_server.first_control_plane (remote-exec): + aria2c --follow-metalink=mem https://raw.githubusercontent.com/kube-hetzner/kube-hetzner/master/.files/openSUSE-MicroOS.x86_64-k3s-kvm-and-xen.qcow2.meta4

hcloud_server.first_control_plane (remote-exec): 02/12 09:04:39 [NOTICE] Downloading 1 item(s)
hcloud_server.first_control_plane (remote-exec): [#b40a4a 0B/0B CN:1 DL:0B]

hcloud_server.first_control_plane (remote-exec): 02/12 09:04:40 [ERROR] CUID#7 - Download aborted. URI=https://raw.githubusercontent.com/kube-hetzner/kube-hetzner/master/.files/openSUSE-MicroOS.x86_64-k3s-kvm-and-xen.qcow2.meta4
hcloud_server.first_control_plane (remote-exec): Exception: [AbstractCommand.cc:351] errorCode=3 URI=https://raw.githubusercontent.com/kube-hetzner/kube-hetzner/master/.files/openSUSE-MicroOS.x86_64-k3s-kvm-and-xen.qcow2.meta4
hcloud_server.first_control_plane (remote-exec):   -> [HttpSkipResponseCommand.cc:218] errorCode=3 Resource not found

hcloud_server.first_control_plane (remote-exec): 02/12 09:04:40 [NOTICE] Download GID#b40a4a72309ab10d not complete:

hcloud_server.first_control_plane (remote-exec): Download Results:
hcloud_server.first_control_plane (remote-exec): gid   |stat|avg speed  |path/URI
hcloud_server.first_control_plane (remote-exec): ======+====+===========+=======================================================
hcloud_server.first_control_plane (remote-exec): b40a4a|ERR |       0B/s|https://raw.githubusercontent.com/kube-hetzner/kube-hetzner/master/.files/openSUSE-MicroOS.x86_64-k3s-kvm-and-xen.qcow2.meta4

hcloud_server.first_control_plane (remote-exec): Status Legend:
hcloud_server.first_control_plane (remote-exec): (ERR):error occurred.

hcloud_server.first_control_plane (remote-exec): aria2 will resume download if the transfer is restarted.
hcloud_server.first_control_plane (remote-exec): If there are any errors, then see the log file. See '-l' option in help/man page for details.
╷
│ Error: remote-exec provisioner error
│
│   with hcloud_server.first_control_plane,
│   on master.tf line 33, in resource "hcloud_server" "first_control_plane":
│   33:   provisioner "remote-exec" {
│
│ error executing "/tmp/terraform_1395390451.sh": Process exited with status 3
╵
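For context, aria2's `errorCode=3` means "resource not found", i.e. the `.meta4` link returned an HTTP 404. A minimal sketch of checking the URL by hand before re-running Terraform (the `check_meta4` helper name is made up; the URL is copied from the log above):

```shell
# Hypothetical manual availability check for the .meta4 file that aria2c
# failed to fetch. aria2 errorCode=3 maps to "resource not found" (HTTP 404).
check_meta4() {
  # -f: fail on HTTP errors, -s: silent, -S: show errors, -I: HEAD request only
  if curl -fsSI "$1" > /dev/null 2>&1; then
    echo "reachable"
  else
    echo "missing"
  fi
}

check_meta4 "https://raw.githubusercontent.com/kube-hetzner/kube-hetzner/master/.files/openSUSE-MicroOS.x86_64-k3s-kvm-and-xen.qcow2.meta4"
```

If this prints `missing`, the file was moved or renamed upstream and the module needs to point at the new location; no amount of re-applying will fix it on the client side.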

from terraform-hcloud-kube-hetzner.

phaer commented on May 25, 2024

@exocode Please check if your error is reproducible across 2-3 runs (with destroys in between them). If it is, use Hetzner's web console to watch the server's output while it boots, run your Terraform with TF_LOG=debug, and attach the output.
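A sketch of the debug run suggested above (the log file name is arbitrary, and the snippet is guarded so it is a no-op on machines without Terraform on the PATH):

```shell
# Hypothetical debug invocation: capture provider/provisioner tracing to a file,
# then narrow it down to the failing remote-exec provisioner.
if command -v terraform > /dev/null; then
  TF_LOG=debug terraform apply 2> tf-debug.log
  grep -n "remote-exec" tf-debug.log | tail -n 20
else
  echo "terraform not found on PATH"
fi
```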


phaer commented on May 25, 2024

Ah, it looks like we've started to discuss the same issue (timeout during first file provisioner on first_control_plane) here as well: #67 (comment)

So I am closing this one for now, because I don't think we can do anything about the original error in the title, and it's best to keep the discussion about your follow-up error in one issue.

Feel free to re-open if I am mistaken, @exocode!

