
Rancher 2 Hetzner Cloud UI Driver

Rancher 2.X UI driver for the Hetzner Cloud. For the Rancher 1 version, see the README on the v1.6 branch.

Usage

  • Add a Machine Driver in Rancher 2 (Cluster Management -> Drivers -> Node Drivers):

    Key                Value
    Download URL       https://github.com/JonasProgrammer/docker-machine-driver-hetzner/releases/download/3.3.0/docker-machine-driver-hetzner_3.3.0_linux_amd64.tar.gz
    Custom UI URL      https://storage.googleapis.com/hcloud-rancher-v2-ui-driver/component.js
    Whitelist Domains  storage.googleapis.com

  • Wait for the driver to become "Active".
  • Go to Clusters -> Add Cluster; your driver and custom UI should show up.

(Screenshots: authentication screen and configuration screen.)

Compatibility

The following component.js is always compatible with the latest Rancher 2.X version:

https://storage.googleapis.com/hcloud-rancher-v2-ui-driver/component.js

Rancher 2.0

Use this component.js for Rancher 2.0:

https://storage.googleapis.com/hcloud-rancher-v2-ui-driver/component-v20.js

Tested Linux distributions

To use a non-default storage driver (e.g. on Debian), you have to set it manually in the Engine Options of the Node Template in Rancher.

Recommended

Image         Docker Version   Docker Storage Driver
Ubuntu 18.04  18.06            overlay2 (default)
Ubuntu 16.04  18.06            aufs (default)
Debian 9      18.06            overlay2, overlay
CentOS 7      18.06            devicemapper (default)
Fedora 27     not supported (due to docker-install)
Fedora 28     not supported (due to docker-install)
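
For reference, the Engine Options storage-driver setting corresponds to the following Docker daemon configuration (a sketch only; Rancher passes the option to the Docker engine for you, so you normally set it in the Node Template rather than editing this file by hand):

```json
{
  "storage-driver": "overlay2"
}
```

On Debian 9, try "overlay" if "overlay2" fails, matching the table above.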

Development

This package contains a small web-server that will serve up the custom driver UI at http://localhost:3000/component.js. You can run this while developing and point the Rancher settings there.

  • npm start
  • The driver name can be optionally overridden: npm start -- --name=DRIVERNAME
  • The compiled files are viewable at http://localhost:3000.
  • Note: The development server does not currently automatically restart when files are changed.

Building

For other users to see your driver, you need to build it and host the output on a server accessible from their browsers.

  • npm run build
  • Copy the contents of the dist directory onto a webserver.
    • If your Rancher is configured to use HA or SSL, the server must also be available via HTTPS.

Useful resources

Error creating machine: Error running provisioning: ssh command error:

Try overlay2 as the Storage Driver in the Engine Options at the bottom; if that does not work, try overlay.

How secure is the Private Network feature?

Traffic between Cloud Servers inside a Network is private and isolated, but not automatically encrypted. We recommend you use TLS or similar protocols to encrypt sensitive traffic.

Reference: Hetzner Cloud documentation

Traffic between Rancher and the agents, and Rancher-related traffic to the nodes, is fully encrypted over HTTPS/TLS.

Custom application-specific traffic is not encrypted. You can use e.g. the Weave CNI provider for that: https://rancher.com/docs/rancher/v2.x/en/faq/networking/cni-providers/#weave

Requirements for Private Networks

  • The Rancher host needs to be in the same Private Network as the one selected in the Node Template.
  • Under Rancher's global settings, the server-url needs to be the internal IP of the Private Network (you can find it in the Hetzner Cloud Console). Otherwise the traffic won't go through the internal network.

How to close the open ports on the public interface?

You could use this driver e.g. in combination with the following tool: https://github.com/vitobotta/hetzner-cloud-init

ui-driver-hetzner's People

Contributors

4ch3los, audifire, d0whc3r, dependabot-preview[bot], dependabot[bot], janus-reith, kingjan1999, mbernasocchi, mehmetcansahin, mwoelk, mxschmitt, notanormalnerd, proligde, ronaldgrn, rr4444, vincent99, westlywright

ui-driver-hetzner's Issues

Expose ports only to a private network

Currently, when you use the new networks feature, all ports are exposed on both interfaces. So when you run nmap against the public IP and the private IP, you get the same result:

PORT      STATE SERVICE
22/tcp    open  ssh
80/tcp    open  http
443/tcp   open  https
2376/tcp  open  docker
2379/tcp  open  unknown
2380/tcp  open  unknown
6443/tcp  open  unknown
8181/tcp  open  unknown
10250/tcp open  unknown
10251/tcp open  unknown
10252/tcp open  unknown
10254/tcp open  unknown
10257/tcp open  unknown
10259/tcp open  unknown
18080/tcp open  unknown

(I didn't scan for open udp ports)

The docker service simply binds to 0.0.0.0.
It would be nice to optionally bind all the K8s ports only to the internal network IP (the Hetzner default is 10.0.0.0/16), so that they are no longer exposed to the internet. Is this possible?
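
One possible workaround (not a feature of this driver; a sketch assuming the public interface is eth0 and the port list from the scan above) is to drop those ports on the public interface with iptables, leaving the private interface untouched. The snippet only prints the rules so you can review them before applying:

```shell
# Print (do not apply) iptables rules that block the Kubernetes/etcd
# ports from the scan above on the assumed public interface eth0.
# Pipe the output through `sh` only after reviewing it.
for p in 2376 2379 2380 6443 8181 10250 10251 10252 10254 10257 10259 18080; do
  echo "iptables -A INPUT -i eth0 -p tcp --dport $p -j DROP"
done
```

Be careful: applying such rules can lock Rancher itself out if it reaches the nodes over their public IPs, so only do this once private-network communication is working.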

RancherOS

Hi! I'm currently using Ubuntu as the OS for my nodes but I was thinking of using RancherOS instead. Can I use the node driver with this OS? If yes how? Should I create a server manually, install RancherOS and create a snapshot? Can I select a snapshot as the image in a node template? Thanks!

Ubuntu 18.04 provisioning not working due to broken apt sources.list

Some days ago everything was working fine but starting today I am no longer able to provision a new cluster.

The error I run into is this:
"Error creating machine: Error running provisioning: Error running "sudo apt-get update": ssh command error:"

Maybe the driver is colliding with the cloud-init that is run by Hetzner itself?

API Token not saved

Hi, I just found out about your module.

I am unable to save the API token in the "cloud credentials" page of Rancher; basically, each time I create a node template the system asks me for a token.

Is this the intended behavior?

Add support for cloud-init

If a new server is created at console.hetzner.cloud, it's possible to add a cloud-init script to configure the new server. Can you add this to the UI in Rancher?

Add Node Template not working

Node driver version: 1.2.1
Rancher version: 2.1.2

I tried to add new node templates, but the UI shows only a blank page. (Screenshot omitted.)

I tried it under "Add new Cluster" and under "Node Templates" -> "Add Template".
Because of this, I'm not able to extend our cluster with new nodes.

Any ideas?

Debian and CentOS are not working

Currently only Ubuntu provisioning is working. The others terminate with: ExecutionException: Error running provisioning: ssh command error:
I'm afraid I have no idea what could be the reason for this at the moment.
// Edit: Debian and Fedora were solved by using the overlay storage driver. CentOS was fixed by using the Docker install script from https://get.docker.com

Fix Styling

Fix the styling so that the driver looks similar to the DigitalOcean driver.

Stuck at Waiting for SSH to be available...

Hi,

I'm trying to deploy a new cluster using a pretty much standard config: only attaching a private network and checking "use private network for communication". If I understand those two features correctly, it means that nodes within that cluster will communicate via the internal network provided by Hetzner.

Config: (screenshot of the default configuration omitted)

That screenshot shows all the configs I've set, so nothing custom, and even after more than an hour the node is stuck at "Waiting for SSH to be available...".

I'm using Rancher 2.3.6 and UI Driver 2.1.0 (latest).

Thanks for the help in advance!

Node Template UI not working

I installed the driver in Rancher 2.0, and all I get is a blank UI (and the page freezes) when trying to create a new node template. I also get the following error:

opcode-compiler.js:36 Uncaught TypeError: (0 , this.funcs[r]) is not a function
    at e.compile (opcode-compiler.js:36)
    at n.i.expr (opcode-compiler.js:1157)
    at n.i.compileParams (opcode-compiler.js:1509)
    at n.i.compileArgs (opcode-compiler.js:1520)
    at n.i.modifier (opcode-compiler.js:1283)
    at opcode-compiler.js:70
    at e.compile (opcode-compiler.js:36)
    at e.compile (opcode-compiler.js:763)
    at Object.evaluate (runtime.js:493)
    at e.evaluate (runtime.js:30)
...

(Screenshot omitted.)

docs: explain private networks feature

Frequently asked questions

is it secure?

Traffic between Cloud Servers inside a Network is private and isolated, but not automatically encrypted. We recommend you use TLS or similar protocols to encrypt sensitive traffic.

Reference: Hetzner Cloud documentation

The Rancher traffic between the agents and the Rancher related traffic to the nodes is fully encrypted over HTTPS.

The custom application-specific traffic is not encrypted. But if you use the Hetzner Private Network, it's not reachable by other people (according to Hetzner).

Requirements for Private Networks

  • The Rancher host needs to be in the same Private Network as the one selected in the Node Template.
  • Under Rancher's global settings, the server-url needs to be the internal IP of the Private Network (you can find it in the Hetzner Cloud Console). Otherwise the traffic won't go through the internal network.

How to close the open ports on the public interface?

https://github.com/vitobotta/hetzner-cloud-init

TODO more verbose description

Adding external nodes/workers

Hi there,

I was wondering if it's somehow possible to add an external node/worker, e.g. a dedicated server. The cloud servers are quite expensive if you need a lot of CPU.

All the best

Driver installation remains stuck in Downloading state

The installation remains stuck in the Downloading state. I waited overnight but it is still the same. (Screenshot omitted.)

I had used this driver successfully in a previous installation of Rancher.

It worked in one version but does not work in the current version (version screenshots omitted).

The Rancher instance is itself running on Hetzner with full internet access, so I don't think it's a network issue.

kubelet fails to start when using external cloud provider

Hi, and thanks for the great tool. It seems to work pretty flawlessly, apart from the following:

I'm trying to bring up a cluster and it works, unless I set

--cloud-provider=external

I need that because of hcloud-cloud-controller-manager.

It seems to set up fine for a couple of minutes, but the status of the nodes remains 'waiting to register with Kubernetes', and after a while I get an error saying

[workerPlane] Failed to bring up Worker Plane: [Failed to verify healthcheck: Failed to check http://localhost:10248/healthz for service [kubelet] on host [<ip2>]: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused, log: caused by: Not Found]
The Rancher log contains the following. I substituted some IPs with <ip1> and <ip2>.

-- snip --
2019/12/14 20:19:03 [INFO] Restarting container [kubelet] on host [<ip1>], try #1
2019/12/14 20:19:03 [INFO] cluster [c-ml72h] provisioning: [sidekick] Sidekick container already created on host [<ip2>]
2019/12/14 20:19:03 [INFO] Restarting container [kubelet] on host [<ip2>], try #1
2019/12/14 20:19:03 [INFO] cluster [c-ml72h] provisioning: [healthcheck] Start Healthcheck on service [kubelet] on host [<ip2>]
2019/12/14 20:19:03 [INFO] cluster [c-ml72h] provisioning: [healthcheck] Start Healthcheck on service [kubelet] on host [78.47.185.86]
2019/12/14 20:19:03 [INFO] cluster [c-ml72h] provisioning: [healthcheck] Start Healthcheck on service [kubelet] on host [<ip1>]
2019/12/14 20:19:53 [ERROR] cluster [c-ml72h] provisioning: [workerPlane] Failed to bring up Worker Plane: [Failed to verify healthcheck: Failed to check http://localhost:10248/healthz for service [kubelet] on host [<ip2>]: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused, log: caused by: Not Found]
2019/12/14 20:19:53 [INFO] kontainerdriver rancherkubernetesengine stopped
2019/12/14 20:19:53 [ERROR] ClusterController c-ml72h [cluster-provisioner-controller] failed with : [workerPlane] Failed to bring up Worker Plane: [Failed to verify healthcheck: Failed to check http://localhost:10248/healthz for service [kubelet] on host [<ip2>]: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused, log: caused by: Not Found]

When I log into a node with an error, it seems kubelet is restarting, and its log contains the following error:

-- snip --
I1214 20:28:15.675659 3509 flags.go:33] FLAG: --volume-stats-agg-period="1m0s"
I1214 20:28:15.675722 3509 feature_gate.go:216] feature gates: &{map[]}
I1214 20:28:15.675814 3509 feature_gate.go:216] feature gates: &{map[]}
I1214 20:28:15.676257 3509 mount_linux.go:153] Detected OS without systemd
I1214 20:28:15.677822 3509 server.go:410] Version: v1.16.3
I1214 20:28:15.678046 3509 feature_gate.go:216] feature gates: &{map[]}
I1214 20:28:15.678251 3509 feature_gate.go:216] feature gates: &{map[]}
W1214 20:28:15.678577 3509 plugins.go:115] WARNING: aws built-in cloud provider is now deprecated. The AWS provider is deprecated and will be removed in a future release
I1214 20:28:15.678946 3509 aws.go:1212] Building AWS cloudprovider
I1214 20:28:15.679164 3509 aws.go:1178] Zone not specified in configuration file; querying AWS metadata service
F1214 20:28:15.680747 3509 server.go:271] failed to run Kubelet: could not init cloud provider "aws": unable to determine AWS zone from cloud provider config or EC2 instance metadata: EC2MetadataError: failed to make EC2Metadata request caused by: Not Found

Any ideas where this goes wrong? Does the UI driver depend on a certain configuration?

Thanks!

Rancher doesn't wait for cloud-init

When deploying a Kubernetes cluster to Hetzner Cloud and specifying a cloud init script, Rancher doesn't seem to wait for it to finish, which results in sporadic errors and finally a node recreation loop.

Right now I've worked around this by disabling SSH in my cloud-init, doing my changes as quickly as possible and re-enabling SSH. This only works if it's fast enough, and only works some of the time.
If I install any updates with apt for instance, or update package lists, it will fail 100% of the time.

Is there anything that can be done on the node driver side? Or is there a Rancher/RKE option for specifying timeouts that I could play with?

Just for reference, the issue is somewhat similar to this one: docker/machine#3358
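
The workaround described above can be sketched as a cloud-config that keeps port 22 firewalled until cloud-init finishes (illustrative only; as noted, this still races with docker-machine's SSH retries and is not a real fix):

```yaml
#cloud-config
# Block SSH while cloud-init runs, so the machine driver cannot
# start provisioning before the node is ready (sketch, not a real fix).
bootcmd:
  - iptables -I INPUT -p tcp --dport 22 -j DROP
package_update: true
runcmd:
  # ... your actual setup steps ...
  - iptables -D INPUT -p tcp --dport 22 -j DROP
```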

Not working on Rancher v2.1.4

When I try adding a new node template, using the latest documented version, I get the following error in my console:

Uncaught TypeError: Cannot read property 'manager' of null
    at Object.evaluate (0:29715)
    at e.evaluate (0:28614)
    at e.evaluateSyscall (0:31506)
    at e.evaluateInner (0:31478)
    at e.evaluateOuter (0:31470)
    at e.next (0:33421)
    at e.execute (0:33406)
    at i.handleException (0:32362)
    at e.handleException (0:32535)
    at e.throw (0:32269)

Any idea of a possible fix?

cloud-config not saved

If I create a new "Node Template" in which cloud-config has some values (for example the following) and click save, the field is empty when I reopen the "Node Template".

Example I want to save in a node template:

#cloud-config
 
package_update: true
package_upgrade: true

packages:
  - open-iscsi

"Mysterious" admin user created in Rancher upon cluster deployment (?)

Hi! I have been testing this for a couple of weeks now, and around a week ago I noticed for the first time that a new admin user had somehow been created in my Rancher installation. Luckily it's still a test installation, but I was shocked and decided to investigate. From searching I didn't find any known vulnerabilities concerning Rancher that might explain a hack, and considering that my Rancher server is fairly good from a security point of view (I follow the typical configuration best practices etc.), I wasn't sure I had actually been hacked.

I deleted the admin user, deleted my test cluster, and created a new one, again with Hetzner Cloud using this driver; after a little while I found an admin user in Rancher again. I deleted the cluster and that admin user, deployed another cluster, and sure enough, after a while there was another admin user. I then deployed a couple of clusters with DigitalOcean (using Rancher's built-in integration) after deleting the Hetzner one, and used those for testing for a couple of days. No new admin users. I then deployed a new cluster with Hetzner and once again, after a little while, I found an admin user. I repeated this process several times, and this happens only when I deploy clusters with Hetzner, never when I deploy DigitalOcean clusters. Coincidence?

Now, I don't want to accuse you of anything; I just want to understand whether something may have been compromised in this driver, perhaps without your knowledge. I see in the README that the actual binary driver (not the UI) is from another GitHub project by somebody else, and it's a ready-made binary, so I can't see the contents. Could it be that the compiled binary contains something that can compromise Rancher when installed? I have now deleted the Hetzner driver from Rancher, created the servers for a new Hetzner cluster with Ansible instead, and then provisioned the Kubernetes cluster still with Rancher but this time using the custom nodes mode. So far no admin user, but I will report back if it happens again.

Thanks in advance if you have any possible explanation for this weird thing.

Support for Rancher 2.0

Hi,

the current driver (1.0.1) isn't working correctly with Rancher 2.0. I cannot add a node template when creating a new cluster. Is there any support planned for Rancher 2.0?

Best regards,
Kersten

Not working in Rancher 2.1.0

Hi,

today I upgraded my Rancher version to 2.1.0.

Now I get the following error message when I want to add new templates:

0:29715 Uncaught TypeError: Cannot read property 'manager' of null
    at Object.evaluate (0:29715)
    at e.evaluate (0:28614)
    at e.evaluateSyscall (0:31506)
    at e.evaluateInner (0:31478)
    at e.evaluateOuter (0:31470)
    at e.next (0:33421)
    at e.execute (0:33406)
    at i.handleException (0:32362)
    at e.handleException (0:32535)
    at e.throw (0:32269)
-- snip: the remainder of the console output repeats this call chain through the render/scheduler frames --

Can't establish dialer connection: can not build dialer to [c-wd69j:m-tqr6b]

While installing Rancher on Hetzner, I tried to use this driver to provision the servers for a new cluster. My Rancher installation is brand new, and the version is the newest out there.

I created a node template for the cx21 type and created a simple cluster with one etcd/controlplane/worker node, but I got the same error with bigger clusters. After creating the cluster, Rancher starts to provision the servers normally, and in the Hetzner Cloud console I can see the new nodes being created, but then it gets stuck and shows

This cluster is currently Provisioning; areas that interact directly with it will not be available until the API is ready.

Cluster must have at least one etcd plane host: failed to connect to the following etcd host(s) []

in the Rancher UI. I looked into the logs of the Rancher container (single-node installation) and found these lines:

2019/11/13 10:07:59 [INFO] Provisioning cluster [c-wd69j]
2019/11/13 10:07:59 [INFO] Creating cluster [c-wd69j]
2019/11/13 10:08:04 [INFO] kontainerdriver rancherkubernetesengine listening on address 127.0.0.1:45485
2019/11/13 10:08:04 [ERROR] Cluster c-wd69j previously failed to create
2019/11/13 10:08:04 [INFO] cluster [c-wd69j] provisioning: Initiating Kubernetes cluster
2019/11/13 10:08:04 [INFO] cluster [c-wd69j] provisioning: [certificates] Generating admin certificates and kubeconfig
2019/11/13 10:08:04 [INFO] cluster [c-wd69j] provisioning: Successfully Deployed state file at [management-state/rke/rke-548000793/cluster.rkestate]
2019/11/13 10:08:04 [INFO] kontainerdriver rancherkubernetesengine stopped
2019/11/13 10:08:04 [INFO] cluster [c-wd69j] provisioning: Building Kubernetes cluster
2019/11/13 10:08:04 [INFO] cluster [c-wd69j] provisioning: [dialer] Setup tunnel for host [78.46.164.138]
2019/11/13 10:08:04 [ERROR] cluster [c-wd69j] provisioning: Failed to set up SSH tunneling for host [78.46.164.138]: Can't establish dialer connection: can not build dialer to [c-wd69j:m-tqr6b]
2019/11/13 10:08:04 [INFO] cluster [c-wd69j] provisioning: [dialer] Setup tunnel for host [78.46.214.230]
2019/11/13 10:08:04 [ERROR] cluster [c-wd69j] provisioning: Failed to set up SSH tunneling for host [78.46.214.230]: Can't establish dialer connection: can not build dialer to [c-wd69j:m-6qscr]
2019/11/13 10:08:04 [ERROR] cluster [c-wd69j] provisioning: Removing host [78.46.164.138] from node lists
2019/11/13 10:08:04 [ERROR] cluster [c-wd69j] provisioning: Removing host [78.46.214.230] from node lists
2019/11/13 10:08:04 [ERROR] cluster [c-wd69j] provisioning: Cluster must have at least one etcd plane host: failed to connect to the following etcd host(s) []
2019/11/13 10:08:04 [ERROR] ClusterController c-wd69j [cluster-provisioner-controller] failed with : Cluster must have at least one etcd plane host: failed to connect to the following etcd host(s) []
2019-11-13 10:08:13.179671 I | mvcc: store.index: compact 73394
2019-11-13 10:08:13.184619 I | mvcc: finished scheduled compaction at 73394 (took 3.08339ms)
2019-11-13 10:13:13.190368 I | mvcc: store.index: compact 74037
2019-11-13 10:13:13.199361 I | mvcc: finished scheduled compaction at 74037 (took 4.986543ms)

(I am new to writing issues and also new to Hetzner Cloud. If I haven't explained the issue well enough, please tell me and I will try my best to provide more information.)

Driver not working in Rancher 2.2.0-rc2

I tried to use the driver in Rancher 2.2.0-rc2 to test the implementation of the Weave network plugin.
Unfortunately, the driver isn't working anymore in this version.

After adding it to Rancher, it gets stuck in the "Downloading" state.

I know this is a pre-release of Rancher, but maybe there is an easy and quick fix for it?

Add worker stuck at: "Error creating machine: Error running provisioning: error installing docker"

Hello,

I've used Rancher with 2 workers for a few weeks now. Today I wanted to add another worker.
I can see the newly created VM in my Hetzner Cloud console, but the installation always gets stuck at "Error creating machine: Error running provisioning: error installing docker". Then Rancher deletes the server and creates a new one until the same issue happens again. I've waited about 1 hour but it's still the same.

I'm using Rancher v1.15.4-rancher-1-2 with the latest v2.0.1 node driver.

Is there a way to check where this issue is coming from?

Kind regards

EDIT:

I think it was an issue on Hetzner's side. I tried again after about 3 hours and it worked fine.

cannot provision a node

When I provision a node on Rancher, I get this error

Error setting machine configuration from flags provided: --hetzner-image and --hetzner-image-id are mutually exclusive; Timeout waiting for ssh key

What do you think is wrong here?

I set Download URL as https://github.com/JonasProgrammer/docker-machine-driver-hetzner/releases/download/2.0.1/docker-machine-driver-hetzner_2.0.1_linux_amd64.tar.gz and custom UI URL to https://storage.googleapis.com/hcloud-rancher-v2-ui-driver/component.js

rancher 2.3.3 expects driver file name kontainer-engine-driver-*

I tried to install this driver in a new Rancher 2.3.3 instance on a Debian 10 machine with Docker 19.03 and got the following error in the rancher/rancher container log:

2019/12/05 11:58:46 [INFO] update kontainerdriver kd-x4h2c
2019/12/05 11:58:46 [ERROR] Returning previous error: failed to find driver in archive. There must be a file of form kontainer-engine-driver-*
2019/12/05 11:58:46 [ERROR] KontainerDriverController kd-x4h2c [mgmt-kontainer-driver-lifecycle] failed with : failed to find driver in archive. There must be a file of form kontainer-engine-driver-*

The actual driver binary in the archive is named "docker-machine-driver-hetzner", and the cluster driver's state in the UI gets stuck on "downloading".

ssh into server

Hey there,

I was wondering if it's possible to SSH into one of the machines in order to install e.g. helm etc.
Or is there another solution to install such things?

Thanks for your work! :)

[BUG] Error creating machine: Error running provisioning: Error running "sudo apt-get update": ssh command error:

Creating a node with the Hetzner driver and a cloud-init script in the template results in this error:

Error creating machine: Error running provisioning: Error running "sudo apt-get update": ssh command error:

cloud-init script:

#cloud-config
locale: de_CH.UTF-8
timezone: Europe/Berlin
package_update: true    # Implied with `package_upgrade: true`
#package_upgrade: true
packages:
 - tree
 - screen
 - apt-transport-https
 - ca-certificates
 - curl
 - software-properties-common
 - nload
 - htop
 - jq
#package_reboot_if_required: true
users:
# Create new user `user1`
  - name: user1
    groups: sudo
    lock_passwd: false
    shell: /bin/bash
    passwd: $6$...
    ssh-authorized-keys:
      - ssh-rsa A...r
  - name: root
    ssh-authorized-keys:
      - ssh-rsa A...d
# Configure SSH
runcmd:
  - sed -i 's/#HostKey \/etc\/ssh\/ssh_host_ed25519_key/HostKey \/etc\/ssh\/ssh_host_ed25519_key/g' /etc/ssh/sshd_config
  - sed -i 's/[#]*PermitRootLogin yes/PermitRootLogin prohibit-password/g' /etc/ssh/sshd_config
  - sed -i 's/[#]*PasswordAuthentication yes/PasswordAuthentication no/g' /etc/ssh/sshd_config
  - /etc/init.d/ssh restart

I already tried commenting out some lines of my cloud-init script to find the problem, but it didn't help :( If I remove the whole script from the template, provisioning works normally.

If I use the cloud-init script in the Hetzner Cloud Console directly, it also works.

OS: Ubuntu 18.04
Size: CX21
Datacenter: nbg
Docker version: 18.09
Rancher: v2.2.5

[Security] Lack of firewall leaves etcd port open. Do I need to be worried?

I ran Aqua Security's Kube Hunter against a cluster deployed with this node driver. It reported no vulnerabilities, but because there is no firewall, it reports that the kubelet and etcd ports are open. Do I need to be worried about this? I think unauthenticated access is not permitted, but is the fact that the ports are open a risk anyway? What can happen, apart from someone DoSing the ports (which can happen with any open service)?

Before using this node driver I was deploying Kubernetes with Rancher as "custom nodes", so I prepared the nodes with Ansible first to set up firewall and disable root login. I absolutely love this node driver because it makes it possible to use Hetzner Cloud and save some money compared to other clouds, and makes scaling and management of node pools so easy with Rancher.

But I also want to be safe... What do you think about the kubelet and etcd ports being open from a security standpoint? Do you perform any additional tasks when deploying Kubernetes with this node driver?

Thanks!

hetznerConfig networks=NotNullable

Hi,

I have noticed that since the network feature was implemented, I cannot edit my templates without a network being selected. This problem does not exist when I create a new template.

Error message
Validation failed in API: hetznerConfig networks=NotNullable 422:

Server does not finish provisioning

I created one CX11 with Debian 9. It's in an infinite provisioning loop; after an hour it was still provisioning.

I did the same using Ubuntu 16.04, and it was active for a minute, then went offline. (Screenshot omitted.)

It looks like it got deleted; I can't find it anymore in my account.

This tool hates me

Can't close the "Edit node template" dialog

I'm not able to close the Edit node template dialog: when I click on the cancel button nothing happens, and I can see the following error in the console: Uncaught TypeError: this.cancel is not a function.

Rancher version: 2.3.1

EDIT:
Potentially related to rancher issue 21618

Thanks

Network feature does not use internal IPs for cluster-internal traffic

I have a Rancher 2.2.4 cluster with the latest UI driver and docker-machine-driver-hetzner_1.4.0_linux_amd64.tar.gz installed.

Actual behaviour

After provisioning nodes, they get the tag network=network-1, which is correct. But listing the nodes shows no external IP, and the internal IP is actually the external IP:
kubectl get nodes -o wide

NAME     STATUS   ROLES               AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
ctrl-1   Ready    controlplane,etcd   4d23h   v1.14.1   95.216.0.1       <none>        Ubuntu 18.04.2 LTS   4.15.0-54-generic   docker://18.9.7
w-1      Ready    worker              3m41s   v1.14.1   95.216.0.2       <none>        Ubuntu 18.04.2 LTS   4.15.0-54-generic   docker://18.9.7
wrk-1    Ready    worker              7m22s   v1.14.1   95.216.0.3       <none>        Ubuntu 18.04.2 LTS   4.15.0-54-generic   docker://18.9.7

Expected behaviour

The external IP would be the machine's external IP, and the internal IP would be from network-1.

Use existing SSH key from Hetzner

Hi,

currently it is not possible to use an existing SSH key in the Rancher UI through the docker-machine-driver-hetzner (--hetzner-existing-key-path, --hetzner-existing-key-id), right? (I'm talking about Rancher 2.0.) That way, every automatically created node would get the same SSH key...
Are there any workarounds, like some cloud-init stuff? Are there any plans to support this, or is it not possible?

Best regards,
Kersten

Migrate deployment to GitHub Actions and provide versioned builds

I would really appreciate if you could supply versioned assets.

In my opinion, a non-versioned asset cannot be considered production-ready. In a production-ready configuration I must know exactly which version of the UI I'm using, in order to be able to handle any problems (#67 for example) by upgrading or downgrading the UI, which is impossible right now.
We are building a cloud service based on clusters provisioned with Rancher and Hetzner Cloud; we cannot have unmanageable problems, even if they are "only" in the UI.

My proposal is to keep the master pipeline as is and create a pipeline for tags where the asset name contains the tag version; for example the asset URL for the v1.0.0 tag would be https://storage.googleapis.com/hcloud-rancher-v2-ui-driver/component_v1-0-0.js
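
The proposal could be sketched as a tag-triggered workflow roughly like the following (action versions and the gsutil upload step are assumptions for illustration, not taken from this repository's actual pipeline):

```yaml
# Sketch of a tag pipeline that uploads a versioned asset,
# e.g. component_v1-0-0.js for tag v1.0.0.
name: release
on:
  push:
    tags: ['v*']
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      - name: Upload versioned component.js
        run: |
          VERSION="${GITHUB_REF_NAME//./-}"   # v1.0.0 -> v1-0-0
          gsutil cp dist/component.js \
            "gs://hcloud-rancher-v2-ui-driver/component_${VERSION}.js"
```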

Automatic firewall setup and config during cluster creation

Hello,

I would like to request a new feature that I think would be great and really useful, and I guess it can be done automatically by this driver. When a new cluster is created, the driver creates the cloud instances and sets up everything necessary. Once the setup is complete, we can go and check the cloud instances, and we will notice that the nodes are actually not protected by a firewall; the firewall is inactive. I know that we can create rules through the user data config, but it would be best if the driver set the firewall rules (all required inbound/outbound ports) automatically.

As far as I remember, if we set up a cluster using Amazon EKS, all the firewall configuration on Amazon is done automatically. It would be great to have something similar with this driver too, using the nodes' firewalls (iptables, ufw, firewalld, or whatever is used).

Let me know what you think about this idea or how you do this on your clusters.

Thank you in advance!

Best regards,
Ali Nebi
