
vpn2's Introduction

Gardener



Gardener implements the automated management and operation of Kubernetes clusters as a service and provides a fully validated extensibility framework that can be adjusted to any programmatic cloud or infrastructure provider.

Gardener is 100% Kubernetes-native and exposes its own Cluster API to create homogeneous clusters on all supported infrastructures. This API differs from SIG Cluster Lifecycle's Cluster API, which only harmonizes how to get to clusters, while Gardener's Cluster API goes one step further and also harmonizes the make-up of the clusters themselves. That means Gardener gives you homogeneous clusters with exactly the same bill of materials, configuration, and behavior on all supported infrastructures, as you can see further below in the section on our K8s Conformance Test Coverage.

In 2020, SIG Cluster Lifecycle's Cluster API made a huge step forward with v1alpha3 and the newly added support for declarative control plane management. This made it possible to integrate managed services like GKE or Gardener. We would be more than happy to contribute a Gardener control plane provider if the community is interested. For more information on the relation between the Gardener API and SIG Cluster Lifecycle's Cluster API, please see here.

Gardener's main principle is to leverage Kubernetes concepts for all of its tasks.

In essence, Gardener is an extension API server that comes along with a bundle of custom controllers. It introduces new API objects in an existing Kubernetes cluster (which is called garden cluster) in order to use them for the management of end-user Kubernetes clusters (which are called shoot clusters). These shoot clusters are described via declarative cluster specifications which are observed by the controllers. They will bring up the clusters, reconcile their state, perform automated updates and make sure they are always up and running.

To accomplish these tasks reliably and to offer a high quality of service, Gardener controls the main components of a Kubernetes cluster (etcd, API server, controller manager, scheduler). These so-called control plane components are hosted in Kubernetes clusters themselves (which are called seed clusters). This is the main difference compared to many other OSS cluster provisioning tools: The shoot clusters do not have dedicated master VMs. Instead, the control plane is deployed as a native Kubernetes workload into the seeds (the architecture is commonly referred to as kubeception or inception design). This not only effectively reduces the total cost of ownership but also allows easier implementation of "day-2 operations" (like cluster updates or robustness) by relying on all the mature Kubernetes features and capabilities.
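
To make the kubeception idea concrete, here is a minimal sketch of what a shoot's control plane looks like from the seed's perspective; the namespace name and kubeconfig variable are assumptions for illustration only:

# Control plane components of a shoot run as ordinary pods in the seed cluster;
# "shoot--my-project--my-shoot" is a hypothetical namespace name.
kubectl --kubeconfig "$SEED_KUBECONFIG" get pods -n shoot--my-project--my-shoot
# The output would list kube-apiserver, kube-controller-manager, kube-scheduler,
# etcd and vpn-seed-server pods instead of dedicated master VMs.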

Gardener reuses the identical Kubernetes design to span a scalable multi-cloud and multi-cluster landscape. Such familiarity with known concepts has proven to quickly ease the initial learning curve and accelerate developer productivity:

  • Kubernetes API Server = Gardener API Server
  • Kubernetes Controller Manager = Gardener Controller Manager
  • Kubernetes Scheduler = Gardener Scheduler
  • Kubelet = Gardenlet
  • Node = Seed cluster
  • Pod = Shoot cluster

Please find more information regarding the concepts and a detailed description of the architecture in our Gardener Wiki and our blog posts on kubernetes.io: Gardener - the Kubernetes Botanist (17.5.2018) and Gardener Project Update (2.12.2019).


K8s Conformance Test Coverage

Gardener takes part in the Certified Kubernetes Conformance Program to attest its compatibility with the K8s conformance test suite. Currently, Gardener is certified for K8s versions up to v1.27; see the conformance spreadsheet.

Continuous conformance test results of the latest stable Gardener release are uploaded regularly to the CNCF test grid:

Provider/K8s | v1.28 | v1.27 | v1.26 | v1.25 | v1.24
AWS | N/A | Gardener v1.27 Conformance Tests | Gardener v1.26 Conformance Tests | Gardener v1.25 Conformance Tests | Gardener v1.24 Conformance Tests
Azure | N/A | Gardener v1.27 Conformance Tests | Gardener v1.26 Conformance Tests | Gardener v1.25 Conformance Tests | Gardener v1.24 Conformance Tests
GCP | N/A | Gardener v1.27 Conformance Tests | Gardener v1.26 Conformance Tests | Gardener v1.25 Conformance Tests | Gardener v1.24 Conformance Tests
OpenStack | N/A | Gardener v1.27 Conformance Tests | Gardener v1.26 Conformance Tests | Gardener v1.25 Conformance Tests | Gardener v1.24 Conformance Tests
Alicloud | N/A | Gardener v1.27 Conformance Tests | Gardener v1.26 Conformance Tests | Gardener v1.25 Conformance Tests | Gardener v1.24 Conformance Tests
Equinix Metal | N/A | N/A | N/A | N/A | N/A
vSphere | N/A | N/A | N/A | N/A | N/A

Get an overview of the test results at testgrid.

Start using or developing the Gardener locally

See our documentation in the /docs repository; you can find the index here.

Setting up your own Gardener landscape in the Cloud

The quickest way to test drive Gardener is to install it virtually onto an existing Kubernetes cluster, just like you would install any other Kubernetes-ready application. You can do this with our Gardener Helm Chart.
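
A rough sketch of such an installation, assuming the chart is taken from a local clone of the gardener/gardener repository (the chart path and values file below are illustrative, not authoritative; consult the official installation documentation for the real procedure):

# Illustrative only -- adapt chart path, release name and values to your setup.
git clone https://github.com/gardener/gardener.git
helm install gardener ./gardener/charts/gardener/controlplane \
  --namespace garden --create-namespace \
  --values my-garden-values.yaml   # hypothetical values file with your garden cluster settings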

Alternatively, you can use our garden setup project to create a fully configured Gardener landscape, which also includes our Gardener Dashboard.

Feedback and Support

Feedback and contributions are always welcome!

All channels for getting in touch or learning about our project are listed under the community section. We are cordially inviting interested parties to join our bi-weekly meetings.

Please report bugs or suggestions about our Kubernetes clusters as such or the Gardener itself as GitHub issues or join our Slack channel #gardener (please invite yourself to the Kubernetes workspace here).

Learn More!

Please find further resources about our project here:

vpn2's People

Contributors

axel7born, dependabot[bot], dimitar-kostadinov, docktofuture, gardener-robot-ci-1, gardener-robot-ci-2, gardener-robot-ci-3, jensh007, martinweindel, marwinski, scheererj, timuthy, unmarshall


vpn2's Issues

Connection Issues with kubectl exec|logs

What happened:

We sporadically see two issues that might or might not be related:

  1. kubectl exec|logs sessions are randomly reset. Sometimes connections stay open all day long and sometimes connections are dropped every couple of minutes.

  2. Sometimes the kubectl exec command fails with errors like the following although all the pods are running and the VPN is established:

Error from server: error dialing backend: proxy error from vpn-seed-server:9443 while dialing 10.250.0.14:10250, code 503: 503 Service Unavailable

In addition, we see another issue which makes the above issues more critical:

  1. We also noticed that openvpn connection establishment between the seed and shoot clusters takes roughly 5 seconds, compared to 1 second in a local setup with an identical configuration. We suspect that the vpn2 pod does not have sufficient CPU, especially during connection establishment (see the commands below for one way to check this).
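
A quick, hedged way to check the CPU suspicion from the seed (the namespace and label selector are placeholders; the actual names may differ):

# Show CPU requests/limits and live usage of the vpn-seed-server pods.
kubectl -n shoot--my-project--my-shoot describe pods -l app=vpn-seed-server | grep -i -A2 cpu
kubectl -n shoot--my-project--my-shoot top pods -l app=vpn-seed-server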

What you expected to happen:

None of the above. The VPN tunnel should be established within about one second.

How to reproduce it (as minimally and precisely as possible):

This happens on all clusters and is not a new issue (except maybe issue 2). It appears to happen much more often in larger environments, such as our canary environment.

Anything else we need to know:

We sometimes see that the openvpn tunnel is reset and restarted. From past experience, we suspect that this is caused by intermittent resets from the cloud provider load balancers. There might be little that can be done about this; however, our experiments indicate that this is not a significant problem:

  • Upon termination of the vpn tunnel the tunnel is re-established with an identical configuration.
  • Existing connections (kubectl logs, kubectl exec, kubectl port-forward) remain open and just appear to hang for 5 seconds (see above for the 5 second problem; this could be reduced to 1 second)
  • New kubectl exec attempts at that time fail while the vpn is down (possibly because of a 1 second connect timeout)
  • New kubectl logs attempts hang until the connection has been re-established
  • We did not test webhooks but assume they are also not affected as they have a connect timeout of 10 seconds by default. Once the TCP connection has been established the timeout will be much longer.

From our investigation we strongly suspect that the reset or termination of the vpn tunnel is not a real issue (even if it does happen every couple of minutes). Existing connections hang and new ones can be established if the connect timeout is not limited to one second or so (otherwise a retry will do the trick). This appears to apply to kubectl exec but not kubectl logs.

In this context, connections will only stay alive when openvpn restarts or recovers in the same pod. Due to NAT, connections will be terminated if, for example, the vpn-shoot pod restarts (as the stateful conntrack table is not kept).

It might be useful in this case to investigate what happens to existing connections when this occurs. Those should be actively terminated, because the TCP timeout is quite long and hanging connections would cause applications and/or infrastructure to hang.
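
One way to observe this, purely as a sketch (the deployment name, namespace, and the presence of the conntrack tool in the image are assumptions):

# List tracked TCP sessions inside the vpn-shoot pod before and after a restart;
# entries that only survive as stale state would be candidates for active termination.
kubectl -n kube-system exec deploy/vpn-shoot -- conntrack -L -p tcp | head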

Issue (2) appears to be related to the envoy configuration. It can be reproduced as follows:

  • Create a shoot cluster with one node
  • Log on to the node and reboot it, e.g. shutdown -r now
  • See the node restart; once all pods are running again, you will still see the following error message for kubectl exec for a couple of minutes:
Error from server: error dialing backend: proxy error from vpn-seed-server:9443 while dialing 10.250.0.14:10250, code 503: 503 Service Unavailable
  • Exec into the envoy sidecar container in the vpn-seed pod. Verify that you can indeed connect to the kubelet, e.g. with nc -vz 10.250.0.14 10250

As for issue (1), we have already seen this with the "old" openvpn solution as well as the early ssh tunnel. We used to believe the root cause was that the vpn tunnel was re-established. This investigation has now shown that this cannot be the main reason; the actual root cause remains unknown.

Environment:

Any shoot cluster, presumably on any infrastructure.

Add support for arm64 images

What would you like to be added:
Images built and published through this repository are only amd64-compatible. We should enable a multi-arch docker build to also offer arm64 images.

Why is this needed:
To run seeds/shoots on arm64-based hardware. In addition, more and more developers work with Apple Silicon-based machines, and an arm64-based image is required to use the local provider setup.
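
A possible way to produce such images with Docker's buildx (image name, tag, and Dockerfile location are placeholders, not the project's actual registry coordinates):

# Build and push a multi-arch (amd64 + arm64) image from the repository root.
docker buildx create --use
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag example.registry/gardener/vpn-seed-server:dev \
  --push .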

vpn-seed-server goes into CrashLoopBackOff on local kind cluster

What happened:
Details of the issue are mentioned here

What you expected to happen:
Expected vpn-seed-server to start successfully.
Error logs:

2022-04-21 03:30:36 WARNING: --topology net30 support for server configs with IPv4 pools will be removed in a future release. Please migrate to --topology subnet as soon as possible.
2022-04-21 03:30:36 DEPRECATED OPTION: --cipher set to 'AES-256-CBC' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'AES-256-CBC' to --data-ciphers or change --cipher 'AES-256-CBC' to --data-ciphers-fallback 'AES-256-CBC' to silence this warning.
2022-04-21 03:30:36 WARNING: file '/srv/secrets/vpn-server/tls.key' is group or others accessible
2022-04-21 03:30:36 OpenVPN 2.5.2 x86_64-alpine-linux-musl [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on May  4 2021
2022-04-21 03:30:36 library versions: OpenSSL 1.1.1k  25 Mar 2021, LZO 2.10
2022-04-21 03:30:36 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
2022-04-21 03:30:36 TUN/TAP device tun0 opened
2022-04-21 03:30:36 /sbin/ip link set dev tun0 up mtu 1500
2022-04-21 03:30:36 /sbin/ip link set dev tun0 up
2022-04-21 03:30:36 /sbin/ip addr add dev tun0 local 192.168.123.1 peer 192.168.123.2
iptables v1.8.6 (legacy): can't initialize iptables table `filter': iptables who? (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
iptables v1.8.6 (legacy): can't initialize iptables table `filter': iptables who? (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
2022-04-21 03:30:36 /firewall.sh on tun0 1500 1623 192.168.123.1 192.168.123.2 init
2022-04-21 03:30:36 WARNING: Failed running command (--up/--down): external program exited with error status: 3
2022-04-21 03:30:36 Exiting due to fatal error

How to reproduce it (as minimally and precisely as possible):
Try to set up a local kind garden cluster using the steps mentioned here.

Anything else we need to know:
This seems to be a problem when running on the new M1 macOS machines, which are arm64-based.

Environment:
GOOS = darwin
GOARCH = arm64
macOS version: Monterey 12.3.1
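
If an architecture mismatch is suspected, one hedged way to verify it is to compare the node architecture with the image architecture (the image reference below is a placeholder):

# Architecture reported by the kind node(s) vs. the pulled image.
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.architecture}'; echo
docker image inspect --format '{{.Os}}/{{.Architecture}}' <vpn-seed-server-image:tag>
# An amd64-only image on an arm64 node runs under emulation, where the legacy
# iptables binary can fail to initialize the filter table as shown in the logs above.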

Connection Issues when NODE_NETWORK is not set

What happened:

When the VPN server has no NODE_NETWORK configured, it will constantly reconnect.

Configuring a dummy value temporarily fixes the issue.

server (seed):

using openvpn_network=192.168.123.0/24
2023-04-17 10:39:28 WARNING: file '/srv/secrets/vpn-server/tls.key' is group or others accessible
2023-04-17 10:39:28 OpenVPN 2.5.6 x86_64-alpine-linux-musl [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on Apr 17 2022
2023-04-17 10:39:28 library versions: OpenSSL 1.1.1s  1 Nov 2022, LZO 2.10
2023-04-17 10:39:28 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
2023-04-17 10:39:28 TUN/TAP device tun0 opened
2023-04-17 10:39:28 /sbin/ip link set dev tun0 up mtu 1500
2023-04-17 10:39:28 /sbin/ip link set dev tun0 up
2023-04-17 10:39:28 /sbin/ip addr add dev tun0 192.168.123.1/24
2023-04-17 10:39:28 /firewall.sh on tun0 tun0 1500 1623 192.168.123.1 255.255.255.0 init
2023-04-17 10:39:28 Listening for incoming TCP connection on [AF_INET][undef]:1194
2023-04-17 10:39:28 TCPv4_SERVER link local (bound): [AF_INET][undef]:1194
2023-04-17 10:39:28 TCPv4_SERVER link remote: [AF_UNSPEC]
2023-04-17 10:39:28 Initialization Sequence Completed
2023-04-17 10:39:29 TCP connection established with [AF_INET]10.40.0.1:45364
2023-04-17 10:39:29 10.40.0.1:45364 Connection reset, restarting [0]
2023-04-17 10:39:37 TCP connection established with [AF_INET]10.40.0.1:47204

client (shoot)

[Mon Apr 17 09:43:46 UTC 2023]: using vpn-seed-server, dev tun0
[Mon Apr 17 09:43:46 UTC 2023]: openvpn --dev tun0 --remote api.fra.codesphere.internal.gardener.codesphere.com. --config openvpn.config
2023-04-17 09:43:46 OpenVPN 2.5.6 x86_64-alpine-linux-musl [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on Apr 17 2022
2023-04-17 09:43:46 library versions: OpenSSL 1.1.1s  1 Nov 2022, LZO 2.10
2023-04-17 09:43:46 TCP/UDP: Preserving recently used remote address: [AF_INET]34.77.130.229:8132
2023-04-17 09:43:46 Attempting to establish TCP connection with [AF_INET]34.77.130.229:8132 [nonblock]
2023-04-17 09:43:46 TCP connection established with [AF_INET]34.77.130.229:8132
2023-04-17 09:43:48 TCP_CLIENT link local: (not bound)
2023-04-17 09:43:48 TCP_CLIENT link remote: [AF_INET]34.77.130.229:8132
2023-04-17 09:43:48 [vpn-seed-server] Peer Connection Initiated with [AF_INET]34.77.130.229:8132
2023-04-17 09:43:48 TUN/TAP device tun0 opened
2023-04-17 09:43:48 /sbin/ip link set dev tun0 up mtu 1500
2023-04-17 09:43:48 /sbin/ip link set dev tun0 up
2023-04-17 09:43:48 /sbin/ip addr add dev tun0 192.168.123.10/24
2023-04-17 09:43:48 Initialization Sequence Completed

What you expected to happen:

Some providers in Gardener, like Equinix Metal, expect no node network to be configured in order to work correctly.
So the VPN should also work without a node network being set.
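
A minimal sketch of how the server-side setup script could tolerate a missing node network (the variable name follows the issue; the concrete fix is an assumption, not the project's current behavior):

# Only configure node-network routes/firewall rules when NODE_NETWORK is set.
if [ -z "${NODE_NETWORK:-}" ]; then
  echo "NODE_NETWORK not set; skipping node network configuration"
else
  # e.g. push routes for "${NODE_NETWORK}" and add matching iptables rules here
  :
fi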

How to reproduce it (as minimally and precisely as possible):

Create a shoot without a node network defined in the networks config.

Environment:

  • Gardener 1.62.x
  • Extension Equinix
  • VPN: 0.15.0 (also tested with 0.14.0 and 0.13.0)

Add Firewall rules to vpn-seed-server

What would you like to be added:

Please add the following firewall rules to the vpn-seed-server:

iptables -A INPUT -m state --state RELATED,ESTABLISHED -i tun0 -j ACCEPT
iptables -A INPUT -i tun0 -j DROP

Why is this needed:

Currently, the OpenVPN server port 1194 as well as the envoy proxy port are reachable from the vpn-shoot pod. While this is probably not an issue and I have not been able to exploit it, we should add these rules just to be sure.
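
If these rules are added by the firewall script on every start, an idempotent variant avoids duplicate entries; this is only a sketch of the requested change:

# Append each rule only if it is not present yet (-C checks for an existing rule).
iptables -C INPUT -m state --state RELATED,ESTABLISHED -i tun0 -j ACCEPT 2>/dev/null \
  || iptables -A INPUT -m state --state RELATED,ESTABLISHED -i tun0 -j ACCEPT
iptables -C INPUT -i tun0 -j DROP 2>/dev/null \
  || iptables -A INPUT -i tun0 -j DROP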
