
etcd-custom-image's Introduction



Gardener implements the automated management and operation of Kubernetes clusters as a service and provides a fully validated extensibility framework that can be adjusted to any programmatic cloud or infrastructure provider.

Gardener is 100% Kubernetes-native and exposes its own Cluster API to create homogeneous clusters on all supported infrastructures. This API differs from SIG Cluster Lifecycle's Cluster API, which only harmonizes how to get to clusters; Gardener's Cluster API goes one step further and also harmonizes the make-up of the clusters themselves. That means Gardener gives you homogeneous clusters with exactly the same bill of materials, configuration, and behavior on all supported infrastructures, as you can see further down in the section on our K8s Conformance Test Coverage.

In 2020, SIG Cluster Lifecycle's Cluster API made a huge step forward with v1alpha3 and its newly added support for declarative control plane management. This made it possible to integrate managed services like GKE or Gardener. If the community is interested, we would be more than happy to contribute a Gardener control plane provider. For more information on the relation between the Gardener API and SIG Cluster Lifecycle's Cluster API, please see here.

Gardener's main principle is to leverage Kubernetes concepts for all of its tasks.

In essence, Gardener is an extension API server that comes along with a bundle of custom controllers. It introduces new API objects in an existing Kubernetes cluster (which is called garden cluster) in order to use them for the management of end-user Kubernetes clusters (which are called shoot clusters). These shoot clusters are described via declarative cluster specifications which are observed by the controllers. They will bring up the clusters, reconcile their state, perform automated updates and make sure they are always up and running.
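To make the idea of a declarative cluster specification concrete, the following is a hypothetical, heavily abbreviated sketch of a shoot resource. All field names and values here are illustrative only; the authoritative schema is the Gardener API reference.

```yaml
# Illustrative sketch only — consult the Gardener API reference for the real schema.
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: my-shoot
  namespace: garden-my-project
spec:
  cloudProfileName: aws        # which infrastructure profile to use
  region: eu-west-1
  provider:
    type: aws                  # the infrastructure extension provider
  kubernetes:
    version: "1.30.0"          # desired version; Gardener reconciles toward it
```

The controllers observe such a spec and continuously reconcile the actual cluster state toward it, which is what enables automated updates and self-healing.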

To accomplish these tasks reliably and to offer a high quality of service, Gardener controls the main components of a Kubernetes cluster (etcd, API server, controller manager, scheduler). These so-called control plane components are hosted in Kubernetes clusters themselves (which are called seed clusters). This is the main difference compared to many other OSS cluster provisioning tools: the shoot clusters do not have dedicated master VMs. Instead, the control plane is deployed as a native Kubernetes workload into the seeds (an architecture commonly referred to as kubeception or inception design). This not only effectively reduces the total cost of ownership but also allows easier implementation of "day-2 operations" (like cluster updates or robustness) by relying on all the mature Kubernetes features and capabilities.

Gardener reuses the identical Kubernetes design to span a scalable multi-cloud and multi-cluster landscape. Such familiarity with known concepts has proven to quickly ease the initial learning curve and accelerate developer productivity:

  • Kubernetes API Server = Gardener API Server
  • Kubernetes Controller Manager = Gardener Controller Manager
  • Kubernetes Scheduler = Gardener Scheduler
  • Kubelet = Gardenlet
  • Node = Seed cluster
  • Pod = Shoot cluster

Please find more information regarding the concepts and a detailed description of the architecture in our Gardener Wiki and our blog posts on kubernetes.io: Gardener - the Kubernetes Botanist (17.5.2018) and Gardener Project Update (2.12.2019).


K8s Conformance Test Coverage

Gardener takes part in the Certified Kubernetes Conformance Program to attest its compatibility with the K8s conformance test suite. Currently, Gardener is certified for K8s versions up to v1.30; see the conformance spreadsheet.

Continuous conformance test results of the latest stable Gardener release are uploaded regularly to the CNCF test grid:

Provider/K8s    v1.30   v1.29   v1.28   v1.27   v1.26   v1.25
AWS             ✓       ✓       ✓       ✓       ✓       ✓
Azure           ✓       ✓       ✓       ✓       ✓       ✓
GCP             ✓       ✓       ✓       ✓       ✓       ✓
OpenStack       ✓       ✓       ✓       ✓       ✓       ✓
Alicloud        ✓       ✓       ✓       ✓       ✓       ✓
Equinix Metal   N/A     N/A     N/A     N/A     N/A     N/A
vSphere         N/A     N/A     N/A     N/A     N/A     N/A

(✓ = conformance tests are run for this provider and Kubernetes version.)

Get an overview of the test results at testgrid.

Start using or developing the Gardener locally

See our documentation in the /docs repository; you can find the index here.

Setting up your own Gardener landscape in the Cloud

The quickest way to test drive Gardener is to install it virtually onto an existing Kubernetes cluster, just like you would install any other Kubernetes-ready application. You can do this with our Gardener Helm Chart.
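A minimal sketch of such an installation might look like the following. The chart path, release name, and values file are assumptions for illustration; the Helm chart's own documentation is authoritative for the required values and the exact chart location.

```shell
# Sketch only: chart path, namespace, and values file are assumptions;
# see the Gardener Helm chart documentation for the real installation steps.
git clone https://github.com/gardener/gardener.git
cd gardener
# Install into the target cluster currently selected in your kubeconfig.
helm install gardener charts/gardener/controlplane \
  --namespace garden --create-namespace \
  --values my-gardener-values.yaml
```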

Alternatively, you can use our garden setup project to create a fully configured Gardener landscape which also includes our Gardener Dashboard.

Feedback and Support

Feedback and contributions are always welcome!

All channels for getting in touch or learning about our project are listed in the community section. We cordially invite interested parties to join our bi-weekly meetings.

Please report bugs or suggestions about our Kubernetes clusters or Gardener itself as GitHub issues, or join our Slack channel #gardener (please invite yourself to the Kubernetes workspace here).

Learn More!

Please find further resources about our project here:

etcd-custom-image's People

Contributors

aaronfern, abdasgupta, ccwienk, gardener-robot-ci-1, gardener-robot-ci-2, gardener-robot-ci-3, ishan16696, jensh007, msohn, raphaelvogel, rfranzke, shreyas-s-rao, timuthy, vpnachev


etcd-custom-image's Issues

Upgrade to etcd v3.4.26 to fix vulnerabilities from go runtime

What would you like to be added:

Hi colleagues, the current etcd image (eu.gcr.io/gardener-project/gardener/etcd:v3.4.13-bootstrap-10) maintained by Gardener has around 140+ reported vulnerabilities (mainly from the Go runtime).

I checked that the official etcd v3.4.26 uses the latest Go runtime, which fixes these vulnerabilities. Is it possible to upgrade to this version?

By the way, I also saw this issue about gradually upgrading to v3.6.x to avoid vulnerabilities from the base image. Is there any timeline for that?

Why is this needed:

Fixing the Go runtime vulnerabilities is needed for security compliance.

Transition to Distroless image

What would you like to be added:
The etcd-custom-image should be based on a distroless image. See also etcd-io/etcd#13556.

Why is this needed:

  • To reduce the common perception of an extended attack surface of non-distroless images.
  • To reduce maintenance effort of updating images and tooling.

This issue depends on #16.

Implement etcd_bootstrap_script.sh in Golang

What would you like to be added:
The current bootstrap script etcd_bootstrap_script.sh should be replaced by a Golang based implementation.

Why is this needed:
The etcd_bootstrap_script.sh has several dependencies on OS utilities like bash, curl, etc. This makes it very hard to change to a distroless container base image in the future. The etcd project moved to distroless for 3.6.x (etcd-io/etcd#13556) to reduce the attack surface, and we should do the same. This will also eliminate some tedious maintenance tasks like updating base images and utilities because of reported vulnerabilities.

Tasks

Project Structure

  • Create project structure
  • Implement bootstrap script logic
  • Add signal handler
  • Add command and flags
  • Add readiness endpoint
  • Add TLS support
  • Add Makefile
  • Add Dockerfile
  • Add vendor directory
  • Add unit tests
    • app
    • bootstrap
    • brclient
    • util
    • types
  • Use etcd 3.5.6 for etcd-wrapper.
  • Use etcd 3.4.26 in etcd-wrapper (see #33). Move to the latest version of etcd in another increment (another PR). Backup-Restore continues to use the current client version.
  • Add hack/local-dev scripts to help ease local KIND + skaffold based tests.
  • CI/CD
    • Pipeline definition
    • Scripts
  • Documentation
    • Ops guide
    • Operator docs
    • Developer docs
  • Security requirements:
    • Change base image to distroless for both etcd-wrapper and etcd-backup-restore
    • Add an init container to change the ownership of /var/etcd/data from root to nonroot user (required for existing clusters)
    • Add pod security context to run all containers by default as nonroot.
  • Add a Dockerfile for an ephemeral container to be used for all diagnostics; this container will also run as nonroot.
  • Adapt etcd-druid and run druid integration and e2e
  • Run g/g e2e tests
    • Run an additional test where one of the pods uses etcd-wrapper and the other two continue to use etcd-custom-image. This is possible during a stuck or partially successful software update of the StatefulSet. Ensure that both can coexist.
    • Run an additional test where one tries to roll back from pods using etcd-wrapper to pods using etcd-custom-image.
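The ownership and security-context tasks listed above could look roughly like the following pod spec fragment. This is a hypothetical sketch, not the actual manifest; the UID and init container image are assumptions, and only the data path /var/etcd/data is taken from the task description.

```yaml
# Hypothetical sketch of the security-related pod spec changes described above.
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 65532             # assumed UID for the distroless "nonroot" user
  initContainers:
    - name: change-permissions
      image: alpine              # illustrative; any image providing chown works
      command: ["sh", "-c", "chown -R 65532:65532 /var/etcd/data"]
      securityContext:
        runAsUser: 0             # must run as root to change ownership
      volumeMounts:
        - name: etcd-data
          mountPath: /var/etcd/data
```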

Fetching etcd configuration from backup-restore is prone to transient errors

What happened:

As part of the bootstrap, a REST endpoint hosted by backup-restore container is invoked:

curl "$BACKUP_ENDPOINT/config" -o "$CONFIG_FILE"

The backup-restore container queries the current etcd StatefulSet to figure out the correct initial cluster state. It then modifies the etcd configuration with this state and returns it to its caller. Due to transient issues with the API server, it may be unable to get the StatefulSet. Currently, bootstrapping fails on any such transient error.

What you expected to happen:

The bootstrapping process should be tolerant (to a limited extent) of transient failures returned from the backup-restore container.
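One way to get that tolerance is to wrap the fetch in a bounded retry with exponential backoff. The sketch below is an illustration, not the actual implementation; BACKUP_ENDPOINT and CONFIG_FILE are the variables already used by the bootstrap script.

```shell
#!/usr/bin/env sh
# Sketch: retry a flaky command with bounded exponential backoff.
fetch_with_retry() {
  max_attempts=$1; shift       # remaining args: the command to run
  delay=1
  attempt=1
  while :; do
    if "$@"; then
      return 0                  # command succeeded
    fi
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1                  # budget exhausted; fail the bootstrap
    fi
    sleep "$delay"
    delay=$((delay * 2))        # back off: 1s, 2s, 4s, ...
    attempt=$((attempt + 1))
  done
}

# Example usage (endpoint and file come from the bootstrap environment):
# fetch_with_retry 5 curl -fsS "$BACKUP_ENDPOINT/config" -o "$CONFIG_FILE"
```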

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

Environment:
