
kubernetes-entrypoint's Introduction

Stackanetes

Stackanetes is an initiative to make operating OpenStack as simple as running any application on Kubernetes. Stackanetes deploys standard OpenStack services into containers and uses Kubernetes’ robust application lifecycle management capabilities to deliver a single platform for companies to run OpenStack Infrastructure-as-a-Service (IaaS) and container workloads.

Overview

Demonstration Video

Stackanetes: Technical Preview

Services

Stackanetes sets up the following OpenStack components:

  • Cinder
  • Glance
  • Horizon
  • Keystone
  • Neutron
  • Nova
  • Searchlight

In addition to these, a few other applications are deployed:

  • MariaDB
  • Memcached
  • RabbitMQ
  • RADOS Gateway
  • Traefik
  • Elasticsearch
  • Open vSwitch

Services are divided and scheduled into two groups, with the exception of the Open vSwitch agents, which run everywhere:

  • The control plane, which runs all the OpenStack APIs and all other supporting applications,
  • The compute plane, which is dedicated to running Nova's virtual machines.

Gotta go fast

Leaving aside the configuration of the prerequisites, Stackanetes can fully deploy OpenStack from scratch in roughly 5 to 8 minutes. Speed is not its only strength, however: its true power lies in its ability to help manage OpenStack's lifecycle.

Requirements

Stackanetes requires Kubernetes 1.3+ with:

  • At least two schedulable nodes,
  • At least one virtualization-ready node,
  • Overlay network & DNS add-on,
  • Kubelet running with --allow-privileged=true.
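
On a node, the kubelet flag can be quickly confirmed from the running process. This is a plain sanity check, not specific to Stackanetes:

pgrep -a kubelet | grep -- --allow-privileged=true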

While Glance may operate with local storage, a Ceph cluster is needed for Cinder. Nova's live-migration feature requires proper DNS resolution of the Kubernetes nodes' hostnames.

The rkt engine can be used in place of the default runtime with Kubernetes 1.4+ and rkt 1.20+. Note, however, that a known issue with mount propagation flags may prevent the Kubernetes service account secret from being mounted properly in Nova's libvirt pod, causing it to fail at startup.

High-availability & Networking

Thanks to Kubernetes deployments, OpenStack APIs can be made highly available using a single parameter, deployment.replicas.

Internal traffic (i.e. inside the Kubernetes cluster) is load-balanced natively using Kubernetes services. When Ingress is enabled, external traffic (i.e. from outside of the Kubernetes cluster) to OpenStack is routed from any of the Kubernetes nodes to a Traefik instance, which then selects the appropriate service and forwards the requests accordingly. By leveraging Kubernetes services and health checks, high availability of the OpenStack endpoints is achieved transparently: a simple round-robin DNS that resolves to a few Kubernetes nodes is sufficient.
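
Once deployed, the load-balanced services and their backing endpoints can be inspected with standard kubectl commands (purely illustrative; the openstack namespace matches the deployment step later in this guide):

kubectl --namespace openstack get services
kubectl --namespace openstack get endpoints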

When it comes to data availability for Cinder and Glance, Stackanetes relies on the storage backend being used.

High availability is not yet guaranteed for Elasticsearch (Searchlight).

Getting started

Preparing the environment

Kubernetes

To set up Kubernetes, the CoreOS guides may be used.

At least two nodes must be labelled for Stackanetes' usage:

kubectl label node minion1 openstack-control-plane=enabled
kubectl label node minion2 openstack-compute-node=enabled

Following Galera guidelines, an odd number of openstack-control-plane nodes is required. For development purposes, a one-node cluster is acceptable.
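
The labels can be verified with a selector query, which should return an odd number of control-plane nodes:

kubectl get nodes -l openstack-control-plane=enabled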

DNS

To enable Nova's live-migration, a DNS server that is accessible inside the cluster and able to resolve the hostname of each Kubernetes node is required. The IP address of this server will then have to be provided in the Stackanetes configuration.
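
Resolution can be spot-checked against any node name (minion1 being one of the hypothetical node names used earlier):

getent hosts minion1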

If external access is desired, the Ingress feature should be enabled in the Stackanetes configuration, and the external DNS environment should be configured to resolve the following names (modulo any custom host that may have been configured) to at least some of the Kubernetes nodes, for example with dnsmasq as sketched after the list:

identity.openstack.cluster
horizon.openstack.cluster
image.openstack.cluster
network.openstack.cluster
volume.openstack.cluster
compute.openstack.cluster
novnc.compute.openstack.cluster
search.openstack.cluster
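
For example, with dnsmasq, these names could be resolved in a round-robin fashion to two Kubernetes nodes as follows (the IP addresses are placeholders):

cat >> /etc/dnsmasq.d/stackanetes.conf <<'EOF'
address=/openstack.cluster/10.0.0.11
address=/openstack.cluster/10.0.0.12
EOF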

Ceph

If data high availability, Nova live migration, or Cinder is desired, Ceph must be used. Ceph can be deployed easily using bare containers or even on Kubernetes itself.

A few users and pools have to be created. The user and pool names can be customized. Note down the keyrings; they will be used in the configuration.

ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create vms 128
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
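
The resulting keys can be printed again at any time; these are the values to note down for the configuration:

ceph auth get-key client.cinder
ceph auth get-key client.glance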

kpm

kpm is the package manager and command-line tool used to deploy Stackanetes. The most straightforward way to install it is from PyPI:

apt-get update && apt-get install -y python-pip python-dev
pip install 'kpm>=0.24.2'
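
The installation can be verified before moving on:

pip show kpm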

Deploying

Cloning

Technically, cloning Stackanetes is only necessary to obtain the default configuration file, but it is good practice for understanding the project's architecture, and it is required if modifying the project is intended.

git clone https://github.com/stackanetes/stackanetes.git
cd stackanetes

Configuration

All the configuration is done in one place: the parameters.yaml file in the stackanetes meta-package. The file is self-documented.

While it is not strictly necessary, it is possible to persist changes to that file for reproducible deployments across environments, without the need to share it out of band. To do this, the stackanetes meta-package has to be renamed and pushed to the CNR registry. Pushing is also required when any modifications are made to the Stackanetes packages.

cd stackanetes
kpm login kpm.sh
kpm push -f kpm.sh/<USERNAME>
cd ..

Deployment

All we have to do is ask kpm to deploy Stackanetes. In the example below, we specify a namespace, a configuration file containing all non-default parameters (stackanetes/parameters.yaml if the changes have been made in place), and the registry from which the packages should be pulled.

kpm deploy kpm.sh/stackanetes/stackanetes --namespace openstack --variables stackanetes/parameters.yaml

For a finer-grained deployment story, kpm also supports versioning and release channels.

Access

Once Stackanetes is fully deployed, we can log in to Horizon or use the CLI directly.

If Ingress is enabled, Horizon may be accessed on http://horizon.openstack.cluster:30080/. Otherwise, it will be available on port 80 of any defined external IP. The default credentials are admin / password.

The file env_openstack.sh contains the default environment variables that will enable interaction using the various OpenStack clients.
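
Assuming the python-openstackclient package is installed, a typical session then looks like this (illustrative):

source env_openstack.sh
openstack catalog list
openstack server list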

Update

When the configuration is updated (e.g. a new Ceph monitor is added) or customized packages are pushed, Stackanetes can be updated with the exact same command that was used to deploy it. kpm will compute the differences between the actual deployment and the desired one and update the required resources: it will, for instance, trigger a rolling upgrade when a deployment is modified.

Note that manual rollouts still have to be done when only ConfigMaps are modified.
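
A common workaround is to delete the affected pods so that their controller recreates them with the new ConfigMap contents. The label selector below is hypothetical and must be adapted to the service in question:

kubectl --namespace openstack delete pods -l app=nova-api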

kubernetes-entrypoint's People

Contributors

antoni, dtadrzak, ian-howell, mzylowski, quentin-m, seaneagan

kubernetes-entrypoint's Issues

kubeconfig support.

I was exploring the option of using kubernetes-entrypoint on the client side (in CI/CD), instead of in-cluster, to wait for services to come up before running tests against them. It seems like something kubernetes-entrypoint would be a good fit for. Currently, though, it only seems to support in-pod credentials rather than a specified kubeconfig file. Could this be easily added?

dependency is not honored

Hello,

I have a dependency on the service cinder-api set for the stateful cinder-scheduler. I see that the cinder-api container is in the init state, but cinder-scheduler is already running, which should not happen. The cinder-scheduler dependency container reports that the dependency has already been resolved.

Here is the link with all the logs we captured for this run.

http://logs.openstack.org/97/424697/2/check/gate-kolla-kubernetes-deploy-centos-binary-2-helm-entrypoint-nv/54871ee/

Appreciate if somebody could take a look at them. If you have questions, please let me know.
Thank you
Serguei

files check

In some of my containers, I write out a file instead of a socket. Could we get a feature for blocking until a file exists, instead of it specifically being a socket?
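
In shell terms, the request amounts to the difference between a socket test and a regular-file test (paths are illustrative):

test -S /var/run/service.sock   # current behaviour: wait for a socket
test -f /var/run/service.ready  # requested behaviour: wait for a regular file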

Provide ARM64 binary

At Linaro we are working on getting Kubernetes running on ARM64 servers and use Kolla to build images for it. But the lack of a kubernetes-entrypoint binary for aarch64 makes this complicated.

We may help with getting it built.

daemonset labels

In some cases, I have different instances of the same daemonset with slightly different configurations based on hardware. For example, I might run two different daemonsets for flannel, each specifying a different --interface xxx to bind to, if the host has multiple NICs and floating names.

When launching a pod that depends on the daemonset, it may not really care which of them it runs against; instead, it should match a daemonset pod based on a label, so it can match either. Could a new type be added that lets you wait until a pod with a given label is running on the same host as you?
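
For illustration, the desired check roughly corresponds to the following kubectl query, where the app=flannel label is hypothetical and the local hostname stands in for the node name:

kubectl get pods -l app=flannel --field-selector "spec.nodeName=$(hostname)"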

init-container

I'd like to use kubernetes-entrypoint in a k8s init-container instead of embedding it directly into each regular container. This allows unmodified containers to be used, and also allows you to restrict k8s service account tokens to just the init container, enhancing security.

I think this would work well, but the current implementation in kubernetes-entrypoint.go appears to fail if COMMAND is empty:

if comm = env.SplitEnvToList("COMMAND", " "); len(comm) == 0 {
    logger.Error.Printf("COMMAND env is empty")
    os.Exit(1)
}

Can a feature be added to allow it to simply os.Exit(0) after deps are resolved when set so it can be used as an init-container?

Debugging flag

For some scenarios it would be great to be able to activate a debugging flag. When it is on, the entrypoint should generate additional logs detailing why a condition has not been met, along with evidence of that.
