
Kontena Classic

IMPORTANT! This project is deprecated. Please see Kontena Pharos - The Simple, Solid, Certified Kubernetes Distribution That Just Works!


Kontena Classic is a developer-friendly, open source platform for orchestrating applications that run in Docker containers. It simplifies deploying and running containerized applications on any infrastructure. By leveraging technologies such as Docker, Container Linux and Weave Net, it provides a complete solution for organizations of any size.

Kontena Classic is built to maximize developer happiness; it is designed for application developers and does not require an ops team to set up or maintain. This makes it an ideal choice for organizations that do not want to configure and maintain scalable Docker container infrastructure themselves.

Kontena Classic Introduction

To accelerate and break down barriers for containerized application development, Kontena Classic features some of the most essential technologies built in, such as:

  • Multi-host, multi AZ container orchestration
  • Overlay network technology by Weaveworks
  • Zero-downtime dynamic load balancing
  • Abstraction to describe services running in containers
  • Private Docker image repository
  • Kontena Vault - a secure storage for managing secrets
  • VPN access to backend containers
  • Heroku-like application deployment workflow

Kontena Classic supports any application that can run in a Docker container, and can run on any machine that supports CoreOS. You can run Kontena on the cloud provider of your choice or on your own servers. We hope you enjoy!

Learn more about Kontena:

Getting Started

Please see our Quick Start guide.

Contact Us

Found a bug? Suggest a feature? Have a question? Please submit an issue or email us at [email protected].

Follow us on Twitter: @KontenaInc.

Slack: Join the Kontena Community Slack channel.

License

Kontena Classic software is open source, and you can use it for any purpose, personal or commercial. Kontena is licensed under the Apache License, Version 2.0. See LICENSE for full license text.


kontena's Issues

refactor connect, login and register flows

In order to streamline the usage of Kontena, the following enhancements should be made:

  1. kontena register should be changed so that it is possible to register at any time. Without any arguments, the command should register the user to Kontena's default public auth provider service at https://auth.kontena.io. The user may provide the address of another auth provider service as an argument to this command.
  2. The Master Node should have a configurable parameter defining the auth provider address. Any login attempts to the Master Node should be authenticated via this auth provider. By default, this address should be Kontena's default public auth provider service at https://auth.kontena.io.
  3. kontena login should be changed so that it requires an argument describing the Master Node address. Show Kontena ASCII art during login :)
  4. Get rid of kontena connect as it is no longer needed.

Hide grid prefix from service names

As mentioned in #88, it is confusing to show the full id/path for services because the user has already switched to a grid. We should at least remove the grid prefix from the service list and autocomplete for now.
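
The prefix stripping described above can be sketched as follows. The function name and signature are illustrative, not Kontena's actual code:

```python
# Sketch: strip the current grid's prefix from a fully qualified service id
# for display purposes (e.g. "demo/vagrant-jenkins" -> "vagrant-jenkins").

def display_name(service_id, current_grid):
    prefix = current_grid + "/"
    if service_id.startswith(prefix):
        return service_id[len(prefix):]
    return service_id
```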

Use static etcd bootstrapping

Discovery seems to cause some race conditions when nodes are booted in parallel. We know the node overlay IPs beforehand, so static bootstrapping should be a better option, and it does not require calls out to the internet.
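
A minimal sketch of building static etcd bootstrap flags from overlay IPs known ahead of time. The `--initial-cluster` and `--initial-cluster-state` flags are standard etcd options; the node names, port and helper function are assumptions for illustration:

```python
# Sketch: render static etcd bootstrap flags from (name, overlay_ip) pairs
# that are known before the nodes boot, avoiding the public discovery service.

def initial_cluster_flags(nodes):
    """nodes: list of (name, overlay_ip) pairs known ahead of time."""
    peers = ",".join(f"{name}=http://{ip}:2380" for name, ip in nodes)
    return ["--initial-cluster", peers, "--initial-cluster-state", "new"]
```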

Easy nodes provisioning through Kontena CLI

At the moment, Kontena installation is a quite complex operation. The idea is to make it possible for users to provision Kontena Nodes on any cloud provider through the CLI with a few simple commands.

Service sidekicks

The service definition should include "sidekick" containers that share network/process space with the main container. The concept should be somewhat similar to an app spec/pod.

coreOS

Hi.
This projects looks promising.
Is it possible to run the server and the agents on coreOS?
If yes, what would be the setup process?

Thx

kontena deploy fails with "not found"

I made a kontena.yml that looks similar to the following:

# jenkins configuration
jenkins:
    image: jenkinsci/jenkins:latest
    stateful: true
    instances: 1
    user: jenkins
    ports:
        - "9181:8080"
        - "9150:50000"
    affinity:
        - container!=gitblit

# gitblit configuration
gitblit:
    image: elderresearch/docker-gitblit:latest
    stateful: true
    instances: 1
    user: gitblit
    ports:
        - "9180:80"
        - "9118:9418"
        - "9122:29418"
    affinity:
        - container!=jenkins

However, when I run kontena deploy with this kontena.yml file, it creates the services and then fails with error: {"error":"Not found"}. For some reason it also creates the services as demo/vagrant-jenkins and demo/vagrant-gitblit. If I deploy them manually with kontena service deploy vagrant-jenkins and kontena service deploy vagrant-gitblit, it works.

Why does it append the demo/ portion if we're already on the demo grid? (Surely that's redundant and confusing, because my first instinct was to do kontena service deploy demo/vagrant-jenkins, which doesn't work.) Secondly, it looks like kontena deploy is calling the services by the wrong names.

restart failed containers

When I start a kontena service with:

kontena service create ghost-blog ghost:0.5 --stateful -p 8181:2368 
kontena service deploy ghost-blog

it correctly starts a docker container on node1.

But when I stop the docker container on node1 using

docker stop ghost-blog-1

I expected kontena to restart the container and keep the service running.

Kontena seems to immediately recognize that the container is stopped, but does not restart it.

$ kontena service show ghost-blog
first-grid/ghost-blog:
  status: running
  stateful: yes
  scaling: 1
  image: ghost:0.5
  cmd: -
  env: 
  ports:
    - 8181:2368/tcp
  links: 
  containers:
    ghost-blog-1:
      rev: 2015-07-27 11:42:11 UTC
      node: node1
      dns: ghost-blog-1.kontena.local
      ip: 
      public ip: xxx.xxx.xxx.xxx
      status: stopped

Do you plan to support automatic restart of failed containers?
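
The expected behavior could be sketched as a reconciliation pass: restart any stopped container of a service whose desired status is "running". The data shapes and the restart callback are assumptions for illustration, not Kontena's agent API:

```python
# Sketch: one reconciliation pass over a service's containers. Any container
# that is "stopped" while the service should be "running" gets restarted via
# the supplied callback (a stand-in for the agent's Docker API call).

def reconcile(service, restart):
    restarted = []
    if service["status"] != "running":
        return restarted
    for name, container in service["containers"].items():
        if container["status"] == "stopped":
            restart(name)
            restarted.append(name)
    return restarted
```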

Kontena Node loss doesn't result in change of state

I terminated an instance that was running as a Kontena Node. After creating a new node and registering it with the cluster, nothing happened. Kontena did not try to restart the VPN, the registry, or the services that were running on that host. Is this desired behavior? Kontena is still stating that the registry is up and running just fine, even though the host it is on is gone:

$ kontena service show registry
mygrid/registry:
  status: running
  stateful: yes
  scaling: 1
  image: kontena/registry:2.1
  dns: registry.mygrid.kontena.local
  affinity: 
    - node==ip-10-0-0-99
  cmd: 
  env: 
    - REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/registry
    - REGISTRY_HTTP_ADDR=0.0.0.0:80
  ports:
  volumes:
    - /registry
  volumes_from:
  links: 
  cap_add:
  cap_drop:
  containers:

That host is no longer in the cluster:

$ kontena node list
Name                           OS                                       Driver          Labels                         Status    
ip-10-0-0-166                  Ubuntu 14.04.2 LTS (3.13.0-48-generic)   aufs            -                              online 

kontena-cli: installation problem

After running 'gem install kontena-cli' and then 'kontena -v', I got this error message:

/usr/lib/ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find kontena-cli (>= 0) amongst [] (Gem::LoadError)
	from /usr/lib/ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec'
	from /usr/lib/ruby/1.9.1/rubygems.rb:1210:in `gem'
	from /usr/local/bin/kontena:18:in `'

Some related information:

  1. At the beginning I had Ruby 1.8.7 on my Ubuntu machine. Then I installed Ruby 1.9.3 with this command:

sudo apt-get install ruby1.9.3

  2. Then I changed the default ruby link (to switch from Ruby 1.8.7 to 1.9.3) by running:

sudo update-alternatives --config ruby

  3. After that I ran 'gem install kontena-cli' and 'kontena -v' as mentioned above.

kontena connect without http://

At the moment kontena connect expects the host and port information with the protocol included. It should be possible to connect without the protocol, like:

kontena connect 192.168.66.100:8080

Add prefix wildcard support to kontena.yml files

When using the Kontena deploy feature, all services are prefixed by default with the name of the parent directory containing the kontena.yml file. The user also has the option to prefix the services with a custom prefix. This can affect the kontena.yml file contents: for example, when linking services, it is required to name the linked services including the prefix. Therefore, some kind of wildcard support is needed in the kontena.yml file to use the currently selected prefix instead of hard-coded values.
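
A minimal sketch of how such a wildcard could be expanded. The `${prefix}` placeholder syntax is a proposal based on this issue, not an existing Kontena feature:

```python
# Sketch: expand a hypothetical ${prefix} wildcard in kontena.yml values
# (e.g. in links) so they survive a change of deployment prefix.

def expand_prefix(value, prefix):
    return value.replace("${prefix}", prefix)
```

For example, a link written as `${prefix}-mysql` would resolve to `demo-mysql` when the selected prefix is `demo`.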

Affinity and port options do not work together

Whenever a service is updated with port information, the service spec loses its affinity setup. And when affinity is set, the service loses its port configuration.

Steps to reproduce:

kontena service create --affinity node==xyz nginx nginx:latest
kontena service deploy nginx
kontena service show nginx # Should show affinity in action
kontena service update -p 80:80 nginx
kontena service deploy nginx
kontena service show nginx # Affinity setup lost!!!

Seems to work ok if one specifies BOTH options at the same time. :)

kontena deploy -p switch not working with empty string

$ kontena deploy -p ""
DEPRECATION WARNING: Support for 'kontena deploy' will be dropped. Use 'kontena app deploy' instead.
creating -wordpress
creating -mysql

The name of services should be "wordpress" and "mysql" in this case. Also,

$ kontena service show "-wordpress"
ERROR: Unrecognised option '-w'

See: 'kontena service show --help'

It's impossible to do anything with these services.

Unable to install kontena-server on Ubuntu 14.04.3

Was following the bare metal install:

# apt-get install kontena-server
Reading package lists... Done 
Building dependency tree       
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 kontena-server : Depends: lxc-docker (= 1.7.1) but it is not installable or
                       docker-engine (= 1.7.1) but 1.7.1-0~trusty is to be installed
E: Unable to correct problems, you have held broken packages.

OS Version:

# cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.3 LTS"

I initially had the latest docker installed 1.8, but purged and reinstalled 1.7.1. Any ideas?

Avoid updating of service that has not changed with Kontena deploy

At the moment, when kontena deploy is used, all services are deployed even if there was no change in the service configuration. This is unnecessary and might cause downtime, for example for database services. Ideally, only the services that have changed would be updated by the kontena deploy command.
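
One way to sketch the change-detection: digest each service's desired config and skip services whose digest matches the one recorded at the previous deploy. The data shapes are assumptions for illustration:

```python
# Sketch: deploy only services whose configuration digest differs from the
# digest recorded after the previous deploy.
import hashlib
import json

def config_digest(config):
    # Serialize with sorted keys so key ordering doesn't change the digest.
    return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

def services_to_deploy(desired, last_digests):
    return [name for name, cfg in sorted(desired.items())
            if last_digests.get(name) != config_digest(cfg)]
```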

VPN Workflow Problem

Ubuntu seems to be a target platform for Kontena, which is great. I'm using Ubuntu as my dev workstation and Ubuntu servers as master and agents. When setting up the VPN I ran into some issues. I got the .ovpn file and went to import it using the gnome-network-manager, but it doesn't recognize the inline keys. This isn't really your problem, more of a limitation of the network manager, but if you could output a tar file with the certs separate, it might ease adoption a bit. Just a thought!

Bug Tracker for Ubuntu: https://bugs.launchpad.net/ubuntu/+source/network-manager-openvpn/+bug/606365

Thanks for all the work on Kontena, looking forward to using it once I get it setup.
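
As a workaround, the inline blocks can be split out of the .ovpn file so the certificates can be saved as separate files for importers that don't understand inline keys. A minimal sketch, assuming the standard `<ca>`/`<cert>`/`<key>` inline tags:

```python
# Sketch: extract inline <ca>/<cert>/<key> blocks from an .ovpn file so each
# can be written to its own file (e.g. for network-manager-openvpn).
import re

def split_inline_keys(ovpn_text):
    blocks = {}
    for tag in ("ca", "cert", "key"):
        match = re.search(rf"<{tag}>\n(.*?)</{tag}>", ovpn_text, re.DOTALL)
        if match:
            blocks[tag] = match.group(1)
    return blocks
```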

Hide internal system containers

At the moment, some internal system containers (like VPN) are shown by default. We need the ability to hide any such system containers, as they might clutter the list of services and containers.

Consistent node provision commands in CLI

The usage of node provision commands should be made consistent in the CLI. At the moment, the create command takes the node name via the --name switch, while the rest of the commands take it as an argument. Therefore, the create command usage should be changed to something like this:

kontena node aws create [OPTIONS] [NAME]

Installing kontena-agent on Ubuntu 14.04 breaks Docker 1.7

tl;dr: docker won't start due to malformed (?) DOCKER_OPTS after kontena-agent install

I have a VPS where lsb_release -a reports "Ubuntu 14.04.2 LTS" and docker-engine is at version 1.7.1-0~trusty. Adding the Kontena archive and installing kontena-agent modifies my /etc/default/docker such that DOCKER_OPTS is

DOCKER_OPTS="--bridge=weave --fixed-cidr='10.81.null.0/24' --insecure-registry='10.81.0.0/16'"

and the Docker service fails to start. The "null" bit is suspicious, no?

$ sudo docker -d --bridge=weave --fixed-cidr='10.81.null.0/24' --insecure-registry='10.81.0.0/16'
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
INFO[0000] [graphdriver] using prior storage driver "aufs"
WARN[0000] Running modprobe bridge nf_nat failed with message: , error: exit status 1
FATA[0000] Error starting daemon: Error initializing network controller: invalid CIDR address: 10.81.null.0/24

Okay, let's take a guess at a more reasonable value:

$ sudo docker -d --bridge=weave --fixed-cidr='10.81.0.0/24' --insecure-registry='10.81.0.0/16'
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
INFO[0000] [graphdriver] using prior storage driver "aufs"
WARN[0000] Running modprobe bridge nf_nat failed with message: , error: exit status 1
FATA[0000] Error starting daemon: Error initializing network controller: Error creating default "bridge" network: bridge device with non default name weave must be created manually

Shucks :( Maybe the modprobe warning is the problem?

$ lsmod|egrep '^(bridge|nf_nat)\b'
bridge                110833  0
nf_nat                 21841  5 nf_nat_ftp,ipt_MASQUERADE,nf_nat_ipv4,xt_nat,iptable_nat

wat :(
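
The `10.81.null.0/24` value suggests an unresolved node number was interpolated into the template. A defensive sketch of rendering these options, failing fast instead of emitting an invalid CIDR (the helper and overlay prefix are assumptions for illustration):

```python
# Sketch: render the Docker daemon options defensively so an unresolved node
# number raises instead of producing an invalid CIDR like 10.81.null.0/24.

def docker_opts(node_number, overlay="10.81"):
    if not isinstance(node_number, int):
        raise ValueError(f"node number not resolved: {node_number!r}")
    return ("--bridge=weave "
            f"--fixed-cidr='{overlay}.{node_number}.0/24' "
            f"--insecure-registry='{overlay}.0.0/16'")
```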

User not allowed to register Kontena account before user is invited to server

Currently a user has to be invited to a Kontena server before he/she can register a Kontena account. This should be changed so that a user can also register a Kontena account without an invitation, but an error is returned when the user is not allowed to use the current Kontena server and an invitation is still needed.

Supported workflow:

  1. Admin user invites user to Kontena server
  2. User connects to Kontena server
  3. If the user has no Kontena account, the user can register an account for himself/herself
  4. User can log in to Kontena server with Kontena account

What also needs to be supported:

  1. User connects to Kontena server
  2. User registers Kontena account
  3. Admin user invites user to Kontena server
  4. User can log in to Kontena server with Kontena account

Add grid discovery url and initial size

The grid should have an etcd discovery URL and an initial size. If the discovery URL is not given, the server will generate a public etcd discovery URL based on the initial size.
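
The fallback can be sketched against etcd's public discovery service, which issues discovery URLs via `https://discovery.etcd.io/new?size=N`. The function name is illustrative:

```python
# Sketch: use the grid's explicit discovery URL when given, otherwise derive
# one from the initial size via etcd's public discovery endpoint.

def grid_discovery_url(given_url, initial_size):
    if given_url:
        return given_url
    return f"https://discovery.etcd.io/new?size={initial_size}"
```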

Local auth backend

Would be nice to have a local auth backend with the master. In cases where both the master and the nodes are deployed in private datacenters, it might be an issue to have auth in a public cloud. Some local option would be really useful in these kinds of cases. And by local I mean something close to the master, if not actually built into the master.

Of course, people can do this by "reverse-engineering" the auth API calls made in https://github.com/kontena/kontena/blob/master/server/app/services/auth_service/client.rb, but at least some "official" documentation about the API should be available. Also, for better out-of-the-box usability, it would be really nice to have a local option shipped with Kontena. Maybe a Docker image that uses some embedded storage.

Add node join api to master

The Master (server) should have an API that each node can call when starting up its services (mainly etcd/weave). The API should require a valid grid token and should return discovery_url and node_number information.
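
A minimal sketch of such a join handler. The names and the response shape follow this issue's description, not Kontena's actual server code:

```python
# Sketch: validate the grid token and hand back the discovery URL plus a
# freshly allocated node number. next_node_number stands in for whatever
# counter the server keeps per grid.

def node_join(grid, token, next_node_number):
    if token != grid["token"]:
        raise PermissionError("invalid grid token")
    return {"discovery_url": grid["discovery_url"],
            "node_number": next_node_number()}
```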

Kontena Master doesn't report loss of contact with Kontena Agent/Nodes

When I was dealing with some odd VPN issues, #150, I eventually found that my issue was due to the Kontena Agent/Node not being able to contact the Kontena Master and vice versa. I had accidentally removed TCP port 8080 from our Security Group and the Kontena servers were no longer able to talk to each other.

$ kontena node list
Name                           OS                                       Driver          Labels                         Status    
ip-10-0-0-99                   Ubuntu 14.04.3 LTS (3.13.0-63-generic)   aufs            -                              online

A status of online is unexpected as the Node/Master haven't been in communication over port 8080. Grid list wasn't of much help either:

$ kontena grid list
Name                           Nodes    Services     Users     
mygrid *                       1        0            1   

The issue became more apparent when modifying things, for example:

~$ kontena vpn delete
{"code":503,"message":"Node unavailable","backtrace":null}

I would think this would be reproducible by blocking TCP port 8080 between node containers and running the previous commands.

Thanks guys,
-Seth

Ubuntu etcd package

We need some kind of service discovery for agents. etcd fits this nicely, and it's already available in CoreOS.

ETCD DNS entry missing

The grid-internal etcd seems to be missing its DNS entry:

core@core-node-1 ~ $ sudo docker run -v /var/run/docker.sock:/var/run/docker.sock weaveworks/weaveexec:latest status dns    
registry-1   10.81.26.138    6af8610f3dac 3e:be:06:06:3b:f1
registry-1.nbl-coreos 10.81.26.138    6af8610f3dac 3e:be:06:06:3b:f1
registry     10.81.26.138    6af8610f3dac 3e:be:06:06:3b:f1
registry.nbl-coreos 10.81.26.138    6af8610f3dac 3e:be:06:06:3b:f1
vpn-1        10.81.8.78      74b0099935dd 3e:be:06:06:3b:f1
vpn-1.nbl-coreos 10.81.8.78      74b0099935dd 3e:be:06:06:3b:f1
vpn          10.81.8.78      74b0099935dd 3e:be:06:06:3b:f1
vpn.nbl-coreos 10.81.8.78      74b0099935dd 3e:be:06:06:3b:f1

The above is from a pretty clean grid with just vpn and registry deployed.

Could not find any code that ties the etcd container into the grid-internal Weave DNS. etcd used to have a grid-internal DNS entry in past releases, if my memory serves me. :) Might be easiest to implement by labeling the etcd containers properly so that the weave_attacher picks them up and inserts the proper DNS entries.

Failed hosts are not detected

Assume I start two nodes:

$ kontena node list
Name                           OS                                       Driver          Labels                         Status    
node1                          CoreOS 717.3.0 (4.0.5)                   overlay         -                              online    
node2                          CoreOS 717.3.0 (4.0.5)                   overlay         -                              online    

If I completely stop the machine node2, I expect the status to change from online to offline. However, the status is not updated.

Furthermore, if there were any services running on node2, I expect them to be restarted on node1 which also does not happen.

Do you have any plans to fix/implement this behavior?
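
The missing detection could be sketched as heartbeat-based liveness: a node is reported offline when its last heartbeat is older than a timeout. The 60-second threshold is illustrative, not Kontena's actual value:

```python
# Sketch: classify a node by the age of its last heartbeat. last_seen and
# now are timestamps in seconds (e.g. from time.time()).

def node_status(last_seen, now, timeout=60.0):
    return "online" if (now - last_seen) <= timeout else "offline"
```

A rescheduler could then restart the services of every node this pass marks offline, which would also cover the expected failover to node1.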
