kontena / kontena
The developer-friendly container and microservices platform. Works on any cloud, easy to set up, simple to use.
Home Page: https://www.kontena.io/
License: Apache License 2.0
In order to streamline the usage of Kontena, the following enhancements should be made:
kontena register should be changed so that it is possible to register at any time. The command without any arguments should register the user to Kontena's default public auth provider service at https://auth.kontena.io. The user may provide the address of some other auth provider service as an argument to this command.
kontena login should be changed so that it requires an argument describing the Master Node address. Show Kontena ASCII art during login :)
kontena connect should be removed, as there is no more need for it.
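To illustrate the proposed flow, usage might look something like this (auth.example.com and master.example.com are placeholder addresses):
$ kontena register                               # registers against https://auth.kontena.io
$ kontena register https://auth.example.com      # registers against a custom auth provider
$ kontena login https://master.example.com:8080  # login now takes the Master Node address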
$ kontena deploy -p ""
DEPRECATION WARNING: Support for 'kontena deploy' will be dropped. Use 'kontena app deploy' instead.
creating -wordpress
creating -mysql
The names of the services should be "wordpress" and "mysql" in this case. Also:
$ kontena service show "-wordpress"
ERROR: Unrecognised option '-w'
See: 'kontena service show --help'
It's impossible to do anything with these services.
Whenever a service is updated with port information, the service spec loses its affinity setup. And when affinity is set, the service loses its port configuration.
Steps to reproduce:
kontena service create --affinity node==xyz nginx nginx:latest
kontena service deploy nginx
kontena service show nginx # Should show affinity in action
kontena service update -p 80:80 nginx
kontena service deploy nginx
kontena service show nginx # Affinity setup lost!!!
Seems to work ok if one specifies BOTH options at the same time. :)
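Until this is fixed, the workaround implied above is to pass both options in a single update (node==xyz is just the placeholder from the repro steps):
kontena service update --affinity node==xyz -p 80:80 nginx
kontena service deploy nginx
kontena service show nginx # both affinity and ports should now be retained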
Hello,
Can Kontena be installed on CentOS?
Thx
Master (server) should have an API that each node can call when starting up its services (mainly etcd/weave). The API should require a valid grid token and it should return discovery_url and node_number information.
Agent packages (etcd / weave) should fetch the needed information from the master server. Etcd needs the discovery URL and weave needs the node number.
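A rough sketch of what that exchange could look like (the /v1/nodes/bootstrap path, the Kontena-Grid-Token header and the exact response shape are assumptions for illustration, not an existing API):
$ curl -H 'Kontena-Grid-Token: <grid token>' http://master:8080/v1/nodes/bootstrap
{"discovery_url": "https://discovery.etcd.io/<token>", "node_number": 2}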
It seems that service stats shows wrong net I/O stats after a service is re-deployed. It should show the cumulative sum of network I/O (also from old containers), yes?
The grid internal etcd seems to be missing a DNS entry:
core@core-node-1 ~ $ sudo docker run -v /var/run/docker.sock:/var/run/docker.sock weaveworks/weaveexec:latest status dns
registry-1 10.81.26.138 6af8610f3dac 3e:be:06:06:3b:f1
registry-1.nbl-coreos 10.81.26.138 6af8610f3dac 3e:be:06:06:3b:f1
registry 10.81.26.138 6af8610f3dac 3e:be:06:06:3b:f1
registry.nbl-coreos 10.81.26.138 6af8610f3dac 3e:be:06:06:3b:f1
vpn-1 10.81.8.78 74b0099935dd 3e:be:06:06:3b:f1
vpn-1.nbl-coreos 10.81.8.78 74b0099935dd 3e:be:06:06:3b:f1
vpn 10.81.8.78 74b0099935dd 3e:be:06:06:3b:f1
vpn.nbl-coreos 10.81.8.78 74b0099935dd 3e:be:06:06:3b:f1
The above is from a pretty clean grid with just vpn and registry deployed.
I could not find any code that ties the etcd container into the grid-internal weave DNS. etcd used to have a grid-internal DNS entry in past releases, if my memory serves me. :) It might be easiest to implement by just labeling the etcd containers properly so that the weave_attacher picks them up and inserts the proper DNS entries.
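A rough sketch of the labeling idea; the label key io.kontena.container.name is an assumption about what weave_attacher matches on, not a confirmed key:
# hypothetical: start the grid etcd with a Kontena-style label so that
# weave_attacher picks it up and inserts a DNS entry
docker run -d --name kontena-etcd --label io.kontena.container.name=etcd quay.io/coreos/etcd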
It would be nice to have a local auth backend with the master. In cases where both master and nodes are deployed in private datacenters, it might be an issue to have auth in the public cloud. Some local option would be really useful in these kinds of cases. And by local I mean something close to the master, if not actually built in to the master.
Of course people can do this by "reverse-engineering" the auth API calls made in https://github.com/kontena/kontena/blob/master/server/app/services/auth_service/client.rb, but at least some "official" documentation about the API should be available. Also, to enable more out-of-the-box usability of Kontena, it would be really nice to have some local option available out of the box. Maybe a docker image that uses some embedded storage.
Sometimes it is necessary to deploy just a single service or a few selected services from the kontena.yml file. Add support for this. For example: kontena deploy <servicename1> <servicename2> <servicenameN>
Docker 1.7 requires that VolumesFrom is an array. Previous versions also accepted a string.
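For illustration, a Docker remote API container-create payload with VolumesFrom in the required array form (data-container is a placeholder container name):
{
  "Image": "nginx:latest",
  "HostConfig": {
    "VolumesFrom": ["data-container"]
  }
}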
As discussed in #150, the OpenVPN server sends an invalid route 172.17.42.1/31 that causes an error in Linux. The correct route is 172.17.42.1/32.
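In OpenVPN server configuration terms, the pushed route should use a /32 host mask, i.e.:
push "route 172.17.42.1 255.255.255.255"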
We need some kind of service discovery for agents. Etcd fits this nicely and it's already available in CoreOS.
Service definitions should include "sidekick" containers that share the network/process space of the main container. The concept should be somewhat similar to an app spec/pod.
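One possible shape for this in kontena.yml (the sidekicks key and its fields are purely hypothetical syntax, and the image names are placeholders):
app:
  image: myapp:latest
  sidekicks:
    log-shipper:
      image: fluent/fluentd:latest # hypothetical: shares network/process space with app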
Currently a user has to be invited to a Kontena server before he/she can register a Kontena account. This has to be changed so that a user can also register a Kontena account without an invitation, but an error is returned when the user is not allowed to use the current Kontena server and a further invitation is needed.
Supported workflow:
What also needs to be supported:
Ubuntu seems to be a target platform for Kontena, which is great. I'm using Ubuntu as my dev workstation and Ubuntu servers as master and agents. When setting up the VPN I ran into some issues. I got the .ovpn file and went to import it using gnome-network-manager, but it doesn't recognize the inline keys. This isn't really your problem, more of a limitation of the network manager, but if you could output a tar file with the certs separate, it might ease adoption a bit. Just a thought!
Bug Tracker for Ubuntu: https://bugs.launchpad.net/ubuntu/+source/network-manager-openvpn/+bug/606365
Thanks for all the work on Kontena, looking forward to using it once I get it set up.
At the moment, some internal system containers (like VPN) are shown by default. We need the ability to hide such system containers, as they might clutter the list of services and containers.
Container log streaming leaks memory because of a bug in the docker-api gem. See upserve/docker-api#286
The usage of the node provision commands should be made consistent in the CLI. At the moment, the create command takes the node name with the --name switch, while for the rest of the commands it is given as an argument. Therefore, the create command usage should be changed to something like this:
kontena node aws create [OPTIONS] [NAME]
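For comparison, current vs. proposed usage (my-node is a placeholder name):
$ kontena node aws create --name my-node # current
$ kontena node aws create my-node # proposed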
When using the Kontena deploy feature, all services are prefixed by default with the name of the parent directory containing the kontena.yml file. The user also has the possibility to prefix the services with a custom prefix. This might affect the kontena.yml file contents since, for example, when linking services it is required to name the linked services including the prefix. Therefore, some kind of wildcard support is required in the kontena.yml file to use the currently selected prefix instead of hard-coded values.
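A sketch of the idea; the ${prefix} placeholder syntax is hypothetical and would expand to the currently selected prefix:
wordpress:
  image: wordpress:latest
  links:
    - ${prefix}-mysql:mysql # hypothetical wildcard instead of a hard-coded prefix
mysql:
  image: mysql:5.6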
When provisioning nodes to DigitalOcean with the Kontena CLI, internal DNS is not working correctly on the provisioned nodes. This prevents using Kontena's built-in Docker registry because the agent can't pull any images from the registry.
I made a kontena.yml that looks similar to the following:
# jenkins configuration
jenkins:
  image: jenkinsci/jenkins:latest
  stateful: true
  instances: 1
  user: jenkins
  ports:
    - "9181:8080"
    - "9150:50000"
  affinity:
    - container!=gitblit
# gitblit configuration
gitblit:
  image: elderresearch/docker-gitblit:latest
  stateful: true
  instances: 1
  user: gitblit
  ports:
    - "9180:80"
    - "9118:9418"
    - "9122:29418"
  affinity:
    - container!=jenkins
However, when I run kontena deploy with this kontena.yml file, it creates the services and then fails with the error {"error":"Not found"}. For some reason it also creates the services as demo/vagrant-jenkins and demo/vagrant-gitblit. If I deploy them manually with kontena service deploy vagrant-jenkins and kontena service deploy vagrant-gitblit, it works.
Why does it append the demo/ portion if we're already on the demo grid? (Surely that's redundant and confusing, because my first instinct was to do kontena service deploy demo/vagrant-jenkins, which doesn't work.) Secondly, it looks like kontena deploy is calling the services wrong.
tl;dr: docker won't start due to malformed (?) DOCKER_OPTS after kontena-agent install
I have a VPS where lsb_release -a reports "Ubuntu 14.04.2 LTS" and docker-engine is at version 1.7.1-0~trusty. Adding the Kontena archive and installing kontena-agent modifies my /etc/default/docker such that DOCKER_OPTS is
DOCKER_OPTS="--bridge=weave --fixed-cidr='10.81.null.0/24' --insecure-registry='10.81.0.0/16'"
and the Docker service fails to start. The "null" bit is suspicious, no?
$ sudo docker -d --bridge=weave --fixed-cidr='10.81.null.0/24' --insecure-registry='10.81.0.0/16'
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
INFO[0000] [graphdriver] using prior storage driver "aufs"
WARN[0000] Running modprobe bridge nf_nat failed with message: , error: exit status 1
FATA[0000] Error starting daemon: Error initializing network controller: invalid CIDR address: 10.81.null.0/24
Okay, let's take a guess at a more reasonable value:
$ sudo docker -d --bridge=weave --fixed-cidr='10.81.0.0/24' --insecure-registry='10.81.0.0/16'
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
INFO[0000] [graphdriver] using prior storage driver "aufs"
WARN[0000] Running modprobe bridge nf_nat failed with message: , error: exit status 1
FATA[0000] Error starting daemon: Error initializing network controller: Error creating default "bridge" network: bridge device with non default name weave must be created manually
Shucks :( Maybe the modprobe warning is the problem?
$ lsmod|egrep '^(bridge|nf_nat)\b'
bridge 110833 0
nf_nat 21841 5 nf_nat_ftp,ipt_MASQUERADE,nf_nat_ipv4,xt_nat,iptable_nat
wat :(
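(So both modules are in fact loaded.) My guess is that the intended value has a real node number in the third octet instead of null; the 2 below is just an assumed node number within the 10.81.0.0/16 grid range:
DOCKER_OPTS="--bridge=weave --fixed-cidr='10.81.2.0/24' --insecure-registry='10.81.0.0/16'"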
After the 0.6 release, local testing with the Vagrantfile is not valid anymore and agent provisioning will fail. See kontena/docs#2
Mounting /sys is broken in Docker 1.6.1, see: moby/moby#13097.
kontena container exec does not return the proper exit code from the container. It also outputs stderr to stdout, which is not good behaviour.
It also seems that some commands do not get through exec calls.
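The exit code problem is easy to demonstrate (web-1 is a placeholder container name; false should exit with 1):
$ kontena container exec web-1 false
$ echo $?
0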
Hi.
This project looks promising.
Is it possible to run the server and the agents on CoreOS?
If yes, what would be the setup process?
Thx
After running 'gem install kontena-cli' and then 'kontena -v' I got this error message:
/usr/lib/ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find kontena-cli (>= 0) amongst [] (Gem::LoadError)
        from /usr/lib/ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec'
        from /usr/lib/ruby/1.9.1/rubygems.rb:1210:in `gem'
        from /usr/local/bin/kontena:18:in `<main>'
Some related information:
sudo apt-get install ruby1.9.3
sudo update-alternatives --config ruby
A grid should have an etcd discovery URL and an initial size. If a discovery URL is not given, then the server will generate a public etcd discovery URL based on the initial size.
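For reference, etcd's public discovery service already supports generating a URL for a given initial size; the token in the output below is just an example:
$ curl 'https://discovery.etcd.io/new?size=3'
https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de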
A private docker registry is a pain to set up. The idea is to have the necessary commands in the Kontena CLI to create a private docker image registry for a grid without hassle.
When I was dealing with some odd VPN issues (#150), I eventually found that my issue was due to the Kontena Agent/Node not being able to contact the Kontena Master and vice versa. I had accidentally removed TCP port 8080 from our Security Group, and the Kontena servers were no longer able to talk to each other.
$ kontena node list
Name OS Driver Labels Status
ip-10-0-0-99 Ubuntu 14.04.3 LTS (3.13.0-63-generic) aufs - online
A status of online is unexpected, as the Node and Master haven't been in communication over port 8080. Grid list wasn't of much help either:
$ kontena grid list
Name Nodes Services Users
mygrid * 1 0 1
The issue became more apparent when modifying things, for example:
~$ kontena vpn delete
{"code":503,"message":"Node unavailable","backtrace":null}
I would think this would be reproducible by blocking TCP port 8080 between node containers and running the previous commands.
Thanks guys,
-Seth
As mentioned in #88, it's confusing to show the full id/path for services because the user has already switched to the grid. We should at least remove the grid prefix from service list and autocomplete for now.
Was following the bare metal install:
# apt-get install kontena-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
kontena-server : Depends: lxc-docker (= 1.7.1) but it is not installable or
docker-engine (= 1.7.1) but 1.7.1-0~trusty is to be installed
E: Unable to correct problems, you have held broken packages.
OS Version:
# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.3 LTS"
I initially had the latest docker installed (1.8), but purged it and reinstalled 1.7.1. Any ideas?
We should balance containers automatically when grid nodes join/leave.
I terminated an instance that was running as a Kontena Node. After creating a new node and registering it with the cluster, nothing happened. Kontena did not try to restart the VPN, the registry, or the services that were running on that host. Is this the desired behavior? Kontena is still stating that the registry is up and running just fine, even though the host it is on is gone:
$ kontena service show registry
mygrid/registry:
  status: running
  stateful: yes
  scaling: 1
  image: kontena/registry:2.1
  dns: registry.mygrid.kontena.local
  affinity:
    - node==ip-10-0-0-99
  cmd:
  env:
    - REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/registry
    - REGISTRY_HTTP_ADDR=0.0.0.0:80
  ports:
  volumes:
    - /registry
  volumes_from:
  links:
  cap_add:
  cap_drop:
  containers:
That host is no longer in the cluster:
$ kontena node list
Name OS Driver Labels Status
ip-10-0-0-166 Ubuntu 14.04.2 LTS (3.13.0-48-generic) aufs - online
The documentation says that volumes are defined in plural form in kontena.yml: http://www.kontena.io/docs/references/kontena-yml
But in reality the CLI uses the singular form:
https://github.com/kontena/kontena/blob/master/cli/lib/kontena/cli/stacks/stacks.rb#L140
For the sake of consistency, the YAML should IMHO use the plural form to be consistent with the other options (ports etc.).
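In other words, the documented plural form, consistent with ports, would look like this (the service name and paths are placeholders):
myservice:
  image: myimage:latest
  ports:
    - "80:80"
  volumes:
    - /data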
When I start a kontena service with:
kontena service create ghost-blog ghost:0.5 --stateful -p 8181:2368
kontena service deploy ghost-blog
it correctly starts a docker container on node1.
But when I stop the docker container on node1 using docker stop ghost-blog-1, I expected kontena to restart the container and keep the service running.
Kontena seems to immediately recognize that the container is stopped, but does not restart it.
$ kontena service show ghost-blog
first-grid/ghost-blog:
  status: running
  stateful: yes
  scaling: 1
  image: ghost:0.5
  cmd: -
  env:
  ports:
    - 8181:2368/tcp
  links:
  containers:
    ghost-blog-1:
      rev: 2015-07-27 11:42:11 UTC
      node: node1
      dns: ghost-blog-1.kontena.local
      ip:
      public ip: xxx.xxx.xxx.xxx
      status: stopped
Do you plan to support automatic restart of failed containers?
At the moment, when Kontena deploy is used, all services are deployed even if there was no change in the service configuration. This is unnecessary and might cause downtime, for example for database services. Ideally, only the services that have changed would be updated by the kontena deploy command.
At the moment kontena connect expects host and port information with the protocol included. It should be possible to use connect without the protocol, like:
kontena connect 192.168.66.100:8080
Assume I start two nodes:
$ kontena node list
Name OS Driver Labels Status
node1 CoreOS 717.3.0 (4.0.5) overlay - online
node2 CoreOS 717.3.0 (4.0.5) overlay - online
If I completely stop the machine node2, I expect the status to change from online to offline. However, the status is not updated.
Furthermore, if there were any services running on node2, I expect them to be restarted on node1, which also does not happen.
Do you have any plans to fix/implement this behavior?
Discovery seems to cause some race conditions when nodes are booted in parallel. We know the node overlay IPs beforehand, so static bootstrapping should be a better option. And it does not call out to the internet.
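For comparison, static etcd bootstrapping with the already-known overlay IPs would look roughly like this (node names and addresses are placeholders; the flags are standard etcd v2 bootstrap flags):
etcd --name node-1 \
  --initial-cluster 'node-1=http://10.81.0.1:2380,node-2=http://10.81.0.2:2380' \
  --initial-cluster-state new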
This is handy when using the CLI from CI.
At the moment, Kontena installation is quite a complex operation. The idea is to make it possible for users to provision Kontena Nodes on any cloud provider through the CLI with a few simple commands.
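Something along these lines; the provider subcommands mirror the kontena node aws create form discussed earlier, and the --region flag is an assumption:
$ kontena node aws create --region eu-west-1 node-1
$ kontena node digitalocean create --region ams2 node-2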