gianarb / orbiter
Orbiter is an open-source Docker Swarm autoscaler.
License: Apache License 2.0
This is my first time using Orbiter. Currently I have this swarmtest.yml:
version: '3.7'

networks:
  my_docker_network:

services:
  orbiter:
    image: gianarb/orbiter
    command: /bin/orbiter daemon --debug
    ports:
      - 8000:8000
    volumes:
      - /var/run/docker.sock:/var/run/docker.dock
    deploy:
      placement:
        constraints:
          - node.role == manager
      mode: replicated
      replicas: 1
  web:
    image: nginx:alpine
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"
    volumes:
      - ./web:/var/www/web
    networks:
      - my_docker_network
    deploy:
      labels:
        - "orbiter=true"
      replicas: 1
  phpfpm:
    image: php:7.3-fpm-alpine3.9
    volumes:
      - ./web:/var/www/web
    networks:
      - my_docker_network
    deploy:
      labels:
        - "orbiter=true"
      replicas: 1
which I run with: docker stack deploy -c swarmtest.yml myswarm
I can see from docker service ls that they all came up:
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
fv0c3eea5g2n myswarm_orbiter replicated 1/1 gianarb/orbiter:latest *:8000->8000/tcp
ez3r46p6l297 myswarm_phpfpm replicated 1/1 php:7.3-fpm-alpine3.9
ne5fkzualf1x myswarm_web replicated 1/1 nginx:alpine *:80->80/tcp, *:443->443/tcp
However, when I run a load test with wrk, the server's CPU goes up, but the apps never scale up; they always stay at 1 replica no matter what.
Additionally, does this look right?:
$ curl -v -X GET http://127.0.0.1:8000/v1/orbiter/autoscaler
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> GET /v1/orbiter/autoscaler HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 29 Jul 2021 14:55:37 GMT
< Content-Length: 11
< Content-Type: text/plain; charset=utf-8
<
* Connection #0 to host 127.0.0.1 left intact
{"data":[]}
What else do I need to do to make Orbiter autoscale?
Providers such as Docker Swarm can manage reconciliation by themselves, but what happens with DigitalOcean or other providers that cannot?
At the moment, nothing happens. We are not able to detect when a node is down and replace it with a new instance. That's something we need to address as soon as possible.
It looks like the last change was about a year ago... just wondering, is this still being maintained?
At the moment orbiter has no state: it doesn't know when it triggered a scaling action, for example. This can be a big limitation, because possible future features include:
Don't trigger another scale action if one happened just 1 minute ago, because it may be a false positive.
Or:
Give me the list of scaling actions orbiter performed this week.
Or:
Move away from a YAML-based configuration to something like:
orbiterctl autoscaler add --up 3 --down 3 --provider digitalocean --token aw4gagearg autoscalerName
Or:
Manage autoscaling groups not via tags (because some providers may not support tags) but by storing the servers managed by orbiter in orbiter itself.
Etcd could be a good candidate for an embedded distributed storage supporting all these features.
This is a very big change and it's not going to happen soon, but it's still a good thing to have.
This post explains how to use tags.
At the moment the DigitalOcean implementation is very simple: we add and delete servers without keeping track of them, so the deletion can hit any kind of server in the running state.
We need to use tags to mark every server we start as "managed by orbiter" and remove only the servers that are part of this group.
Tags can be something like "orbiter_autoscaling=nameoftheautoscaler"; this way we know a server is managed by us, and we won't delete other instances.
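The tag check described above could be sketched as follows (managedBy and the sample tags are illustrative, not orbiter's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// tagPrefix marks a server as belonging to a specific orbiter autoscaler,
// e.g. "orbiter_autoscaling=nameoftheautoscaler".
const tagPrefix = "orbiter_autoscaling="

// managedBy reports whether a server's tags mark it as managed by the
// given autoscaler; servers without the tag are never touched.
func managedBy(tags []string, autoscaler string) bool {
	for _, t := range tags {
		if strings.HasPrefix(t, tagPrefix) && strings.TrimPrefix(t, tagPrefix) == autoscaler {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(managedBy([]string{"web", "orbiter_autoscaling=frontend"}, "frontend")) // true
	fmt.Println(managedBy([]string{"database"}, "frontend"))                            // false
}
```

With a helper like this, the delete path only ever considers servers that carry the autoscaler's own tag.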
You don't need to fetch and loop through all services in the Swarm at
orbiter/autoscaler/autoscaler.go
Line 56 in e875318
Instead you can add a filter and fetch the required service directly. E.g. (assuming the usual Docker client imports, "github.com/docker/docker/api/types" and "github.com/docker/docker/api/types/filters"):
serviceFilter := filters.NewArgs()
serviceFilter.Add("id", serviceId)
services, err := dockerClient.ServiceList(ctx, types.ServiceListOptions{Filters: serviceFilter})
Also see: https://docs.docker.com/engine/reference/commandline/service_ls/#filtering
In #42 we introduced a workaround to get the latest https://github.com/docker/docker/client, since they introduced a breaking change in how they handle Go packages. This issue is here to monitor that, and as a reminder that dep is intentionally broken.
After some long chats with other people and with myself, we realised that one single autoscaler for everything is not going to be usable or maintainable. For this reason we will make orbiter an autoscaler for Docker Swarm only.
I created the orbiter service using a stack file exposing port 8000. Services with the label orbiter=true register with orbiter. But when I tried to scale up the service using a curl command like:
curl -v -d '{"direction": true}' http://localhost:8000/v1/orbiter/handle/autoswarm/web/docker
where 'web' is the name of my service, it stayed quiet and gave no response. After going through the service logs, what I found was the log message msg="GET / not found".
Here are my orbiter service logs:
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T08:15:32Z" level=info msg="orbiter started"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T08:15:32Z" level=info msg="API Server run on port :8000"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T08:15:33Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T08:15:33Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T08:15:33Z" level=info msg="Registering /handle/autoswarm/hari to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:05:32Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:05:32Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:09:05Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:09:05Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:09:13Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:09:13Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:09:17Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:09:17Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:16:00Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:16:00Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:16:07Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:16:07Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:23:08Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:23:08Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:23:12Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:23:12Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:25:08Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:25:08Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:25:48Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:25:48Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:25:53Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:25:53Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:33:20Z" level=info msg="GET / not found."
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:36:11Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:36:11Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:36:11Z" level=info msg="Registering /handle/autoswarm/amz to orbiter. (UP 1, DOWN 1, COOL 1)"
I hope someone can help me solve this issue.
Thanks,
hari
I started orbiter with the attached orbiter-stack.yml file, then a service with the orbiter flags. The initial scale up and scale down worked, but after scaling down to 1, scaling up gave an error about not being able to scale down. I used "curl -v -X POST localhost:8081/handle/autoswarm/stage-investor-relations_app/up" for all tests. The log file is also attached.
Move the autodetection logic into the provider. If a provider does not support autodetection, it can simply return an error.
Make the fallback function smarter, so that it scales and calls all the autodetection providers to self-configure orbiter.
These two points are necessary to make the codebase flexible and maintainable, but the POC is there and it works.
The Gopkg.lock and Gopkg.toml are out of sync. dep status reports:
Lock inputs-digest mismatch. This happens when Gopkg.toml is modified.
Run `dep ensure` to regenerate the inputs-digest.
There also seems to be a manifest.json containing another whole set of dependencies; it should be deleted, since nothing uses it.
Steps to reproduce
docker service logs <service name used in deployment>
Additional Details
Logs:
standard_init_linux.go:211: exec user process caused "exec format error"
standard_init_linux.go:211: exec user process caused "exec format error"
standard_init_linux.go:211: exec user process caused "exec format error"
Some investigation shows that there is no multi-arch binary/container image uploaded to Docker Hub.
Add a multi-arch binary and Docker image as outlined in https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/
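The approach from the linked post can be sketched with a couple of buildx commands (the builder name, tag, and platform list are illustrative assumptions, not the project's actual release setup):

```
# Create and select a buildx builder, then build and push a multi-arch image.
docker buildx create --name multiarch --use
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  -t gianarb/orbiter:latest \
  --push .
```

buildx produces a manifest list, so each node pulls the image matching its own architecture and the "exec format error" goes away.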
version: "3.4"

services:
  orbiter:
    image: gianarb/orbiter
    ports:
      - 8001:8001
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          - node.role == manager
      mode: replicated
      replicas: 1
Even if I configure the ports differently, or use 8000, this happens:
"NetworkSettings": {
    "Bridge": "",
    "SandboxID": "589789cd6bdacc517390d98ae5430560b6f1274b84f59b6ecd26ca06dc9d31a6",
    "HairpinMode": false,
    "LinkLocalIPv6Address": "",
    "LinkLocalIPv6PrefixLen": 0,
    "Ports": {},
And I cannot access the container. The only way I found to use it is by following this.
Please, help!
I think this issue can be implemented in multiple PRs.
None of the commands in the Makefile echo what they run; this is very annoying, as one of the most important features of Make is that it shows you what it's doing.
There are actually two Dockerfiles, named Dockerfile.image and Dockerfile.build. They can be merged into a single multi-stage Dockerfile. Also, the image Dockerfile has the orbiter command in both ENTRYPOINT and CMD, and this does not work. (No review on that?) Introduced in #30.
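A merged version could look roughly like this multi-stage sketch (the Go version, paths, and base images are assumptions, not the repo's actual contents):

```
# Build stage: compile the binary (replaces Dockerfile.build).
FROM golang:1.12 AS build
WORKDIR /go/src/github.com/gianarb/orbiter
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/orbiter .

# Runtime stage: small image shipping only the binary (replaces Dockerfile.image).
FROM alpine:3.9
COPY --from=build /bin/orbiter /bin/orbiter
# Put the binary in ENTRYPOINT and only the default arguments in CMD,
# instead of repeating the full command in both.
ENTRYPOINT ["/bin/orbiter"]
CMD ["daemon"]
```

Splitting ENTRYPOINT and CMD this way also lets users override just the arguments (e.g. adding --debug) without repeating the binary path.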
For example, the binaries target should just be an alias for a bin/orbiter target without .PHONY, while build/docker should be a .PHONY target.
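That split could be sketched like this (the recipes are illustrative assumptions, not the repo's actual Makefile):

```
# Real file target: Make rebuilds it only when its inputs change, so no .PHONY.
bin/orbiter:
	go build -o $@ .

# Plain alias to the file target.
binaries: bin/orbiter

# Always runs, since it never produces a file named build/docker.
.PHONY: build/docker
build/docker:
	docker build -t gianarb/orbiter .
```

The distinction matters because a .PHONY binaries target would rebuild the binary on every invocation, while a file target gets Make's incremental behavior for free.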
With Docker 17.06, the very first action performed by orbiter destroys all previously running containers instead of scaling up or down.
Server:
Version: 17.06.0-ce
API version: 1.30 (minimum version 1.12)
Go version: go1.8.3
Git commit: 02c1d87
Built: Fri Jun 23 21:21:56 2017
OS/Arch: linux/amd64
Experimental: false
The effect: when Docker first receives a request with the old API signature, it applies a "service update" operation with the new replica value. This scales up/down as requested AND restarts all previously running containers according to the defined update policy.
If only one container was running before, this leads to service unavailability during the restarts.
The cause: API changes between github.com/docker/docker 1.13.1, which orbiter uses, and the newer version used by Docker 17.06. The newer client has a different signature for ServiceInspectWithRaw (line 17 here), adding a new parameter of type types.ServiceInspectOptions (godoc).
The proposed solution is to update the Docker client library and change the call to ServiceInspectWithRaw to pass the new options parameter.
@gianarb I'm preparing a pull request, but this change could potentially impact other aspects or functionality.
At the moment our policy only manages up and down, but we also need some limit-related settings:
min: how many instances you always need up.
max: the limit on the other side. It can be 0, which means infinite.
Hi there,
I got an issue when I tried to run stack.yaml on my servers. Could you please help with this?
starting container failed: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: "daemon": executable file not found in $PATH": unknown
thx,
Lukman
We need to read environment variables during configuration parsing, in order to be able to inject them into the config. To keep everything flexible, we can use Go's template engine with a context object that, for now, only supports env vars. Something like:
autoscalers:
  infra_scale:
    provider: digitalocean
    parameters:
      token: {{ .Env.DIGITALOCEAN_TOKEN }}
      region: nyc3
      size: 512mb
      image: ubuntu-14-04-x64
      key_id: 163422
      # https://www.digitalocean.com/community/tutorials/an-introduction-to-cloud-config-scripting
      userdata: |
        #cloud-config
        runcmd:
          - sudo apt-get update
          - wget -qO- https://get.docker.com/ | sh
    policies:
      frontend:
        up: 2
        down: 3
This way, if we need to, we can add more than just env vars to the template context in the future.
Add documentation for autodetection and how it works for Swarm.
TY!
To get some useful information out of the daemon, we can implement a set of debug routes that are enabled when the -debug option is set.
When running make init I got the following error:
Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.99.100:2376": remote error: tls: bad certificate
I found some references to this issue here docker-archive/toolbox#346 (comment)
Versions:
Now that we have a logo, some documentation, and the project is reaching good stability, I think we should think about a fancy static site. As you probably know, I am not able to make anything good in terms of colors and design.
I think we can use a subdomain of scaledocker.com, like orbiter.scaledocker.com.
I am looking for some help!
We can write a zero-configuration fallback implementation for Docker Swarm, because we can use service labels to discover all the services that orbiter should manage. For example, with a set of labels like:
orbiter=true, orbiter_up=3, orbiter_down=2
we can create an autoscaler.
From the Docker client point of view, orbiter already uses:
cli, err := client.NewEnvClient()
which means that on the Docker side we are already zero-configuration.
Thanks @aluzzardi for the idea!
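The label fallback could be sketched like this (the policy type, fromLabels, and the defaults are hypothetical illustrations, not orbiter's actual types):

```go
package main

import (
	"fmt"
	"strconv"
)

// policy is a hypothetical shape for what the label fallback could produce.
type policy struct {
	Up, Down int
}

// fromLabels builds a scaling policy from service labels such as
// orbiter=true, orbiter_up=3, orbiter_down=2. It returns ok=false when
// the service has not opted in to orbiter.
func fromLabels(labels map[string]string) (policy, bool) {
	if labels["orbiter"] != "true" {
		return policy{}, false
	}
	p := policy{Up: 1, Down: 1} // defaults when the numeric labels are absent
	if n, err := strconv.Atoi(labels["orbiter_up"]); err == nil {
		p.Up = n
	}
	if n, err := strconv.Atoi(labels["orbiter_down"]); err == nil {
		p.Down = n
	}
	return p, true
}

func main() {
	p, ok := fromLabels(map[string]string{"orbiter": "true", "orbiter_up": "3", "orbiter_down": "2"})
	fmt.Println(ok, p.Up, p.Down) // true 3 2
}
```

Iterating the services returned by the Docker client and feeding each service's labels through a function like this is all the configuration the Swarm fallback would need.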