
orbiter's Introduction


Orbiter is an easy-to-run autoscaler for Docker Swarm. It is designed to work out of the box.

We designed it in collaboration with InfluxData to show how metrics can be used to create automation around Docker tasks.

orbiter daemon

Orbiter is a daemon that exposes an HTTP API to trigger scaling up or down.

HTTP API

Orbiter exposes an HTTP JSON API that you can use to trigger scaling UP (true) or DOWN (false).

The concept is very simple: when your monitoring system decides it is time to scale, it calls orbiter to apply the right action:

curl -v -d '{"direction": true}' \
    http://localhost:8000/v1/orbiter/handle/infra_scale/docker

Or if you prefer

curl -v -X POST http://localhost:8000/v1/orbiter/handle/infra_scale/docker/up

You can look at the list of services managed by orbiter:

curl -v -X GET http://localhost:8000/v1/orbiter/autoscaler

Check the health endpoint to know if everything is working:

curl -v -X GET http://localhost:8000/v1/orbiter/health

Autodetect

orbiter daemon

The daemon starts in autodetect mode. At the moment this mode only detects Docker Swarm mode. It uses the environment variable DOCKER_HOST (and related ones) to locate a Docker daemon. If the daemon is in Swarm mode, orbiter looks at all the services currently running.

If a service is labeled with orbiter=true, orbiter auto-registers the service and enables autoscaling for it.

Let's say that you started a service:

docker service create --label orbiter=true --name web -p 80:80 nginx

When you start orbiter, it auto-registers an autoscaler called autoswarm/web. By default, up and down are both set to 1, but you can override them with the labels orbiter.up=3 and orbiter.down=2.

This makes it extremely easy to run orbiter inside a Docker Swarm.

A background job reads the Docker Swarm events API to keep the registered services in sync.

With docker

docker run -it -v ${PWD}/your.yml:/etc/orbiter.yml -p 8000:8000 gianarb/orbiter daemon

We publish the image gianarb/orbiter on hub.docker.com. You can run it with your own configuration.

In this example I am using volumes, but if you have Docker Swarm 1.13 or later up and running you can use secrets.

orbiter's People

Contributors

afemartin, fntlnz, gianarb, gzgithub, jwitko, solidnerd


orbiter's Issues

Create context and template to make configuration smarter.

We need to read environment variables during configuration parsing so that they can be injected into the configuration.

To keep everything flexible we can use the Go template engine with a context object that, for now, only supports env vars. Something like:

autoscalers:
  infra_scale:
    provider: digitalocean
    parameters:
      token: {{ .Env.DIGITALOCEAN_TOKEN }}
      region: nyc3
      size: 512mb
      image: ubuntu-14-04-x64
      key_id: 163422
      # https://www.digitalocean.com/community/tutorials/an-introduction-to-cloud-config-scripting
      userdata: |
        #cloud-config

        runcmd:
          - sudo apt-get update
          - wget -qO- https://get.docker.com/ | sh
    policies:
      frontend:
        up: 2
        down: 3

This way we can extend the context in the future if we need to, beyond just env vars.

VirtualBox host-only adapters error

When running make init I got the following error:

Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.99.100:2376": remote error: tls: bad certificate

I found some references to this issue here docker-archive/toolbox#346 (comment)

Versions:

  • OS: Mac OS X 10.11.6
  • VirtualBox: 5.1.20
  • Docker Machine: 0.11.0

Any additional configuration required to make Orbiter autoscale?

This is my first time using Orbiter. Currently I have this swarmtest.yml:

version: '3.7'

networks:
  my_docker_network:

services:
  orbiter:
    image: gianarb/orbiter
    command: /bin/orbiter daemon --debug
    ports:
      - 8000:8000
    volumes:
      - /var/run/docker.sock:/var/run/docker.dock
    deploy:
      placement:
        constraints:
          - node.role == manager
      mode: replicated
      replicas: 1
  web:
    image: nginx:alpine
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"
    volumes:
      - ./web:/var/www/web
    networks:
      - my_docker_network
    deploy:
      labels:
        - "orbiter=true"
      replicas: 1
  phpfpm:
    image: php:7.3-fpm-alpine3.9
    volumes:
      - ./web:/var/www/web
    networks:
      - my_docker_network
    deploy:
      labels:
        - "orbiter=true"
      replicas: 1

which I run with: docker stack deploy -c swarmtest.yml myswarm

I see from docker service ls that they all came up:

$ docker service ls
ID             NAME              MODE         REPLICAS   IMAGE                    PORTS
fv0c3eea5g2n   myswarm_orbiter   replicated   1/1        gianarb/orbiter:latest   *:8000->8000/tcp
ez3r46p6l297   myswarm_phpfpm    replicated   1/1        php:7.3-fpm-alpine3.9
ne5fkzualf1x   myswarm_web       replicated   1/1        nginx:alpine             *:80->80/tcp, *:443->443/tcp

However, when I run a load test using wrk, I see the CPU of the server go up, but the apps never scale up and always stay at 1 replica no matter what.

Additionally, does this look right?:

$ curl -v -X GET http://127.0.0.1:8000/v1/orbiter/autoscaler
Note: Unnecessary use of -X or --request, GET is already inferred.
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> GET /v1/orbiter/autoscaler HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 29 Jul 2021 14:55:37 GMT
< Content-Length: 11
< Content-Type: text/plain; charset=utf-8
<
* Connection #0 to host 127.0.0.1 left intact
{"data":[]}

What are other things that I must do in order to make Orbiter autoscale?

Specify min and max.

At the moment our policy only manages up and down, but we also need some related limits:

  • min: how many instances must always be up.
  • max: the limit on the other side. It can be 0, which means infinite.

(bug) "exec format error" on Raspberry PI 3 & 4

  • Steps to reproduce

    1. load docker on raspberry pi
    2. launch service as stated in documentation
    3. execute docker service logs <service name used in deployment>
  • Additional Details
    Logs:

standard_init_linux.go:211: exec user process caused "exec format error",
standard_init_linux.go:211: exec user process caused "exec format error",
standard_init_linux.go:211: exec user process caused "exec format error",

Doing some discovery, it appears that there is no multiarch binary/container uploaded to Docker Hub.

  • Suggested changes

Add a multiarch binary and docker image as outlined in https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/

Docker Swarm zero conf

We can write a fallback zero-configuration implementation for Docker Swarm, because we can use service labels to discover all the services that need to be managed by orbiter. For example, with a set of labels like:

orbiter=true, orbiter_up=3, orbiter_down=2 we can create an autoscaler.

From a docker client point of view orbiter is already using:

cli, err := client.NewEnvClient()

This means that on the Docker side we are already zero-configuration.

Thanks @aluzzardi for the idea!

Vendor out of sync

The Gopkg.lock and Gopkg.toml are out of sync.

dep status
Lock inputs-digest mismatch. This happens when Gopkg.toml is modified.
Run `dep ensure` to regenerate the inputs-digest.

It also seems that there is a manifest.json with another whole set of dependencies; it should be deleted since nothing uses it.

No response from curl command with "direction=true/false"

I created the orbiter service using a stack file exposing port 8000. Services with the label orbiter=true are registered with orbiter. But when I tried to scale up the service using a curl command like

curl -v -d '{"direction": true}'
http://localhost:8000/v1/orbiter/handle/autoswarm/web/docker

where 'web' is the name of my service, nothing happened and there was no response. After going through the service logs, what I found was the log message msg="GET / not found.".

Here are my orbiter service logs:
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T08:15:32Z" level=info msg="orbiter started"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T08:15:32Z" level=info msg="API Server run on port :8000"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T08:15:33Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T08:15:33Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T08:15:33Z" level=info msg="Registering /handle/autoswarm/hari to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:05:32Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:05:32Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:09:05Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:09:05Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:09:13Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:09:13Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:09:17Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:09:17Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:16:00Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:16:00Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:16:07Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:16:07Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:23:08Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:23:08Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:23:12Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:23:12Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:25:08Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:25:08Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:25:48Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:25:48Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:25:53Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:25:53Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:33:20Z" level=info msg="GET / not found."
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:36:11Z" level=info msg="Successfully connected to a Docker daemon"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:36:11Z" level=info msg="Registering /handle/autoswarm/web to orbiter. (UP 1, DOWN 1, COOL 1)"
hari_orbiter.1.72cgeu7t7aoh@ubuntuvm-tmp | time="2020-09-02T13:36:11Z" level=info msg="Registering /handle/autoswarm/amz to orbiter. (UP 1, DOWN 1, COOL 1)"

I hope someone can help me solve this issue.
Thanks
hari

executable file not found in $PATH

Hi there,

I got an error when I tried to run stack.yaml on my servers. Could you please help with this?

starting container failed: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: "daemon": executable file not found in $PATH": unknown

thx,
Lukman

scale up post cmd failure

I started orbiter with the attached orbiter-stack.yml file, then a service with orbiter flags. The initial scale up and scale down worked, but after scaling down to 1, scaling up gave an error about not being able to scale down. I used "curl -v -X POST localhost:8081/handle/autoswarm/stage-investor-relations_app/up" for all tests. The log file is also attached.

orbiter-scale-error.zip

Makefile improvements

I think this issue can be addressed in multiple PRs.

Echoing the command output is useful

None of the commands in the Makefile are echoed, which is very annoying, since one of the most important features of Make is that it shows you what it is running.

Dockerfile not aligned with Makefile

There are actually two Dockerfiles, named Dockerfile.image and Dockerfile.build.
They can be merged into a single multi-stage build Dockerfile. Also, the image Dockerfile has the orbiter command in both ENTRYPOINT and CMD, and this does not work (no review on that?). Introduced in #30

PHONY

For example, the binaries target should just be an alias for a bin/orbiter target, without PHONY.
Instead, build/docker should be a PHONY target.

Autoscaling destroy all previous container on first action

Using Docker 17.06, the very first action performed by orbiter destroys all previously running containers instead of scaling up or down.

Server:
 Version:      17.06.0-ce
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   02c1d87
 Built:        Fri Jun 23 21:21:56 2017
 OS/Arch:      linux/amd64
 Experimental: false

The effect: when Docker first receives a request with the old API signature, it applies a "service update" operation with the new replica value. This scales up/down as requested AND restarts all previously running containers according to the defined update policy.
If only one container was running before, this leads to service unavailability during the restarts.

The cause: API changes between github.com/docker/docker 1.13.1, used by orbiter, and the latest version used by Docker 17.06. The latest version has a different signature for the method ServiceInspectWithRaw (line 17 here), adding a new parameter of type types.ServiceInspectOptions (godoc).

The proposed solution is to update the Docker client library and change the call to the ServiceInspectWithRaw method.
@gianarb I am preparing a pull request, but this change could potentially impact other aspects or functionality.

Create autoscaling group in Digitalocean based on tags

This post explains how to use tags.

At the moment the DigitalOcean implementation is very simple. We add and delete servers without keeping track of them; any server in status running can be affected.

We need to use tags to mark every server we start as "managed by orbiter", and remove only the servers that are part of this group.
A tag can be something like "orbiter_autoscaling=nameoftheautoscaler"; this way we know a server is managed by us and we will not delete other instances.

Reconciliation

Providers such as Docker Swarm can manage reconciliation by themselves, but what happens with DigitalOcean or other providers that cannot manage this behavior?

At the moment, nothing happens. We cannot detect when a node is down and replace it with a new instance. That is something we need to address as soon as possible.

  • Basic API
  • Provider: SwarmMode
  • Provider: DigitalOcean

[COMMUNITY HELP!!] Site docs and so on

Now that we have a logo, some documentation, and the project is reaching good stability, I think we should think about a fancy static site. As you probably know, I am not able to make anything good in terms of colors and design.

I think we can use a subdomain from scaledocker.com like orbiter.scaledocker.com

I am looking for some help!!

autoconfig refactoring.

  • Move the autodetection logic into the provider. If the provider does not support autodetection, it can just return an error.

  • Make the fallback function smarter, so that it scales and calls every autodetection provider to self-configure orbiter.

These two points are necessary to make the codebase flexible and maintainable. But the POC is there and it works.

Service starts but does not use the specified port

version: "3.4"

services:
  orbiter:
    image: gianarb/orbiter
    ports:
      - 8001:8001
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          - node.role == manager
      mode: replicated
      replicas: 1

Even if I set the ports differently, or to 8000, this happens:

"NetworkSettings": { "Bridge": "", "SandboxID": "589789cd6bdacc517390d98ae5430560b6f1274b84f59b6ecd26ca06dc9d31a6", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": {},

And I cannot access the container. The only way I found to use it is by following this:

https://stackoverflow.com/questions/19335444/how-do-i-assign-a-port-mapping-to-an-existing-docker-container/26622041#26622041

Please, help!

Optimize by querying by ID

You don't need to fetch and loop through all the services in the Swarm, as in:

for _, service := range services {

Instead you can add a filter, and fetch the required service directly. E.g.:

serviceFilter := filters.NewArgs()
serviceFilter.Add("id", serviceId)
services, err := dockerClient.ServiceList(ctx, types.ServiceListOptions{Filters: serviceFilter})

Also see: https://docs.docker.com/engine/reference/commandline/service_ls/#filtering

Make orbiter a swarm autoscaler

After long chats with other people (and with myself), we realised that one single autoscaler for everything is not going to be usable or maintainable. For this reason, we will make orbiter an autoscaler for Docker Swarm.

Manage status and distributed datastore.

At the moment orbiter has no state. It does not know when it triggered a scaling action, for example. This can be a big limitation, because a future feature could be:

Don't trigger another scale action if you did something just 1 minute ago, because it could be just a false positive.

Or

Give me the list of scaling that orbiter did this week

Or

Move away from a yml based configuration to something like

orbiterctl autoscaler add --up 3 --down 3 --provider digitalocean --token aw4gagearg autoscalerName

Or

Manage autoscaling groups not via tags (because some providers may not support tags) but by storing the servers managed by orbiter in orbiter itself.

Etcd could be a good candidate for an embedded distributed storage to support all these features.

This is a very big change and it is not going to be done soon, but it is still a good thing to have.
