
rotor's Introduction

Rotor Logo

turbinelabs/rotor

This project is no longer maintained by Turbine Labs, which has shut down.


Rotor is a fast, lightweight bridge between your service discovery and Envoy’s configuration APIs. It groups your infrastructure into Envoy clusters and defines simple routes for each service. Instances are gathered directly from your service discovery registry, and clusters are created by grouping together instances under a common tag or label. This instance and cluster information is served via Envoy’s endpoint and cluster discovery services (EDS and CDS).

Rotor also sets up Envoy’s routing and listeners (RDS and LDS) to serve these clusters and send stats to common backends like Prometheus, statsd, dogstatsd, and Wavefront.

Rotor is a great starting point for quickly integrating Envoy with your existing environment. It provides a simple service mesh for many applications out of the box. If you have a more complex application, you can modify it to your needs or add an API key to unlock traffic management with Houston.

Features

Rotor connects two types of information:

  • Service Discovery information is collected from your existing registry, such as Kubernetes or Consul.
  • Envoy Configuration is served over Envoy’s discovery services: EDS, CDS, RDS, and LDS.
  • (optionally) Configuration via UI and API is provided by the Turbine Labs API, with an API key.

Without an API key (“standalone mode”), Rotor will serve Envoy configuration as follows:

  • Endpoints (EDS) are mirrored from your service discovery.
  • Clusters (CDS) are created by grouping endpoints based on labels on your nodes/pods/hosts. The label format depends on the service discovery; typically tbn_cluster or tbn-cluster.
  • Routes (RDS) are created from your clusters. Each cluster is exposed via a single domain with the same name as the cluster, and a single catch-all route (/). This is similar to Consul or Kubernetes DNS discovery for service registries.
  • Listeners (LDS) are statically configured. Rotor configures Envoy to listen on port 80 and sets up ALS to collect stats on the routes served via RDS.
  • Access Logging (ALS) is configured, and Rotor can send request metrics to Prometheus, statsd, dogstatsd, and/or Wavefront.

For better control over the routes and listeners that Rotor sets up, there are two options:

  • Define static listeners and routes in a config file via the ROTOR_XDS_STATIC_RESOURCES_FILENAME environment variable. Rotor will serve these routes over RDS and LDS. To change these routes, update the file and restart Rotor. More information and examples are in this blog post, and a sketch of such a file is shown below.
  • Add an API key for Houston. This allows Rotor to pull routes from Houston and provides full control over domains, routes, and cluster behavior via Houston's UI and API. For more information, see the section below on adding an API key.

For control over cluster features like circuit breaking, gRPC, and static clusters, Rotor can also read either a set of static clusters or a cluster template from the same ROTOR_XDS_STATIC_RESOURCES_FILENAME file. See an example of setting up gRPC and static clusters on the Turbine Labs blog.
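
Taken together, the static-resources file is a set of Envoy v2 listener and cluster definitions under top-level keys. The listener shape below mirrors the examples from the blog posts; the clusters key, names, and addresses are placeholders and assumptions, so treat this as a rough sketch and check the linked posts for the authoritative schema.

# mounted into the container and referenced via ROTOR_XDS_STATIC_RESOURCES_FILENAME
listeners:
- address:
    socket_address: { address: 0.0.0.0, port_value: 8080 }
  filter_chains:
  - filters:
    - name: envoy.http_connection_manager
      config:
        stat_prefix: ingress_http
        route_config:
          virtual_hosts:
          - name: example
            domains: ["example.com"]
            routes:
            - match: { prefix: "/" }
              route: { cluster: example-cluster-1 }
        http_filters:
        - name: envoy.router
          config: {}
clusters:                       # key name assumed; see the blog posts for the exact format
- name: grpc-backend            # placeholder static cluster
  type: STRICT_DNS
  connect_timeout: 0.25s
  http2_protocol_options: {}
  hosts:
  - socket_address: { address: grpc-backend.internal, port_value: 8443 }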

Installation

The simplest way to start using Rotor is to use the docker image. You’ll need to configure it to point to your service discovery, then configure Envoy to read xDS configuration from Rotor. How you set up Envoy will depend on your environment, though you can see a simple example in the section on Envoy configuration.

Rotor supports the following service discovery integrations:

  • Kubernetes
  • Consul
  • AWS/EC2
  • AWS/ECS
  • DC/OS
  • (experimental) Envoy v1 CDS/SDS
  • (experimental) Envoy v2 CDS/EDS

Additionally, Rotor can poll a file for service discovery information. This provides a lowest-common-denominator interface if you have a mechanism for service discovery that we don't yet support. We plan to add support for other common service discovery mechanisms in the future, and we'd love your help.

Note: you must label or tag the instances you want Rotor to collect! The name of this label is configurable, and the exact configuration option depends on your service discovery registry. To see the flags available for your SD, run:

docker run turbinelabs/rotor:0.19.0 rotor <platform> --help

where <platform> is one of: aws, ecs, consul, file, kubernetes, or marathon.

Kubernetes

Kubernetes requires a number of RBAC objects to be created before running Rotor. The easiest way to create all these is via the YAML file in this repo:

kubectl create -f https://raw.githubusercontent.com/turbinelabs/rotor/master/examples/kubernetes/kubernetes-rotor.yaml

Rotor discovers clusters by looking for active pods in Kubernetes and grouping them based on their labels. You will have to add two pieces of information to each pod to have Rotor recognize it:

  • A tbn_cluster: <name> label to name the service to which the Pod belongs. The label key can be customized.

  • An exposed port named http (the port name can be customized). A pod must have both the tbn_cluster label and a correctly named port to be collected by Rotor.

Rotor will also collect all other labels on the Pod, which can be used for routing.

An example of a pod with labels correctly configured is included here. An example Envoy-simple yaml is also included.
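
As a rough sketch (the image and names here are placeholders), a pod that Rotor will pick up looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: my-service
  labels:
    tbn_cluster: my-service   # cluster name Rotor will group this pod under
    stage: prod               # any other labels are collected and usable for routing
spec:
  containers:
  - name: my-service
    image: example/my-service:latest   # placeholder image
    ports:
    - name: http              # Rotor only collects ports with this name (configurable)
      containerPort: 8080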

Consul

Consul requires the datacenter and the host/port of your Consul server.

docker run -d \
  -e "ROTOR_CMD=consul" \
  -e "ROTOR_CONSUL_DC=<your datacenter>" \
  -e "ROTOR_CONSUL_HOSTPORT=<consul ip address>:8500" \
  turbinelabs/rotor:0.19.0

To mark a Service for Rotor, add a tag called tbn-cluster. See examples/consul for a working example.
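
As a sketch, a minimal Consul service definition carrying that tag might look like the following (the service name and port are placeholders):

{
  "service": {
    "name": "my-service",
    "port": 8080,
    "tags": ["tbn-cluster"]
  }
}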

EC2

Rotor can collect EC2 instances and their tags via the AWS API.

docker run -d \
  -e 'ROTOR_AWS_AWS_ACCESS_KEY_ID=<your aws access key>' \
  -e 'ROTOR_AWS_AWS_REGION=<your aws region>' \
  -e 'ROTOR_AWS_AWS_SECRET_ACCESS_KEY=<your secret access key>' \
  -e 'ROTOR_AWS_VPC_ID=<your vpc id>' \
  -e 'ROTOR_CMD=aws' \
  -p 50000:50000 \
  turbinelabs/rotor:0.19.0

You need to tag instances with the service name and the port they expose by adding a tag of the format tbn:cluster:<cluster-name>:<port-name>. Instances that serve more than one port can be tagged multiple times. For example, to expose two services from a single instance on ports 8080 and 8081, you can tag the instance by running:

aws ec2 create-tags \
  --resources <your instance id> \
  --tags \
    Key=tbn:cluster:your-service-name:8080,Value= \
    Key=tbn:cluster:your-other-service:8081,Value=

ECS

ECS integration uses the AWS API, similar to EC2.

docker run -d \
  -e 'ROTOR_AWS_AWS_ACCESS_KEY_ID=<your aws access key>' \
  -e 'ROTOR_AWS_AWS_REGION=<your aws region>' \
  -e 'ROTOR_AWS_AWS_SECRET_ACCESS_KEY=<your secret access key>' \
  -e 'ROTOR_CMD=ecs' \
  -p 50000:50000 \
  turbinelabs/rotor:0.19.0

You can run this inside or outside of ECS itself, as long as your Envoy instances have access to the container on port 50000.

ECS tags indicate the service name and exposed port, and they are set via dockerLabels on the container definition:

{
  "dockerLabels": {
    "tbn-cluster": "your-service:8080"
  }
}
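
For context, the dockerLabels block sits inside a container definition of an ECS task definition; a trimmed sketch (names and image are placeholders) might look like:

{
  "family": "my-service",
  "containerDefinitions": [
    {
      "name": "my-service",
      "image": "example/my-service:latest",
      "memory": 256,
      "portMappings": [{ "containerPort": 8080 }],
      "dockerLabels": { "tbn-cluster": "my-service:8080" }
    }
  ]
}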

DC/OS

Rotor runs as an app inside DC/OS. Save this as rotor.json:

{
  "id": "/tbn/rotor",
  "cpus": 1,
  "mem": 128,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "turbinelabs/rotor:0.19.0",
      "forcePullImage": true
    }
  },
  "env": {
   "ROTOR_CMD": "marathon",
    "ROTOR_MARATHON_DCOS_ACS_TOKEN": "<your dc/os access token>",
    "ROTOR_MARATHON_DCOS_URL": "<your dc/os admin URL>",
    "ROTOR_MARATHON_DCOS_INSECURE": "<true if admin URL is not HTTPS>"
  },
  "healthChecks": []
}

Deploy the app with:

dcos marathon app add rotor.json

To have Rotor pick up services, add a tbn_cluster label to each container definition, with the service name as its value (a sketch follows).
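
As a sketch, a Marathon app definition carrying the label might look roughly like this; exactly where the label and port mapping must be declared can vary by Marathon version, so treat the placement here as an assumption and verify against your cluster:

{
  "id": "/my-service",
  "cpus": 0.5,
  "mem": 256,
  "labels": {
    "tbn_cluster": "my-service"
  },
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "example/my-service:latest",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 8080, "hostPort": 0 }]
    }
  }
}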

Flat files

Rotor can read from flat files that define clusters and instances. To specify the format of the file, use the --format flag (or the ROTOR_FILE_FORMAT environment variable). Possible values are "json" and "yaml", and the default value is "json".

docker run -d \
  -e 'ROTOR_CMD=file' \
  -e 'ROTOR_FILE_FORMAT=yaml' \
  -e 'ROTOR_FILE_FILENAME=/path/to/file/in/container' \
  -p 50000:50000 \
  turbinelabs/rotor:0.19.0

The format defines clusters and the associated instances:

- cluster: example-cluster-1
  instances:
    - host: 127.0.0.1
      port: 8080
    - host: 127.0.0.1
      port: 8081
- cluster: example-cluster-2
  instances:
    - host: 127.0.0.1
      port: 8083

Envoy

Once Rotor is running, you can configure Envoy to receive EDS, CDS, RDS, and LDS configuration from it. You can put together a bootstrap config based on the Envoy docs, or you can use envoy-simple, a minimal Envoy container that can be configured via environment variables.

docker run -d \
  -e 'ENVOY_XDS_HOST=127.0.0.1' \
  -e 'ENVOY_XDS_PORT=50000' \
  -p 9999:9999 \
  -p 80:80 \
  turbinelabs/envoy-simple:0.19.0

You may have to modify the host and port, depending on where you have Rotor deployed.

If you are not using envoy-simple, you will have to set the --service-cluster=default-cluster and --service-zone=default-zone flags on your Envoy. With a Houston API key, Rotor can serve many different Envoy configurations, depending on which Envoy is asking. In standalone mode, all Envoys are assumed to be part of the same zone and cluster, so you must make sure these values are sent to Rotor; envoy-simple passes default-cluster and default-zone by default. To serve multiple configs, either run multiple Rotors, fork Rotor and add your own config, or see Using with Houston.
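
If you would rather write your own bootstrap than use envoy-simple, a rough sketch of a v2-era bootstrap that points Envoy at Rotor is shown below. Field names change between Envoy versions, and the rotor-xds cluster name, addresses, and ports are placeholders, so treat this as a starting point and check the Envoy docs for your version.

node:
  cluster: default-cluster          # what Rotor expects in standalone mode
  locality: { zone: default-zone }
admin:
  access_log_path: /dev/null
  address:
    socket_address: { address: 0.0.0.0, port_value: 9999 }
dynamic_resources:
  cds_config:
    api_config_source: { api_type: GRPC, cluster_names: [rotor-xds] }
  lds_config:
    api_config_source: { api_type: GRPC, cluster_names: [rotor-xds] }
static_resources:
  clusters:
  - name: rotor-xds                 # placeholder name for the cluster pointing at Rotor
    type: STRICT_DNS
    connect_timeout: 0.25s
    http2_protocol_options: {}      # xDS is served over gRPC
    hosts:
    - socket_address: { address: 127.0.0.1, port_value: 50000 }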

You can verify that Rotor and Envoy are working correctly together by curling Envoy's admin interface to see the routes that have been set up:

curl localhost:9999/config_dump

If everything is working, you should see a JSON config object with routes for all your services.

Configuration

Global flags for Rotor can be listed with docker run turbinelabs/rotor:0.19.0 rotor --help. Global flags can be passed via upper-case, underscore-delimited environment variables prefixed with ROTOR_, with all non-alpha characters converted to underscores. For example, --some-flag becomes ROTOR_SOME_FLAG.

Per-platform flags can be listed with docker run turbinelabs/rotor:0.19.0 rotor <platform> --help. Per-platform flags can be similarly passed as environment variables, prefixed with ROTOR_<PLATFORM>. For example --some-flag for the kubernetes platform becomes ROTOR_KUBERNETES_SOME_FLAG.

Note: Command-line flags take precedence over environment variables.
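
For example, for a hypothetical global flag --some-flag, the following invocations are equivalent; if both are given, the command-line value wins:

# via the command line (global flags come before the platform subcommand)
docker run turbinelabs/rotor:0.19.0 rotor --some-flag=value <platform>

# via the environment
docker run -e 'ROTOR_SOME_FLAG=value' -e 'ROTOR_CMD=<platform>' turbinelabs/rotor:0.19.0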

Configuring Leaderboard Logging

Rotor can be configured to periodically log a leaderboard of non-2xx requests to stdout. This functionality is controlled by the number of responses to track (ROTOR_XDS_GRPC_LOG_TOP or --xds.grpc-log-top) and the aggregation period (ROTOR_XDS_GRPC_LOG_TOP_INTERVAL or --xds.grpc-log-top-interval). These are global flags and, if passed on the command line, should come before platform configuration; as with any flag, they may also be specified via environment variables.
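
As a sketch, the following tracks the top 10 non-2xx request paths and logs the leaderboard once a minute; the interval format is assumed to be a Go-style duration:

docker run -d \
  -e 'ROTOR_CMD=kubernetes' \
  -e 'ROTOR_XDS_GRPC_LOG_TOP=10' \
  -e 'ROTOR_XDS_GRPC_LOG_TOP_INTERVAL=1m' \
  -p 50000:50000 \
  turbinelabs/rotor:0.19.0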

When viewing Rotor logs, the request leaderboard is recorded in the following format:

[info] <timestamp> ALS: <number of requests>: <HTTP response code> <request path>

Debugging Rotor

There are a few ways to figure out what's going on with Rotor.

Debug Logging

You can make Rotor's logging more verbose by adding ROTOR_CONSOLE_LEVEL=debug to the environment, or by setting the --console.level flag if running the binary by hand.

Config Dump

You can dump the full configuration that Rotor serves by running rotor-test-client within the running Rotor docker container:

docker exec <container id> rotor-test-client

If you've set ROTOR_XDS_DEFAULT_CLUSTER or ROTOR_XDS_DEFAULT_ZONE, you'll need to correspondingly set them as arguments:

docker exec <container id> rotor-test-client --zone=<zone> --cluster=<cluster>

If you're running the binaries by hand, and you've passed the --xds.addr flag to rotor, you'll need to pass the same value in the --addr flag to rotor-test-client.
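
For example, assuming Rotor was started with a hypothetical --xds.addr=:50001:

rotor-test-client --addr=127.0.0.1:50001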

Local Installation / Development

For development, running tests, or custom integration, you may want to run Rotor locally.

Requirements

  • Go 1.10.3 or later (previous versions may work, but we don't build or test against them)

Dependencies

The rotor project depends on a number of other Turbine Labs packages.

The tests depend on our test package and on gomock; gomock-based mocks of most interfaces are provided.

The Rotor plugins depend on many packages, none of which are exposed in the public interfaces. This should be considered an opaque implementation detail; see Vendoring for more discussion.

It should always be safe to use HEAD of all master branches of Turbine Labs open source projects together, or to vendor them with the same git tag.

Install

go get -u github.com/turbinelabs/rotor/...
go install github.com/turbinelabs/rotor/...

Clone/Test

mkdir -p $GOPATH/src/github.com/turbinelabs
git clone https://github.com/turbinelabs/rotor.git $GOPATH/src/github.com/turbinelabs/rotor
go test github.com/turbinelabs/rotor/...

Godoc

Rotor

Versioning

Please see Versioning of Turbine Labs Open Source Projects.

Pull Requests

Patches accepted! In particular we'd love to support other mechanisms of service discovery. Please see Contributing to Turbine Labs Open Source Projects.

API Key

Rotor is also a part of a paid subscription to Houston. By adding an API key to Rotor, you unlock traffic management for your whole team, including:

  • An easy-to-use UI for creating and modifying routes
  • Full configuration of all of Envoy’s features: advanced load balancing, health checking, circuit breakers, and more
  • Automatic collection of Envoy’s metrics for routes, clusters, and more, with easy integration into statsd, Prometheus, and other common dashboards

Specifically, instead of the static routes described in Features, an API key allows more flexible configuration of routes, domains, listeners, and clusters through Houston. You can also run multiple Rotor processes to bridge, e.g., EC2 and Kubernetes, allowing you to configure routes that incrementally migrate traffic from one to the other. Houston also collects all the additional metadata on your instances, allowing you to route traffic based on custom tags and labels.

If you already have an API key, see the Turbine Labs docs for how to get started.

Code of Conduct

All Turbine Labs open-sourced projects are released with a Contributor Code of Conduct. By participating in our projects you agree to abide by its terms, which will be carefully enforced.

rotor's People

Contributors

9len, arsalaanansarideveloper, brirams, brookshelley, chrisgoffinet, falun, felixonmars, k4y3ff, mccv, phedoreanu, protochron, trjordan, zbintliff, zuercher


rotor's Issues

Example file config invalid

file config:

- cluster: example-cluster-1
  instances:
    - host: 127.0.0.1
      port: 8080
    - host: 127.0.0.1
      port: 8081
- cluster: example-cluster-2
  instances:
    - host: 127.0.0.1
      port: 8083

docker-compose file:

version: "2"
services:
  rotor:
    image: turbinelabs/rotor:0.18.0
    ports:
      - 50000:50000
    environment:
      - ROTOR_CONSOLE_LEVEL=debug
      - ROTOR_CMD=file
      - ROTOR_FORMAT=yaml
      - ROTOR_FILE_FILENAME=/data/routes.yml
    volumes:
      - ./data:/data

error log:

rotor_1  | Jul 11 10:20:12 add4810ee356 syslog-ng[11]: EOF on control channel, closing connection;
rotor_1  | *** Running /etc/rc.local...
rotor_1  | *** Booting runit daemon...
rotor_1  | *** Runit started as PID 17
rotor_1  | *** Running /usr/local/bin/rotor.sh...
rotor_1  | Jul 11 10:20:12 add4810ee356 cron[21]: (CRON) INFO (pidfile fd = 3)
rotor_1  | Jul 11 10:20:12 add4810ee356 cron[21]: (CRON) INFO (Running @reboot jobs)
rotor_1  | [info] 2018/07/11 10:20:12 No --api.key specified. Using standalone mode: Envoys will be configured to serve on port 80, for Envoy cluster "default-cluster" in zone "default-zone".
rotor_1  | [info] 2018/07/11 10:20:12 No API key specified, the API stats backend will not be configured.
rotor_1  | [info] 2018/07/11 10:20:12 watching /data/routes.yml
rotor_1  | [info] 2018/07/11 10:20:12 watching /data
rotor_1  | [debug] 2018/07/11 10:20:12 file: reload
rotor_1  | [info] 2018/07/11 10:20:12 serving xDS on [::]:50000
rotor_1  | [info] 2018/07/11 10:20:12 log streaming enabled
rotor_1  | [info] 2018/07/11 10:20:12 Stopping XDS gRPC server
rotor_1  | file: invalid character ' ' in numeric literal
rotor_1  |
rotor_1  | *** /usr/local/bin/rotor.sh exited with status 1.
rotor_1  | *** Shutting down runit daemon (PID 17)...
rotor_1  | *** Running /etc/my_init.post_shutdown.d/10_syslog-ng.shutdown...
rotor_1  | Jul 11 10:20:13 add4810ee356 syslog-ng[11]: syslog-ng shutting down; version='3.5.6'
rotor_1  | Jul 11 10:20:13 add4810ee356 syslog-ng[11]: EOF on control channel, closing connection;
rotor_1  | *** Killing all processes...
rotorandenvoy_rotor_1 exited with code 1

Consul health_check does not work

Since Consul rejects : in a service's metadata keys, the health_check doesn't work.

{{bold "Health Checks"}}

Node health checks will be added as instance metadata named following the pattern
"check:<check-id>" with the check status as value. Additionally "node-health" is
added for an instance within each cluster to aggregate all the other health
checks on that node that either are 1) not bound to a service or 2) bound to
the service this cluster represents. The value for this aggregate metadata will be:

    passing   if all Consul health checks have a "passing" value
    mixed     if any Consul health check has a "passing" value
    failed    if no Consul health check has the value of "passing"

hashicorp/consul#4422

How to route to my defined clusters in rotor file mode configuration

Hi @9len, I want to use Rotor's file mode configuration. These are my configurations:

clusters.yaml

- cluster: example-cluster-1
  instances:
    - host: 172.27.71.209
      port: 8002
    - host: 172.27.71.209
      port: 8001
- cluster: example-cluster-2
  instances:
    - host: 172.27.71.209
      port: 8002

my docker run commands:

docker run --name hello-world1 -d -p 8001:80 containersol/hello-world
docker run --name hello-world2 -d -p 8002:80 containersol/hello-world
docker run -v $(pwd)/:/data    \
  -e 'ROTOR_CMD=file' \
  -e 'ROTOR_CONSOLE_LEVEL=debug' \
  -e 'ROTOR_FILE_FORMAT=yaml' \
  -e 'ROTOR_FILE_FILENAME=/data/clusters.yaml' \
  -p 50000:50000 \
  turbinelabs/rotor:0.19.0
docker run \
  -e 'ENVOY_XDS_HOST=172.27.71.209' \
  -e 'ENVOY_XDS_PORT=50000' \
  -p 9999:9999 \
  -p 80:80 \
  turbinelabs/envoy-simple:0.19.0

Now, when I make a request with curl localhost:80, I don't get any response from my hello-world containers.
Also, if I want to customize routing so that each cluster is served on a different route, what should I do?

Static listener with tls_context support?

Hey guys,

When I enable tls_context for a static listener, Rotor fails to unmarshal the file:

could not deserialize static resources: json: cannot unmarshal string into Go value of type []json.RawMessage

Static config:
listeners:
- address:
    socket_address:
      address: 0.0.0.0
      port_value: 443
  filter_chains:
  - filters:
    - name: envoy.http_connection_manager
      config:
        codec_type: AUTO
        stat_prefix: ingress_http
        route_config:
          virtual_hosts:
          - name: backend
            domains:
            - "example.com"
            routes:
            - match:
                prefix: "/service/1"
              route:
                cluster: service1
            - match:
                prefix: "/service/2"
              route:
                cluster: service2
        http_filters:
        - name: envoy.router
          config: {}
    tls_context:
      common_tls_context:
        alpn_protocols: h2,http/1.1
        tls_params:
          tls_minimum_protocol_version: TLSv1_2
          tls_maximum_protocol_version: TLSv1_3
          cipher_suites: ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
        tls_certificates:
        - certificate_chain: { filename: /etc/envoy/cert.crt }
          private_key: { filename: /etc/envoy/cert.key }

Any ideas?
Thanks

Identify file format based on filename

Hi there,

I was very excited to try Rotor this morning, using Rotor version 0.16.0, and ran into a small documentation/UX issue:

While following the instructions in the README to start with Flat Files Service Discovery, the README doesn’t mention that to use a YAML file – in the format described – one needs to specify the --format=yaml flag to Rotor.

Otherwise Rotor tries to parse the YAML as JSON, which is the default value in the github.com/turbinelabs/codec library and spits out file: invalid character ' ' in numeric literal errors with no obvious clue as to what is causing this.

I looked into trying to guess the file format from the filename, but it looks like it would require quite a few changes to the way flags are parsed.

Not able to get healthy hosts using ROTOR

Hi Team,

I am trying to integrate Turbine Labs Rotor for Envoy xDS.
I was able to bring up Rotor and Envoy (configured to talk to Rotor).

Issue: the hosts are not getting updated in Envoy.

Here are my logs from Rotor and Envoy; any help figuring out why the hosts are not getting updated would be deeply appreciated.

Envoy is brought up as below:
docker run -it -d --name envoy-rotor --link rotor -v /envoy/envoy-rotor.yaml:/etc/envoy-rotor.yaml -v /var/tmp/:/var/tmp/ -p 8005:8005 -p 8001:8001 envoyproxy/envoy envoy --v2-config-only -c /etc/envoy-rotor.yaml

Envoy.txt

Rotor is brought up as below:
docker run -it --name rotor -d -e "ROTOR_CMD=consul" -e "ROTOR_CONSUL_DC=xxx" -e "ROTOR_CONSUL_HOSTPORT=consul_ip:8500" -e "ROTOR_CONSOLE_LEVEL=trace" -p 50000:50000 turbinelabs/rotor:0.17.2

Rotor Logs:

2018-06-25T21:38:28.860117000Z [info] 2018/06/25 21:38:28 No API key specified, the API stats backend will not be configured.
2018-06-25T21:38:28.864072000Z [info] 2018/06/25 21:38:28 serving xDS on [::]:50000
2018-06-25T21:38:28.864689000Z [info] 2018/06/25 21:38:28 log streaming enabled
2018-06-25T21:55:29.202937000Z [info] 2018/06/25 21:55:29 respond type.googleapis.com/envoy.api.v2.ClusterLoadAssignment[envoyservice-1] version "rh0E+4kL7sP/bq9Mchn6nQ==" with version "hppZLEmbKZg9WVOtbF1aQQ=="
2018-06-25T21:55:29.218037000Z [info] 2018/06/25 21:55:29 Stream 1, type.googleapis.com/envoy.api.v2.ClusterLoadAssignment: ack response version "hppZLEmbKZg9WVOtbF1aQQ=="
2018-06-25T21:55:29.218281000Z [info] 2018/06/25 21:55:29 open watch 33 for type.googleapis.com/envoy.api.v2.ClusterLoadAssignment[envoyservice-1] from nodeID "{"proxy_name":"default-cluster","zone_name":"default-zone"}", version "hppZLEmbKZg9WVOtbF1aQQ=="
2018-06-25T21:56:29.231249000Z [info] 2018/06/25 21:56:29 respond open watch 33[envoyservice-1] with new version "HpiopUqw68kQRX56Z1eKIg=="
2018-06-25T21:56:29.231556000Z [info] 2018/06/25 21:56:29 respond type.googleapis.com/envoy.api.v2.ClusterLoadAssignment[envoyservice-1] version "hppZLEmbKZg9WVOtbF1aQQ==" with version "HpiopUqw68kQRX56Z1eKIg=="

Envoy Logs:
2018-06-25T21:55:20.720849000Z [2018-06-25 21:55:20.720][5][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 3749 milliseconds
2018-06-25T21:55:20.721800000Z [2018-06-25 21:55:20.721][5][debug][upstream] source/common/upstream/logical_dns_cluster.cc:77] async DNS resolution complete for rotor
2018-06-25T21:55:24.683702000Z [2018-06-25 21:55:24.682][5][debug][main] source/server/server.cc:118] flushing stats
2018-06-25T21:55:25.722346000Z [2018-06-25 21:55:25.721][5][debug][upstream] source/common/upstream/logical_dns_cluster.cc:69] starting async DNS resolution for rotor
2018-06-25T21:55:25.722733000Z [2018-06-25 21:55:25.721][5][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 3124 milliseconds
2018-06-25T21:55:25.723396000Z [2018-06-25 21:55:25.723][5][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 2811 milliseconds
2018-06-25T21:55:25.724564000Z [2018-06-25 21:55:25.724][5][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 3749 milliseconds
2018-06-25T21:55:25.725642000Z [2018-06-25 21:55:25.725][5][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 4061 milliseconds
2018-06-25T21:55:25.726663000Z [2018-06-25 21:55:25.726][5][debug][upstream] source/common/upstream/logical_dns_cluster.cc:77] async DNS resolution complete for rotor
2018-06-25T21:55:29.204203000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/connection_impl.cc:384] [C0] socket event: 3
2018-06-25T21:55:29.204458000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/connection_impl.cc:452] [C0] write ready
2018-06-25T21:55:29.204700000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/connection_impl.cc:422] [C0] read ready
2018-06-25T21:55:29.204946000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/raw_buffer_socket.cc:21] [C0] read returns: 100
2018-06-25T21:55:29.205206000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/raw_buffer_socket.cc:21] [C0] read returns: -1
2018-06-25T21:55:29.205430000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/raw_buffer_socket.cc:29] [C0] read error: 11
2018-06-25T21:55:29.205693000Z [2018-06-25 21:55:29.202][5][trace][http2] source/common/http/http2/codec_impl.cc:277] [C0] dispatching 100 bytes
2018-06-25T21:55:29.205928000Z [2018-06-25 21:55:29.202][5][trace][http2] source/common/http/http2/codec_impl.cc:335] [C0] recv frame type=0
2018-06-25T21:55:29.206166000Z [2018-06-25 21:55:29.202][5][trace][http] source/common/http/async_client_impl.cc:100] async http request response data (length=91 end_stream=false)
2018-06-25T21:55:29.206403000Z [2018-06-25 21:55:29.202][5][debug][upstream] source/common/config/grpc_mux_impl.cc:160] Received gRPC message for type.googleapis.com/envoy.api.v2.ClusterLoadAssignment at version hppZLEmbKZg9WVOtbF1aQQ==
2018-06-25T21:55:29.206645000Z [2018-06-25 21:55:29.202][5][debug][upstream] source/common/upstream/eds.cc:51] Missing ClusterLoadAssignment for envoyservice-1 in onConfigUpdate()
2018-06-25T21:55:29.206870000Z [2018-06-25 21:55:29.202][5][debug][config] bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_mux_subscription_lib/common/config/grpc_mux_subscription_impl.h:60] gRPC config for type.googleapis.com/envoy.api.v2.ClusterLoadAssignment accepted with 0 resources: []
2018-06-25T21:55:29.207093000Z [2018-06-25 21:55:29.202][5][trace][upstream] source/common/config/grpc_mux_impl.cc:80] Sending DiscoveryRequest for type.googleapis.com/envoy.api.v2.ClusterLoadAssignment: version_info: "hppZLEmbKZg9WVOtbF1aQQ=="
2018-06-25T21:55:29.207295000Z node {
2018-06-25T21:55:29.207533000Z id: "node504"
2018-06-25T21:55:29.207747000Z cluster: "default-cluster"
2018-06-25T21:55:29.208032000Z locality {
2018-06-25T21:55:29.208245000Z zone: "default-zone"
2018-06-25T21:55:29.208453000Z }
2018-06-25T21:55:29.208700000Z build_version: "067f8f6523f63d6f0ccd3d44e6fd2db97804af20/1.7.0-dev/Clean/RELEASE"
2018-06-25T21:55:29.208923000Z }
2018-06-25T21:55:29.209145000Z resource_names: "envoyservice-1"
2018-06-25T21:55:29.209371000Z type_url: "type.googleapis.com/envoy.api.v2.ClusterLoadAssignment"
2018-06-25T21:55:29.211162000Z response_nonce: "33"
2018-06-25T21:55:29.211414000Z
2018-06-25T21:55:29.211659000Z [2018-06-25 21:55:29.202][5][trace][router] source/common/router/router.cc:872] [C0][S9549407309502274984] proxying 217 bytes
2018-06-25T21:55:29.211905000Z [2018-06-25 21:55:29.202][5][trace][http2] source/common/http/http2/codec_impl.cc:292] [C0] dispatched 100 bytes
2018-06-25T21:55:29.212129000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/connection_impl.cc:321] [C0] writing 226 bytes, end_stream false
2018-06-25T21:55:29.212342000Z [2018-06-25 21:55:29.202][5][trace][http2] source/common/http/http2/codec_impl.cc:446] [C0] sent frame type=0
2018-06-25T21:55:29.212595000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/connection_impl.cc:384] [C0] socket event: 2
2018-06-25T21:55:29.212836000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/connection_impl.cc:452] [C0] write ready
2018-06-25T21:55:29.213051000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/raw_buffer_socket.cc:63] [C0] write returns: 226
2018-06-25T21:55:29.213275000Z [2018-06-25 21:55:29.203][5][trace][connection] source/common/network/connection_impl.cc:384] [C0] socket event: 3
2018-06-25T21:55:29.213528000Z [2018-06-25 21:55:29.203][5][trace][connection] source/common/network/connection_impl.cc:452] [C0] write ready
2018-06-25T21:55:29.213742000Z [2018-06-25 21:55:29.203][5][trace][connection] source/common/network/connection_impl.cc:422] [C0] read ready
2018-06-25T21:55:29.213963000Z [2018-06-25 21:55:29.203][5][trace][connection] source/common/network/raw_buffer_socket.cc:21] [C0] read returns: 30
2018-06-25T21:55:29.214194000Z [2018-06-25 21:55:29.203][5][trace][connection] source/common/network/raw_buffer_socket.cc:21] [C0] read returns: -1
2018-06-25T21:55:29.214405000Z [2018-06-25 21:55:29.203][5][trace][connection] source/common/network/raw_buffer_socket.cc:29] [C0] read error: 11
2018-06-25T21:55:29.214651000Z [2018-06-25 21:55:29.203][5][trace][http2] source/common/http/http2/codec_impl.cc:277] [C0] dispatching 30 bytes

Envoy Admin Stats
cluster.envoyservice-1.bind_errors: 0
cluster.envoyservice-1.lb_healthy_panic: 0
cluster.envoyservice-1.lb_local_cluster_not_ok: 0
cluster.envoyservice-1.lb_recalculate_zone_structures: 0
cluster.envoyservice-1.lb_subsets_active: 0
cluster.envoyservice-1.lb_subsets_created: 0
cluster.envoyservice-1.lb_subsets_fallback: 0
cluster.envoyservice-1.lb_subsets_removed: 0
cluster.envoyservice-1.lb_subsets_selected: 0
cluster.envoyservice-1.lb_zone_cluster_too_small: 0
cluster.envoyservice-1.lb_zone_no_capacity_left: 0
cluster.envoyservice-1.lb_zone_number_differs: 0
cluster.envoyservice-1.lb_zone_routing_all_directly: 0
cluster.envoyservice-1.lb_zone_routing_cross_zone: 0
cluster.envoyservice-1.lb_zone_routing_sampled: 0
cluster.envoyservice-1.max_host_weight: 0
cluster.envoyservice-1.membership_change: 0
cluster.envoyservice-1.membership_healthy: 0
cluster.envoyservice-1.membership_total: 0
cluster.envoyservice-1.original_dst_host_invalid: 0
cluster.envoyservice-1.retry_or_shadow_abandoned: 0
cluster.envoyservice-1.update_attempt: 51
cluster.envoyservice-1.update_empty: 50
cluster.envoyservice-1.update_failure: 0
cluster.envoyservice-1.update_no_rebuild: 0

Get error on docker build

I just forked the repo and tried to docker build it. Got the following errors. Any idea?

Step 10/21 : RUN go get github.com/turbinelabs/rotor/...
---> Running in 7d210f05d73c
package github.com/lyft/protoc-gen-validate/tests/harness/cases/go: cannot find package "github.com/lyft/protoc-gen-validate/tests/harness/cases/go" in any of:
/usr/local/go/src/github.com/lyft/protoc-gen-validate/tests/harness/cases/go (from $GOROOT)
/go/src/github.com/lyft/protoc-gen-validate/tests/harness/cases/go (from $GOPATH)
package github.com/lyft/protoc-gen-validate/tests/harness/cases/other_package/go: cannot find package "github.com/lyft/protoc-gen-validate/tests/harness/cases/other_package/go" in any of:
/usr/local/go/src/github.com/lyft/protoc-gen-validate/tests/harness/cases/other_package/go (from $GOROOT)
/go/src/github.com/lyft/protoc-gen-validate/tests/harness/cases/other_package/go (from $GOPATH)
package github.com/lyft/protoc-gen-validate/tests/harness/cases/gogo: cannot find package "github.com/lyft/protoc-gen-validate/tests/harness/cases/gogo" in any of:
/usr/local/go/src/github.com/lyft/protoc-gen-validate/tests/harness/cases/gogo (from $GOROOT)
/go/src/github.com/lyft/protoc-gen-validate/tests/harness/cases/gogo (from $GOPATH)
package github.com/lyft/protoc-gen-validate/tests/harness/cases/other_package/gogo: cannot find package "github.com/lyft/protoc-gen-validate/tests/harness/cases/other_package/gogo" in any of:
/usr/local/go/src/github.com/lyft/protoc-gen-validate/tests/harness/cases/other_package/gogo (from $GOROOT)
/go/src/github.com/lyft/protoc-gen-validate/tests/harness/cases/other_package/gogo (from $GOPATH)

Improve kubernetes example for rotor

As a best practice, we should create a namespace for rotor, so that when using kubectl create, namespaces are created, service accounts get set up, etc., without the user needing to fiddle with it.

  • Create a namespace out of the box for rotor
  • Assign service accounts, etc to use this namespace

mechanism to configure custom clusters and listeners

The plan is to add flags to read from a file that specifies:

  • statically configured listeners (with embedded routes)
  • statically configured clusters
  • a template for new listeners
  • a template for new clusters

This covers many use-cases, but will not cover fully-dynamic specification of listeners and clusters; we plan to offer that in our paid product.

Improve documentation on kubernetes plugin

It took me a good minute to figure this out, and I can definitely see how this will bite first-time users. I set up Rotor + k8s service discovery and created a pod with the correct labels, and yet… nothing. Envoy shows nothing in config_dump, and the Rotor logs were not helpful, so I tried lowering Rotor's log level to debug. For some reason my pod was not being picked up. It wasn't until I remembered from my Istio days that you need to set the port name to http, which, when I finally did, let Rotor pick up the pod. Plus I had to go through the source code to confirm this. I think it would be helpful if the project had a simple webserver example in k8s to show this and call it out. Also, I think Rotor should at the very least log that a pod was found but will not be considered because it didn't match http.

Actionable Items

  • Provide an example Kubernetes pod spec with the correct port name and cluster label.
  • Add better logging to the Rotor Kubernetes plugin so it logs at INFO when a pod was found but will not be considered because its port did not match the expected name (i.e. http). This would really help users debug mistakes.

I am happy to do this if you agree it's worthwhile.

Export a hash for state of the world

One of the issues I've found across various service mesh technologies is that it can be difficult to determine the current state of service discovery. Imagine the scenario where you are running multiple Rotor instances for availability purposes. A bug manifests itself in one of the plugins (e.g. Kubernetes), and let's say the state is now stale (oops, we lost the watch and never recovered!). If you are trying to debug a complex service mesh, how would you know? I think it would be extremely valuable to export, over the stats backends, a deterministic hash of the state of the world. That way, when you are monitoring your Rotor instances, you can have confidence that they are sharing the same information with Envoy across instances.

My experience is that this is really missing in the service mesh world to help provide debugging when bugs manifest.

Dynamically watch namespaces in kubernetes

Currently the Kubernetes plugin requires a namespace to be defined to watch for pods. In a multi-tenant cluster I think this isn't going to be scalable. I propose we support, similar to Istio, the ability to label namespaces we want to be watched, and then Rotor can dynamically watch multiple namespaces.

how to use web-socket?

I didn't find any configuration to set that; can you tell me how to modify the protocol?
The config only contains host and port (and metadata); maybe the config below is misused, I don't know.

- cluster: jenkins.102.co
  instances:
    - host: 10.6.8.102
      port: 8082
      metadata:
        - key: envoy.lb
          value: canary
        - key: use_websocket
          value: true
        - key: service_name
          value: jnkins.102.com

And I found that the listener's config is always set to:

var (
	xdsClusterConfig = envoycore.ConfigSource{
		ConfigSourceSpecifier: &envoycore.ConfigSource_ApiConfigSource{
			ApiConfigSource: &envoycore.ApiConfigSource{
				ApiType:      envoycore.ApiConfigSource_GRPC,
				ClusterNames: []string{xdsClusterName},
				RefreshDelay: ptr.Duration(xdsRefreshDelaySecs * time.Second),
			},
		},
	}
)

I've been stuck on this for a whole day, please help me out.

Thank you...

zone aware balancing question

If I wanted to configure Envoy to be zone aware, is it correct that I would need a Rotor process in each zone, or can I use a single Rotor instance? Thanks!

Project status

Hi all.

Tell me, please, what is the status of the project after the shutdown of Turbine Labs and the transition of the team to Slack? Is it possible to use Rotor in production, or is it better to look for an alternative?

No way to enable HTTP2

For CDS, the returned clusters do not have http2_protocol_options set, and there is no way for us to set them either (cds.go). Hence upstreams that support HTTP/2 don't work. Since gRPC uses HTTP/2, doing cluster discovery with Rotor does not work for gRPC services.

Can we add another bool parameter called enableHttp2 to the Cluster struct (cluster.go)? Different plugins can have their own mechanisms for finding the value of the parameter. For example, for the EC2 integration, the presence of an extra tag could indicate whether HTTP/2 needs to be enabled.
