
Synapse

Synapse is Airbnb's new system for service discovery. Synapse solves the problem of automated fail-over in the cloud, where failover via network re-configuration is impossible. The end result is the ability to connect internal services together in a scalable, fault-tolerant way.

Motivation

Synapse emerged from the need to maintain high-availability applications in the cloud. Traditional high-availability techniques, which involve using a CRM like pacemaker, do not work in environments where the end-user has no control over the networking. In an environment like Amazon's EC2, all of the available workarounds are suboptimal:

  • Round-robin DNS: Slow to converge, and doesn't work when applications cache DNS lookups (which is frequent)
  • Elastic IPs: slow to converge, limited in number, public-facing-only, which makes them less useful for internal services
  • ELB: ultimately uses DNS (see above), you can't tune the load balancing, you have to launch a new one for every service in every region, and autoscaling doesn't happen fast enough

One solution to this problem is a discovery service, like Apache Zookeeper. However, Zookeeper and similar services have their own problems:

  • Service discovery is embedded in all of your apps; often, integration is not simple
  • The discovery layer itself is subject to failure
  • Requires additional servers/instances

Synapse solves these difficulties in a simple and fault-tolerant way.

How Synapse Works

Synapse typically runs on your application servers, often on every machine. At the heart of Synapse are proven routing components like HAProxy or NGINX.

For every external service that your application talks to, we assign a synapse local port on localhost. Synapse creates a proxy from the local port to the service, and you reconfigure your application to talk to the proxy.

Under the hood, Synapse supports service_watchers for service discovery and config_generators for configuring local state (e.g. load balancer configs) based on that service discovery state.

Synapse supports service discovery with pluggable service_watchers which take care of signaling to the config_generators so that they can react and reconfigure to point at available servers on the fly.

We've included a number of default watchers, including ones that query zookeeper and ones that use the AWS API. It is easy to write your own watchers for your use case and install them as gems that extend Synapse's functionality. Check out the docs on creating a watcher if you're interested, and if you think your service watcher would be generally useful, feel free to open a pull request adding a link to it.

Synapse also has pluggable config_generators, which are responsible for reacting to service discovery changes and writing out appropriate config. Right now HAProxy and local file output are built in, but you can easily plug in your own.

Example Migration

Let's suppose your rails application depends on a Postgres database instance. The database.yaml file has the DB host and port hardcoded:

production:
  database: mydb
  host: mydb.example.com
  port: 5432

You would like to be able to fail over to a different database in case the original dies. Let's suppose your instance is running in AWS and you're using the tag 'proddb' set to 'true' to indicate the prod DB. You set up synapse to proxy the DB connection on localhost:3219 in the synapse.conf.yaml file. Add a hash under services that looks like this:

---
 services:
  proddb:
   default_servers:
    -
     name: "default-db"
     host: "mydb.example.com"
     port: 5432
   discovery:
    method: "awstag"
    tag_name: "proddb"
    tag_value: "true"
   haproxy:
    port: 3219
    server_options: "check inter 2000 rise 3 fall 2"
    frontend: mode tcp
    backend: mode tcp

And then change your database.yaml file to look like this:

production:
  database: mydb
  host: localhost
  port: 3219

Start up synapse. It will configure HAProxy with a proxy from localhost:3219 to your DB. It will attempt to find the DB using the AWS API; if that does not work, it will fall back to the DB given in default_servers. In the worst case, if the AWS API is down and you need to change which DB your application talks to, simply edit the synapse config file, update the default_servers, and restart synapse. HAProxy will be transparently reloaded, and your application will keep running without a hiccup.

Installation

To download and run the synapse binary, first install a version of ruby. Then, install synapse with:

$ mkdir -p /opt/smartstack/synapse
# If you are on Ruby 2.X use --no-document instead of --no-ri --no-rdoc

# If you want to install specific versions of dependencies such as an older
# version of the aws-sdk, the docker-api, etc, gem install that here *before*
# gem installing synapse.

# Example:
# $ gem install aws-sdk -v XXX

$ gem install synapse --install-dir /opt/smartstack/synapse --no-ri --no-rdoc

# If you want to install specific plugins such as watchers or config generators
# gem install them *after* you install synapse.

# Example:
# $ gem install synapse-nginx --install-dir /opt/smartstack/synapse --no-ri --no-rdoc

This will download synapse and its dependencies into /opt/smartstack/synapse. You might wish to omit the --install-dir flag to use your system's default gem path; however, this will require you to run gem install synapse with root permissions.

You can now run the synapse binary like:

export GEM_PATH=/opt/smartstack/synapse
/opt/smartstack/synapse/bin/synapse --help

Don't forget to install HAProxy or NGINX or whatever proxy your config_generator is configuring.

Configuration

Synapse depends on a single config file in JSON format; it's usually called synapse.conf.json. The file has a services section that describes how services are discovered and configured, plus a top-level section for each supported proxy or output. By default, Synapse supports the following sections:

  • services: lists the services you'd like to connect.
  • haproxy: specifies how to configure and interact with HAProxy.
  • file_output (optional): specifies where to write service state to on the filesystem.
  • [<your config generator here>] (optional): configuration for your custom configuration generators (e.g. nginx, vulcand, envoy, or whatever you use).

If you have synapse config_generator plugins installed, you'll want a top-level section for each of them as well, e.g.:

  • nginx (optional): specifies how to configure and interact with NGINX.
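
For orientation, here is a minimal sketch of a config with the default top-level sections, written in YAML like the other examples in this README; the service name, paths, hosts, and ports are placeholders, not anything Synapse requires.

---
services:
  myservice:
    discovery:
      method: "zookeeper"
      path: "/nerve/services/myservice"
      hosts:
        - "localhost:2181"
    haproxy:
      port: 3213
      server_options: "check inter 2s rise 3 fall 2"
haproxy:
  config_file_path: "/etc/haproxy/haproxy.cfg"
  reload_command: "service haproxy reload"
  do_writes: true
  do_reloads: true
file_output:
  output_directory: "/var/run/synapse/services"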

The services section is a hash, where each key is the name of a service to be configured. The name is just a human-readable string used in logs and notifications. Each value in the services hash is itself a hash, and must contain the following keys:

  • discovery: how synapse will discover hosts providing this service (see below)

The services hash should also contain a section for each routing component you wish to use for this particular service. The built-in choice is haproxy, but others (e.g. nginx) are available through plugins. Note that if you configure a routing component at the top level but not at the service level, the default is typically to make that service available via that routing component, but without listening ports. If you wish to explicitly configure only a single component, pass the disabled option to the routing components you want to turn off. For example, to configure HAProxy but not NGINX for a particular service, you would set disabled in the nginx section of that service's config (a short sketch follows the next two bullets).

  • haproxy: how the haproxy section for this service should be configured. If the corresponding watcher is defined to use zookeeper and the service publishes its haproxy configuration in ZK, the haproxy hash can be filled/updated with data from the ZK node.
  • nginx: how the nginx section for this service should be configured. NOTE: to use this you must have the synapse-nginx plugin installed.
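
As a sketch of the disabled option mentioned above (assuming the synapse-nginx plugin is installed; the service name, path, and port are placeholders), a service that should only be configured in HAProxy might look like:

services:
  myservice:
    discovery:
      method: "zookeeper"
      path: "/nerve/services/myservice"
      hosts:
        - "localhost:2181"
    haproxy:
      port: 3213
    nginx:
      disabled: true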

The services hash may contain the following additional keys:

  • default_servers (default: []): the list of default servers providing this service; synapse uses these if no others can be discovered. See Listing Default Servers.
  • keep_default_servers (default: false): whether default servers should be added to discovered services
  • use_previous_backends (default: true): if at any time the registry drops all backends, use previous backends we already know about.
  • backend_port_override: the port that discovered servers listen on; you should specify this if your discovery mechanism only discovers names or addresses (like the DNS watcher or the Ec2TagWatcher). If the discovery method discovers a port along with hostnames (like the zookeeper watcher), this option may be left out, but will be used in preference if given.

We've included a number of watchers which provide service discovery. Put these into the discovery section of the service hash, with these options:

Base

The base watcher is useful in situations where you only want to use the servers in the default_servers list. It has the following options:

  • method: base
  • label_filters: optional list of filters to be applied to discovered service nodes

Filtering service nodes

Synapse can be configured to only return service nodes that match a label_filters predicate. If provided, label_filters should be an array of hashes which contain the following:

  • label: The name of the label for which the filter is applied
  • value: The comparison value
  • condition (one of ['equals', 'not-equals']): The type of filter condition to be applied.

Given a label_filters: [{ "label": "cluster", "value": "dev", "condition": "equals" }], this will return only service nodes that contain the label value { "cluster": "dev" }.
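
For concreteness, a sketch of a discovery stanza that applies this filter; the zookeeper method, path, and hosts here are placeholder assumptions, and the label data itself comes from the registration payload (see the nerve decode method below):

discovery:
  method: "zookeeper"
  path: "/nerve/services/myservice"
  hosts:
    - "localhost:2181"
  label_filters:
    - label: "cluster"
      value: "dev"
      condition: "equals"   # keep only nodes whose labels include cluster=dev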

Zookeeper

This watcher retrieves a list of servers and also service config data from zookeeper. It takes the following mandatory arguments:

  • method: zookeeper
  • path: the zookeeper path where ephemeral nodes will be created for each available service server
  • hosts: the list of zookeeper servers to query
  • retry_policy: the retry policy (exponential back-off) used when connecting to zookeeper servers and making network calls: max_attempts is the maximum number of attempts (including the original attempt), max_delay is the maximum delay in seconds across all attempts (including the original attempt), base_interval is the retry interval in seconds for the first retry attempt, and max_interval is the maximum interval in seconds for each retry attempt. By default the retry policy is disabled.

The watcher assumes that each node under path represents a service server.

The watcher assumes that the data (if any) retrieved at znode path is a hash, where each key is named by a valid config_generator (e.g. haproxy) and the value is a hash that configs the generator. Alternatively, if a generator_config_path argument is specified, the watcher will attempt to read generator config from that znode instead. If generator_config_path has the value disabled, then generator config will not be read from zookeeper at all.
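
A sketch of a zookeeper discovery section with an explicit retry policy; the hosts, path, and numeric values are placeholders:

discovery:
  method: "zookeeper"
  path: "/nerve/services/myservice/services"
  hosts:
    - "zk1.example.com:2181"
    - "zk2.example.com:2181"
  retry_policy:
    max_attempts: 5
    max_delay: 30
    base_interval: 0.5
    max_interval: 10
  generator_config_path: "disabled"   # do not read generator config from zookeeper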

The following arguments are optional:

  • decode: A hash containing configuration for how to decode the data found in zookeeper.

Decoding service nodes

Synapse attempts to decode the data in each of these nodes using JSON and you can control how it is decoded with the decode argument. If provided, the decode hash should contain the following:

  • method (one of ['nerve', 'serverset'], default: 'nerve'): The kind of data to expect to find in zookeeper nodes
  • endpoint_name (default: nil): If using the serverset method, this controls which of the additionalEndpoints is chosen instead of the serviceEndpoint data. If not supplied the serverset method will use the host/port from the serviceEndpoint data.

If the method is nerve, then we expect to find nerve registrations with a host and a port. Any additional metadata for the service node provided in the labels hash will be parsed; this information is used by the label_filters configuration.

If the method is serverset then we expect to find Finagle ServerSet (also used by Aurora) registrations with a serviceEndpoint and optionally one or more additionalEndpoints. The Synapse name will be automatically deduced from shard if present.
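
For example, a watcher reading Finagle ServerSet registrations might use a decode block like the following sketch (the path and endpoint name are placeholders):

discovery:
  method: "zookeeper"
  path: "/aurora/myservice"
  hosts:
    - "localhost:2181"
  decode:
    method: "serverset"
    endpoint_name: "http"   # use this additionalEndpoint instead of the serviceEndpoint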

Zookeeper Poll

This watcher retrieves a list of servers and also service config data from zookeeper. Instead of setting Zookeeper watchers, it uses a long-polling method.

It takes the following mandatory arguments:

  • method: zookeeper_poll
  • polling_interval_sec: the interval at which the watcher will poll Zookeeper. Defaults to 60 seconds.

Other than these two options, it takes the same options as the above ZookeeperWatcher. For all the required options, see above.
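
A minimal sketch (the path and hosts are placeholders):

discovery:
  method: "zookeeper_poll"
  polling_interval_sec: 30
  path: "/nerve/services/myservice"
  hosts:
    - "localhost:2181"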

Docker

This watcher retrieves a list of docker containers via docker's HTTP API. It takes the following options:

  • method: docker
  • servers: a list of servers running docker as a daemon. Format is {"name":"...", "host": "..."[, port: 4243]}
  • image_name: find containers running this image
  • container_port: find containers forwarding this port
  • check_interval: how often to poll the docker API on each server. Default is 15s.
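
A sketch of a docker discovery section; the server name, host, image, and ports are placeholders:

discovery:
  method: "docker"
  servers:
    - name: "docker-host-1"
      host: "docker1.example.com"
      port: 4243
  image_name: "myorg/myservice"
  container_port: 8080
  check_interval: 15
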
AWS EC2 tags

This watcher retrieves a list of Amazon EC2 instances that have a tag with particular value using the AWS API. It takes the following options:

  • method: ec2tag
  • tag_name: the name of the tag to inspect. As per the AWS docs, this is case-sensitive.
  • tag_value: the value to match on. Case-sensitive.

Additionally, you MUST supply backend_port_override in the service configuration as this watcher does not know which port the backend service is listening on.

The following options are optional, provided the well-known AWS_ environment variables shown are set. If supplied, these options will be used in preference to the AWS_ environment variables.

  • aws_access_key_id: AWS key or set AWS_ACCESS_KEY_ID in the environment.
  • aws_secret_access_key: AWS secret key or set AWS_SECRET_ACCESS_KEY in the environment.
  • aws_region: AWS region (e.g. us-east-1) or set AWS_REGION in the environment.
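
Putting it together, a sketch of a service using this watcher (the service name, tag, port, and region are placeholders; note backend_port_override at the service level):

services:
  proddb:
    backend_port_override: 5432   # required: this watcher does not discover ports
    discovery:
      method: "ec2tag"
      tag_name: "proddb"
      tag_value: "true"
      aws_region: "us-east-1"     # credentials fall back to the AWS_* environment variables
    haproxy:
      port: 3219
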
Marathon

This watcher polls the Marathon API and retrieves a list of instances for a given application.

It takes the following options:

  • marathon_api_url: Address of the marathon API (e.g. http://marathon-master:8080)
  • application_name: Name of the application in Marathon
  • check_interval: How often to request the list of tasks from Marathon (default: 10 seconds)
  • port_index: Index of the backend port in the task's "ports" array. (default: 0)
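
A sketch of a Marathon discovery section; the method name marathon is an assumption (it is not spelled out above), and the URL and application name are placeholders:

discovery:
  method: "marathon"              # assumed method name
  marathon_api_url: "http://marathon-master:8080"
  application_name: "myservice"
  check_interval: 10
  port_index: 0
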
Multi

The MultiWatcher aggregates multiple watchers together. This allows getting service discovery data from multiple sources and combining them into one set of backends.

It takes the following options:

  • method: must be multi
  • watchers: a hash of name --> child watcher config. The name should uniquely identify the child watcher, and the config should be in the same format that the child watcher expects. For example, a valid config for a Zookeeper child watcher looks like {"my_watcher" => {"method" => "zookeeper", "path" => "/svc", "hosts" => ["localhost:2181"]}}
  • resolver: an options hash which creates a Resolver. The method field of that hash is used to decide what resolver to use, and the rest of the options are passed to that.
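
A sketch of a multi watcher combining a zookeeper child and an ec2tag child, resolved with the sequential resolver described below; all names, paths, and tags are placeholders:

discovery:
  method: "multi"
  watchers:
    primary:
      method: "zookeeper"
      path: "/nerve/services/myservice"
      hosts:
        - "localhost:2181"
    secondary:
      method: "ec2tag"
      tag_name: "myservice"
      tag_value: "true"
  resolver:
    method: "sequential"
    sequential_order:
      - "primary"
      - "secondary"
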
S3 Toggle File Resolver

This resolver merges results by picking one of the watchers to treat as the source of truth. That watcher is picked based on the contents of an S3 file, which is periodically polled. In other words, the file in S3 "toggles" between different watchers.

It takes the following options:

  • method: must be s3_toggle
  • s3_url: an S3-style URL pointing to the file. Looks like s3://{bucket}/{path...}.
  • s3_polling_interval_seconds: frequency with which to fetch the file

The S3 file has the following YAML schema:

watcher_name: weight
second_watcher_name: weight

It is a simple YAML dictionary where each key is a watcher name (as provided to the MultiWatcher) and the value is a non-negative integer weight that determines the probability of that watcher being chosen.
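
A sketch of the resolver portion of a multi watcher using this toggle; the bucket, key, and interval are placeholders:

resolver:
  method: "s3_toggle"
  s3_url: "s3://my-bucket/synapse/toggle.yaml"
  s3_polling_interval_seconds: 60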

Union Resolver

The UnionResolver merges the backends from each child watcher into a single list. For example, with two children watchers that have backends of [a, b] and [c, d], it will return [a, b, c, d].

The config_for_generator cannot be easily merged; instead, we pick the first non-empty config. As such, when using union you should ensure that either only one watcher returns a config or that all watchers have the same config.

  • method: must be union

Sequential Resolver

The SequentialResolver goes through a specific ordering of watchers and returns the first set of backends that did not error or return an empty set. If sequential_order is ['primary', 'secondary'], it will first read the backends from primary; secondary will only be read if primary fails (by returning an empty set of backends). The same method is used for the config_for_generator.

It takes the following options:

  • method: must be sequential
  • sequential_order: a list of watcher names that will be read in the provided order

Listing Default Servers

You may list a number of default servers providing a service. Each hash in that list has the following options:

  • name: a human-readable name for the default server; must be unique
  • host: the host or IP address of the server
  • port: the port where the service runs on the host

The default_servers list is used only when service discovery returns no servers. In that case, the service proxy will be created with the servers listed here. If you do not list any default servers, no proxy will be created. The default_servers will also be used in addition to discovered servers if the keep_default_servers option is set.

If you do not list any default_servers, and all backends for a service disappear then the previous known backends will be used. Disable this behavior by unsetting use_previous_backends.

The per-service haproxy section is its own hash, which should contain the following keys:

  • disabled: a boolean value indicating whether haproxy configuration management for just this service instance should be disabled, for example if you want file output for a particular service but no HAProxy config (default is false)
  • port: the port (on localhost) where HAProxy will listen for connections to the service. If this is null, just the bind_address will be used (e.g. for unix sockets) and if omitted, only a backend stanza (and no frontend stanza) will be generated for this service. In the case of a bare backend, you'll need to get traffic to your service yourself via the shared_frontend or manual frontends in extra_sections
  • bind_address: force HAProxy to listen on this address (default is localhost). Setting bind_address on a per service basis overrides the global bind_address in the top level haproxy. Having HAProxy listen for connections on different addresses (example: service1 listen on 127.0.0.2:443 and service2 listen on 127.0.0.3:443) allows /etc/hosts entries to point to services.
  • bind_options: optional: default value is an empty string, specify additional bind parameters, such as ssl accept-proxy, crt, ciphers etc.
  • server_port_override: DEPRECATED. Renamed backend_port_override and moved to the top level hash. This will be removed in future versions.
  • server_options: the haproxy options for each server line of the service in the HAProxy config; it may be left out. This field supports some basic templating: you can include %{port}, %{host}, or %{name} in this string, and those will be replaced with the appropriate values for the particular server being configured (see the sketch after this list).
  • frontend: additional lines passed to the HAProxy config in the frontend stanza of this service
  • backend: additional lines passed to the HAProxy config in the backend stanza of this service
  • backend_name: The name of the generated HAProxy backend for this service (defaults to the service's key in the services section)
  • listen: these lines will be parsed and placed in the correct frontend/backend section as applicable; you can put lines which are the same for the frontend and backend here.
  • backend_order: optional: how backends should be ordered in the backend stanza. (default is shuffling). Setting to asc means sorting backends in ascending alphabetical order before generating stanza. desc means descending alphabetical order. no_shuffle means no shuffling or sorting. If you shuffle consider setting server_order_seed at the top level so that your backend ordering is deterministic across HAProxy reloads.
  • shared_frontend: optional: haproxy configuration directives for a shared http frontend (see below)
  • cookie_value_method: optional: default value is name; it defines the way your backends receive a cookie value in http mode. If set to hash, synapse hashes backend names when assigning cookie values to your discovered backends, which is useful when you want to use the HAProxy cookie feature without your end users receiving a Set-Cookie with your server name and IP readable in the clear.
  • use_nerve_weights: optional: enables reading weights from nerve and applying them to the haproxy configuration. By default this is disabled, for the case where users apply weights using server_options or haproxy_server_options. This option also removes the weight parameter from server_options and haproxy_server_options.
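
As referenced above, a sketch of a per-service haproxy section exercising the server_options templating; the port, address, and check options are placeholders:

haproxy:
  port: 3213
  bind_address: "127.0.0.2"
  server_options: "cookie %{name} check port %{port} inter 2000 rise 3 fall 2"
  backend_order: "asc"
  frontend:
    - "mode http"
  backend:
    - "mode http"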

The top level haproxy section of the config file has the following options:

  • do_checks: whether or not Synapse will validate the HAProxy config prior to writing it (defaults to false)
  • check_command: the command Synapse will run to validate HAProxy config
  • candidate_config_file_path: the path to write the pre-validated (candidate) HAProxy config to for the check command
  • do_reloads: whether or not Synapse will reload HAProxy (defaults to true)
  • reload_command: the command Synapse will run to reload HAProxy
  • do_writes: whether or not the config file will be written (defaults to true)
  • config_file_path: where Synapse will write the HAProxy config file
  • do_socket: whether or not Synapse will use the HAProxy socket commands to prevent reloads (defaults to true)
  • socket_file_path: where to find the haproxy stats socket; can be a list (if using nbproc)
  • global: options listed here will be written into the global section of the HAProxy config
  • defaults: options listed here will be written into the defaults section of the HAProxy config
  • extra_sections: additional, manually-configured frontend, backend, or listen stanzas
  • bind_address: force HAProxy to listen on this address (default is localhost)
  • shared_frontend: (OPTIONAL) additional lines passed to the HAProxy config used to configure a shared HTTP frontend (see below)
  • restart_interval: number of seconds to wait between restarts of haproxy (default: 2)
  • restart_jitter: percentage, expressed as a float, of jitter to multiply the restart_interval by when determining the next restart time. Use this to help prevent healthcheck storms when HAProxy restarts. (default: 0.0)
  • state_file_path: full path on disk (e.g. /tmp/synapse/state.json) for caching haproxy state between reloads. If provided, synapse will store recently seen backends at this location and can "remember" backends across both synapse and HAProxy restarts. Any backends that are "down" in the reporter but listed in the cache will be put into HAProxy disabled. Synapse writes the state file every sixty seconds, so the file's age can be used to monitor that Synapse is alive and making progress. (default: nil)
  • state_file_ttl: the number of seconds that backends should be kept in the state file cache. This only applies if state_file_path is provided. (default: 86400)
  • server_order_seed: A number to seed random actions with so that all orders are deterministic. You can use this so that backend ordering is deterministic but still shuffled, for example by setting this to the hash of your machine's IP address you guarantee that HAProxy on different machines have different orders, but within that machine you always choose the same order. (default: rand(2000))
  • max_server_id: Synapse will try to ensure that server lines are written out with HAProxy "id"s that are unique and associated 1:1 with a service backend (host + port + name). To ensure these are unique Synapse internally counts up from 1 until max_server_id, so you can have no more than this number of servers in a backend. If the default (65k) is not enough, make this higher but be wary that HAProxy internally uses an int to store this id, so ... your mileage may vary trying to make this higher. (default: 65535)

Note that a non-default bind_address can be dangerous. If you configure an address:port combination that is already in use on the system, haproxy will fail to start.

The file_output section controls whether or not synapse will write out service state to the filesystem in json format. This can be used by services that want to consume discovery information but not go through HAProxy.

  • output_directory: the path to a directory on disk that service registrations should be written to.
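
A sketch of the corresponding top-level section; the directory is a placeholder:

file_output:
  output_directory: "/var/run/synapse/services"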

HAProxy shared HTTP Frontend

For HTTP-only services, it is not always necessary or desirable to dedicate a TCP port per service, since HAProxy can route traffic based on host headers. To support this, the optional shared_frontend section can be added to both the haproxy section and each individual service definition. Synapse will concatenate them all into a single frontend section in the generated haproxy.cfg file. Note that synapse does not assemble the routing ACLs for you; you have to do that yourself based on your needs. This is probably most useful in combination with the service_conf_dir directive in a case where the individual service config files are being distributed by a configuration manager such as puppet or chef, or bundled into service packages. For example:

 haproxy:
  shared_frontend:
   - "bind 127.0.0.1:8081"
  reload_command: "service haproxy reload"
  config_file_path: "/etc/haproxy/haproxy.cfg"
  socket_file_path:
    - /var/run/haproxy.sock
    - /var/run/haproxy2.sock
  global:
   - "daemon"
   - "user    haproxy"
   - "group   haproxy"
   - "maxconn 4096"
   - "log     127.0.0.1 local2 notice"
   - "stats   socket /var/run/haproxy.sock"
  defaults:
   - "log      global"
   - "balance  roundrobin"
 services:
  service1:
   discovery: 
    method: "zookeeper"
    path:  "/nerve/services/service1"
    hosts:
     - "0.zookeeper.example.com:2181"
   haproxy:
    server_options: "check inter 2s rise 3 fall 2"
    shared_frontend:
     - "acl is_service1 hdr_dom(host) -i service1.lb.example.com"
     - "use_backend service1 if is_service1"
    backend: "mode http"

  service2:
   discovery:
    method: "zookeeper"
    path:  "/nerve/services/service2"
    hosts: "0.zookeeper.example.com:2181"

   haproxy:
    server_options: "check inter 2s rise 3 fall 2"
    shared_frontend:
     - "acl is_service1 hdr_dom(host) -i service2.lb.example.com"
     - "use_backend service2 if is_service2"
    backend:
     - "mode http"

This would produce an haproxy.cfg much like the following:

backend service1
        mode http
        server server1.example.net:80 server1.example.net:80 check inter 2s rise 3 fall 2

backend service2
        mode http
        server server2.example.net:80 server2.example.net:80 check inter 2s rise 3 fall 2

frontend shared-frontend
        bind 127.0.0.1:8081
        acl is_service1 hdr_dom(host) -i service1.lb
        use_backend service1 if is_service1
        acl is_service2 hdr_dom(host) -i service2.lb
        use_backend service2 if is_service2

Non-HTTP backends such as MySQL or RabbitMQ will obviously continue to need their own dedicated ports.

Contributing

Note that now that we have a fully dynamic include system for service watchers and configuration generators, you don't have to PR into the main tree; please do contribute a link, though.

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create new Pull Request

See the Service Watcher README for how to add new Service Watchers.

See the Config Generator README for how to add new Config Generators.

  • synapse-nginx is a config_generator which allows Synapse to automatically configure and administer a local NGINX proxy.


synapse's Issues

In DNS service watcher, re-resolve every $TTL

The DNS service watcher will re-resolve every check_interval seconds. Rather than having to set check_interval, and possibly setting it too high/low, it would be nice if it re-resolved every $TTL (whatever the returned TTL is).

docker watcher - dependent on publicly exposed ports to the parent host.

The current implementation is quite restrictive. For example, when using synapse within a docker container, it uses the host IP in @discovery["servers"].

This means you need to publicly expose all containers in the parent host, rather than just using the private ips of the containers themselves. So rather than having discovery, you end up with a fixed mapping based on ports.

When not using publicly exposed ports, you end up with this invalid configuration, with a broken port:

    server 172.17.42.1:_basket-service 172.17.42.1: cookie 172.17.42.1:_basket-service check inter 2s rise 3 fall 2

        "Ports": {
            "8080/tcp": null,
            "8081/tcp": null
        }

Error with a little debugging...

[ root@52cc33d73359:~ ]$ /usr/bin/ruby1.9.1 /usr/local/bin/synapse -c /synapse.conf.json
I, [2015-01-25T18:22:20.776251 #1971]  INFO -- Synapse::Synapse: synapse: starting...
I, [2015-01-25T18:22:20.781601 #1971]  INFO -- Synapse::Synapse: synapse: regenerating haproxy config
I, [2015-01-25T18:22:20.781990 #1971]  INFO -- Synapse::DnsWatcher: synapse: discovered 1 backends for service http-webservices-member
W, [2015-01-25T18:22:20.782539 #1971]  WARN -- Synapse::Haproxy: synapse: no backends found for watcher http-webservices-basket
I, [2015-01-25T18:22:20.783647 #1971]  INFO -- Synapse::Haproxy: synapse: could not open haproxy config file at /etc/haproxy/haproxy.cfg
Docker::Container { :id => b8589347f04c9e3a811bf78f49149d645cba9b48afccdd52f64308e039ca3106, :connection => Docker::Connection { :url => http://172.17.42.1:4243, :options => {} } }
8080
{"8080"=>"8080", "8081"=>""}
{"8080"=>"", "8081"=>""}
[{"name"=>"basket-service", "host"=>"172.17.42.1", "port"=>"8080"},
 {"name"=>"basket-service", "host"=>"172.17.42.1", "port"=>""}]
I, [2015-01-25T18:22:20.819391 #1971]  INFO -- Synapse::DockerWatcher: synapse: discovered 2 backends for service http-webservices-basket
I, [2015-01-25T18:22:20.821997 #1971]  INFO -- Synapse::Haproxy: synapse: restarted haproxy
I, [2015-01-25T18:22:20.823559 #1971]  INFO -- Synapse::Synapse: synapse: regenerating haproxy config
[ALERT] 024/182222 (2005) : parsing [/etc/haproxy/haproxy.cfg:107] : server 172.17.42.1:_basket-service has neither service port nor check port nor tcp_check rule 'connect' with port information. Check has been disabled.
[ALERT] 024/182222 (2005) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
[ALERT] 024/182222 (2005) : Fatal errors found in configuration.
E, [2015-01-25T18:22:22.844902 #1971] ERROR -- Synapse::Synapse: synapse: encountered unexpected exception #<RuntimeError: failed to reload haproxy via service haproxy reload:  * Reloading haproxy haproxy
   ...fail!> in main thread
W, [2015-01-25T18:22:22.845050 #1971]  WARN -- Synapse::Synapse: synapse: exiting; sending stop signal to all watchers
I, [2015-01-25T18:22:22.845184 #1971]  INFO -- Synapse::DockerWatcher: synapse: stopping watcher http-webservices-basket using default stop handler
I, [2015-01-25T18:22:22.845321 #1971]  INFO -- Synapse::DnsWatcher: synapse: stopping watcher http-webservices-member using default stop handler
/var/lib/gems/1.9.1/gems/synapse-0.11.1/lib/synapse/haproxy.rb:780:in `restart': failed to reload haproxy via service haproxy reload:  * Reloading haproxy haproxy (RuntimeError)
   ...fail!
    from /var/lib/gems/1.9.1/gems/synapse-0.11.1/lib/synapse/haproxy.rb:550:in `update_config'
    from /var/lib/gems/1.9.1/gems/synapse-0.11.1/lib/synapse.rb:51:in `block in run'
    from /var/lib/gems/1.9.1/gems/synapse-0.11.1/lib/synapse.rb:43:in `loop'
    from /var/lib/gems/1.9.1/gems/synapse-0.11.1/lib/synapse.rb:43:in `run'
    from /var/lib/gems/1.9.1/gems/synapse-0.11.1/bin/synapse:60:in `<top (required)>'
    from /usr/local/bin/synapse:23:in `load'
    from /usr/local/bin/synapse:23:in `<main>'

Synapse fails to restart after zookeeper node disappears

I am using the Zookeeper watcher to get a list of available nodes - which it successfully does.

However, after a node fails, it does not get removed from haproxy.

Haproxy monitoring page shows those nodes as down, Zookeeper no longer has them as they are ephemeral, but the reconfiguration does not take place.

If I restart Synapse, however, everything goes back to normal.

I, [2014-10-09T22:17:20.844370 #12]  INFO -- Synapse::ZookeeperWatcher: synapse: discovering backends for service proddb
I, [2014-10-09T22:17:20.861672 #12]  INFO -- Synapse::ZookeeperWatcher: synapse: discovered 6 backends for service proddb
[WARNING] 281/221756 (157) : Server proddb/100.0.10.10:3306_100.0.10.10 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 6 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 281/221756 (157) : Server proddb/100.0.10.10:3306_100.0.10.10 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 2ms. 5 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
I, [2014-10-09T22:18:15.682057 #12]  INFO -- Synapse::ZookeeperWatcher: synapse: discovering backends for service proddb
I, [2014-10-09T22:18:15.690843 #12]  INFO -- Synapse::ZookeeperWatcher: synapse: discovered 5 backends for service proddb

Note: Synapse is deployed in Docker and managed by supervisord

docker watcher proposal: identify services by label instead of port and image name

Docker 1.6 added labels for images and containers.

Instead of requiring the same port for every service and exposing the port to the host (#109, #88), read a com.airbnb.synapse.service-port label from the image or container, and register the service using that port.

Another option might be to keep specifying the port in the watcher config, but look for a com.airbnb.synapse.service label.

Containers without a label would not be registered.

docker watcher proposal: use the events API instead of polling

Replace polling by streaming the events API

  • using the events API filter by the start event to get notified of all new containers that have started
  • when a new container starts, check it against the list of criteria and register it if it matches
  • removes the need to poll, and keep track of the list of previous containers
  • the /containers/json endpoint that it uses now can be pretty slow with large numbers of containers

Be smarter about initial reconfigure

Synapse is designed to re-write the haproxy config & reload haproxy when it starts (set here).

This means that any time synapse is restarted there is a period between when it starts and when the watcher first registers where the defaults are being used (and if there are no defaults you will return 503's from haproxy).

I can see why you would want an initial reconfigure so that any time you restart synapse you know any changes unrelated to backends will get picked up, but it seems like it should be smarter and wait until all watchers have checked in once before doing its initial reconfiguration. I'm happy to submit a PR if this sounds reasonable.

zookeeper watcher won't update haproxy config when adding/removing new node

Has anyone ever gotten synapse+zookeeper+nerve working properly?

I followed the document on this repo, and encountered a couple of issues:

  • synapse works with a YAML config file, although the config file in the documentation is given in JSON format
  • there are a few config items that should be arrays but appear as a single line in the example config
  • finally, I was able to fix the above problems; however, when taking a service online or offline, the change is not reflected in haproxy

here is my setup,

  • synapse config
 haproxy:
  reload_command: "sudo service haproxy reload"
  config_file_path: "/etc/haproxy/haproxy.cfg"
  socket_file_path: "/var/haproxy/haproxy.sock"
  do_writes: True
  do_reloads: True
  do_socket: True
  global:
   - "daemon"
   - "user    haproxy"
   - "group   haproxy"
   - "maxconn 4096"
   - "log     127.0.0.1 local2 notice"
   - "stats   socket /var/haproxy/haproxy.sock mode 666 level admin"
  defaults:
   - "log      global"
   - "balance  roundrobin"
   - "timeout connect 5s"
   - "timeout client 1m"
   - "timeout server 1m"
  extra_sections:
   "listen stats :3212":
    - "mode http"
    - "stats enable"
    - "stats uri /"
    - "stats refresh 5s"
 services:
  hellox:
   discovery:
    method: "zookeeper"
    path: "/nerve/services/hello/services"
    hosts:
     - "localhost:2181"
   haproxy:
    port: 3214
    server_options: "check inter 2s rise 3 fall 2"
    listen:
     - "mode http"
  • nerve config
{
  "instance_id": "localhost",
  "services": {
    "hello": {
      "host": "localhost",
      "port": 3000,
      "reporter_type": "zookeeper",
      "zk_hosts": ["localhost:2181"],
      "zk_path": "/nerve/services/hello/services",
      "check_interval": 2,
      "checks": [
        {
          "type": "http",
          "uri": "/health",
          "timeout": 0.2,
          "rise": 3,
          "fall": 2
        }
      ]
    },
    "hello2": {
      "host": "localhost",
      "port": 2999,
      "reporter_type": "zookeeper",
      "zk_hosts": ["localhost:2181"],
      "zk_path": "/nerve/services/hello/services",
      "check_interval": 2,
      "checks": [
        {
          "type": "http",
          "uri": "/health",
          "timeout": 0.2,
          "rise": 3,
          "fall": 2
        }
      ]
    },
    "hello3": {
      "host": "localhost",
      "port": 2998,
      "reporter_type": "zookeeper",
      "zk_hosts": ["localhost:2181"],
      "zk_path": "/nerve/services/hello/services",
      "check_interval": 2,
      "checks": [
        {
          "type": "http",
          "uri": "/health",
          "timeout": 0.2,
          "rise": 3,
          "fall": 2
        }
      ]
    },
    "hello4": {
      "host": "localhost",
      "port": 2997,
      "reporter_type": "zookeeper",
      "zk_hosts": ["localhost:2181"],
      "zk_path": "/nerve/services/hello/services",
      "check_interval": 2,
      "checks": [
        {
          "type": "http",
          "uri": "/health",
          "timeout": 0.2,
          "rise": 3,
          "fall": 2
        }
      ]
    }
  }
}

zookeeper is listening on port 2181. When I start synapse, there is no service running and there are 0 backend entries in the haproxy config. Then I bring up a few services; I can see new entries get added to zookeeper by nerve, but it seems synapse isn't notified of this, or it doesn't do anything to add the new services to haproxy.

Can anyone please suggest what I am doing wrong? Thanks.

Extending cookie_value_method

Hi, I have a question about how to extend the cookie_value_method. We have a requirement to use a znode id as the cookie value.

/> cat /service/server000
{"host": "192.168.101.102", "name": "service", "port": 12345} 

Taking the example above, I need to expose server000 as the cookie value. There are a few ways to do this.

Since the discovery path is exposed in (https://github.com/airbnb/synapse/blob/master/lib/synapse/service_watcher/zookeeper.rb#L130) it could be simple to extend the node information and add the value of id?

Another solution is to resolve the backends IP. (jheung-r7@730e8da)

I have implemented both but was wondering which would be the best for a PR?

~ Joe

Docker watcher: Port is required?

Sorry for this being rather a question than an issue… the Docker watcher requires a port for each service, if I understand the code right. Wouldn't it make sense to have the watcher just look for an image-tag combination and figure out the port (or ports if there are several) by itself? That way one could spin up a new deploy with a "staging" tag, test drive it, then add a "production" tag and take down the old deploy. Or am I missing something?

ec2tag watcher needs to back off if Client.RequestLimitExceeded is returned

If you have a reasonable number of ec2tag watchers running, you can hit the rate limiter. Rather than a fixed check_interval, there should be a way of doing some kind of backoff (as per the AWS docs). If the Client.RequestLimitExceeded is returned, then there definitely needs to be a backoff - otherwise you just continue to hammer on the door, and the rate limit exceedance will never go away.

I'm yet to discover what the rate limit is, but I hit it today with ~6 servers each watching ~24 ec2tags (plus various other odds and sods running in the background) and the default check_interval.

Missing discovery method when trying to create watcher

Hello,

I am trying to get Synapse up and running, however, I have encountered the following error:

/usr/local/lib/ruby/gems/2.1.0/gems/synapse-0.10.0/lib/synapse/service_watcher.rb:24:in `create': Missing discovery method when trying to create watcher (ArgumentError)
    from /usr/local/lib/ruby/gems/2.1.0/gems/synapse-0.10.0/lib/synapse.rb:72:in `block in create_service_watchers'
    from /usr/local/lib/ruby/gems/2.1.0/gems/synapse-0.10.0/lib/synapse.rb:71:in `each'
    from /usr/local/lib/ruby/gems/2.1.0/gems/synapse-0.10.0/lib/synapse.rb:71:in `create_service_watchers'
    from /usr/local/lib/ruby/gems/2.1.0/gems/synapse-0.10.0/lib/synapse.rb:18:in `initialize'
    from /usr/local/lib/ruby/gems/2.1.0/gems/synapse-0.10.0/bin/synapse:59:in `new'
    from /usr/local/lib/ruby/gems/2.1.0/gems/synapse-0.10.0/bin/synapse:59:in `<top (required)>'
    from /usr/local/bin/synapse:23:in `load'
    from /usr/local/bin/synapse:23:in `<main>'

My config looks like this:


---
  services: 
    application: 
      default_servers: 
        - 
          name: "default-app"
          host: "111.111.111.111"
          port: 443
      discovery: 
        method: "zookeeper"
        path: "/services/application"
        hosts: 
          - "xhbtr.site.com:2181"
          - "xhbtr2.site.com:2181"
          - "xhbtr3.site.com:2181"
      haproxy: 
        port: 3214
        server_options: "check inter 2s rise 3 fall 2"
        listen: 
          - "mode http"
          - "option httpchk /health"
          - "http-check expect string OK"
    haproxy: 
      reload_command: "sudo service haproxy reload"
      config_file_path: "/etc/haproxy/haproxy.cfg"
      socket_file_path: "/var/haproxy/stats.sock"
      do_writes: true
      do_reloads: true
      do_socket: false
      global: 
        - "daemon"
        - "user haproxy"
        - "group haproxy"
        - "maxconn 4096"
        - "log     127.0.0.1 local0"
        - "log     127.0.0.1 local1 notice"
        - "stats   socket /var/haproxy/stats.sock mode 666 level admin"
      defaults: 
        - "log      global"
        - "option   dontlognull"
        - "maxconn  2000"
        - "retries  3"
        - "timeout  connect 5s"
        - "timeout  client  1m"
        - "timeout  server  1m"
        - "option   redispatch"
        - "balance  leastconn"
      extra_sections: 
        listen stats :3212: 
          - "mode http"
          - "stats enable"
          - "stats uri /"
          - "stats refresh 5s"

What am I missing?

Thanks!

gem install synapse error

Hi,
I have run the command below:
gem install synapse --install-dir /opt/smartstack/synapse --no-ri --no-rdoc

but I get an error. I don't know what it means; can you help me fix it?

The error message is below:

/opt/smartstack/synapse/gems/mini_portile2-2.0.0/lib/mini_portile2/mini_portile.rb:79:in `apply_patch': Failed to complete patch task; patch(1) or git(1) is required. (RuntimeError)
    from /opt/smartstack/synapse/gems/mini_portile2-2.0.0/lib/mini_portile2/mini_portile.rb:87:in `block in patch'
    from /opt/smartstack/synapse/gems/mini_portile2-2.0.0/lib/mini_portile2/mini_portile.rb:85:in `each'
    from /opt/smartstack/synapse/gems/mini_portile2-2.0.0/lib/mini_portile2/mini_portile.rb:85:in `patch'
    from /opt/smartstack/synapse/gems/mini_portile2-2.0.0/lib/mini_portile2/mini_portile.rb:148:in `cook'
    from extconf.rb:289:in `block (2 levels) in process_recipe'
    from extconf.rb:182:in `block in chdir_for_build'
    from extconf.rb:181:in `chdir'
    from extconf.rb:181:in `chdir_for_build'
    from extconf.rb:288:in `block in process_recipe'
    from extconf.rb:187:in `tap'
    from extconf.rb:187:in `process_recipe'
    from extconf.rb:490:in `<main>'
ERROR:  Error installing synapse:
    ERROR: Failed to build gem native extension.

    Building has failed. See above output for more information on the failure.
extconf failed, exit code 1

Gem files will remain installed in /opt/smartstack/synapse/gems/nokogiri-1.6.8.rc1 for inspection.
Results logged to /opt/smartstack/synapse/extensions/x86_64-linux/2.1.0-static/nokogiri-1.6.8.rc1/gem_make.out

semi-catatonic synapse

I've noticed this now once or twice, and it's a little disturbing. A synapse process with multiple services defined eventually goes catatonic after a certain amount (as yet unquantified) of churn in the zookeeper backends. The symptoms are as follows:

  • the synapse process ignores SIGTERM, and can only be terminated with SIGKILL.
  • haproxy stops being updated despite nerve on the churned servers correctly reporting status to zookeeper. For example:
    image

Compare to the currently-live list in ZK:

ls /nerve/services/qa1/ministry-worker/default
time = 0 msec
/nerve/services/qa1/ministry-worker/default: rc = 0
        i-e1e53b9b_ministry-worker
        i-d400ebfa_ministry-worker
        i-e3e53b99_ministry-worker
        i-dde53ba7_ministry-worker
        i-d500ebfb_ministry-worker
        i-d900ebf7_ministry-worker
        i-d800ebf6_ministry-worker
        i-de00ebf0_ministry-worker
        i-db00ebf5_ministry-worker
        i-ebe53b91_ministry-worker
        i-df00ebf1_ministry-worker
        i-da00ebf4_ministry-worker
  • the output log, however, is still being updated:
2013-12-26 01:05:39.679608500 D, [2013-12-26T01:05:39.679352 #21871] DEBUG -- : synapse: still running at 2013-12-26 01:05:39 +0000
2013-12-26 01:06:42.752187500 D, [2013-12-26T01:06:42.752094 #21871] DEBUG -- : synapse: still running at 2013-12-26 01:06:42 +0000
2013-12-26 01:07:45.824888500 D, [2013-12-26T01:07:45.824624 #21871] DEBUG -- : synapse: still running at 2013-12-26 01:07:45 +0000
2013-12-26 01:08:48.807369500 D, [2013-12-26T01:08:48.807255 #21871] DEBUG -- : synapse: still running at 2013-12-26 01:08:48 +0000
2013-12-26 01:09:51.676951500 D, [2013-12-26T01:09:51.676688 #21871] DEBUG -- : synapse: still running at 2013-12-26 01:09:51 +0000
2013-12-26 01:10:54.518216500 D, [2013-12-26T01:10:54.517993 #21871] DEBUG -- : synapse: still running at 2013-12-26 01:10:54 +0000
2013-12-26 01:11:57.621308500 D, [2013-12-26T01:11:57.621061 #21871] DEBUG -- : synapse: still running at 2013-12-26 01:11:57 +0000
2013-12-26 01:13:02.798093500 D, [2013-12-26T01:13:02.797806 #21871] DEBUG -- : synapse: still running at 2013-12-26 01:13:02 +0000
  • nothing much seems to be happening syscall-wise:
sched_yield()                           = 0
sched_yield()                           = 0
sched_yield()                           = 0
sched_yield()                           = 0
[...and so on...]
  • if I SIGKILL synapse and restart it, it immediately comes up with a correct list of backends:

image

ZookeeperWatcher does not handle NoNode failures

There is a race condition in how the ZookeeperWatcher discovers backends where we can crash the whole Synapse process if we try to read a node that doesn't exist after the list. For example:

E, [2016-08-02T17:02:32.909587 #113325] ERROR -- Synapse::Synapse: synapse: encountered unexpected exception #<ZK::Exceptions::NoNode: inputs: {:path=>"/nerve/..."}> in main thread

We've seen this cause stability issues in production, let's catch that NoNode exception and continue.

synapse: error polling docker host http://localhost:4243: #<Docker::Error::ClientError: Expected(200..204) <=> Actual(400 Bad Request)

Hello,
Is synapse incompatible with docker-api 1.20? I got this warning when running synapse.
I attached the environment info; how do I resolve it?

root@VirtualBox:/var/lib/gems/1.9.1/gems/synapse-0.12.2# synapse -c /etc/synapse.json.conf
I, [2016-01-20T16:15:03.797821 #13184] INFO -- Synapse::Synapse: synapse: starting...
I, [2016-01-20T16:15:03.798011 #13184] INFO -- Synapse::Synapse: synapse: configuring haproxy
I, [2016-01-20T16:15:03.798862 #13184] INFO -- Synapse::Haproxy: synapse: reconfigured haproxy
W, [2016-01-20T16:15:03.807022 #13184] WARN -- Synapse::ServiceWatcher::DockerWatcher: synapse: error polling docker host http://localhost:4243: #<Docker::Error::ClientError: Expected(200..204) <=> Actual(400 Bad Request)

I, [2016-01-20T16:15:03.808142 #13184] INFO -- Synapse::Haproxy: synapse: restarted haproxy
W, [2016-01-20T16:15:18.815548 #13184] WARN -- Synapse::ServiceWatcher::DockerWatcher: synapse: error polling docker host http://localhost:4243: #<Docker::Error::ClientError: Expected(200..204) <=> Actual(400 Bad Request)

root@VirtualBox:/var/lib/gems/1.9.1/gems/synapse-0.12.2# docker version
Client:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Wed Oct 7 17:48:28 UTC 2015
OS/Arch: linux/amd64

Server:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Wed Oct 7 17:48:28 UTC 2015
OS/Arch: linux/amd64

root@VirtualBox:/var/lib/gems/1.9.1/gems/synapse-0.12.2# cat synapse.gemspec

# -*- encoding: utf-8 -*-

lib = File.expand_path('../lib', __FILE__)
$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
require 'synapse/version'

Gem::Specification.new do |gem|
gem.name = "synapse"
gem.version = Synapse::VERSION
gem.authors = ["Martin Rhoads"]
gem.email = ["[email protected]"]
gem.description = %q{: Write a gem description}
gem.summary = %q{: Write a gem summary}
gem.homepage = ""

gem.files = `git ls-files`.split($/)
gem.executables = gem.files.grep(%r{^bin/}).map{ |f| File.basename(f) }
gem.test_files = gem.files.grep(%r{^(test|spec|features)/})

gem.add_runtime_dependency "aws-sdk", "> 1.39"
gem.add_runtime_dependency "docker-api", "
> 1.7.2"
gem.add_runtime_dependency "zk", "~> 1.9.4"

gem.add_development_dependency "rake"
gem.add_development_dependency "rspec", "~> 3.1.0"
gem.add_development_dependency "pry"
gem.add_development_dependency "pry-nav"
gem.add_development_dependency "webmock"
end

root@VirtualBox:/var/lib/gems/1.9.1/gems/synapse-0.12.2# uname -an
Linux VirtualBox 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Thanks in advance.

How to auto-detect a brand new service without rewrite the service section in synapse configuration

Synapse is very useful, but I'm facing some problems when using it.

Synapse refreshes the HAProxy config by watching for changes in ZK.

When services come up, it updates the HAProxy config file.

 But what about when I want to add a brand new service that has never been written into the "services"
 section of the Synapse config file?

 How can I update all my Synapse config files on all the servers
 I maintain without manually rewriting the configuration?

 Thanks

Ubuntu 13.10 ruby 1.9.3 synapse not working

Any idea how I can fix this? I ran "sudo gem install synapse" prior to the below.

vagrant@packer-virtualbox-iso:~$ ruby -v
ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux]

vagrant@packer-virtualbox-iso:~$ synapse -h
Welcome to synapse

Usage: synapse --config /path/to/synapse/config
-c, --config config path to synapse config
-h, --help Display this screen

vagrant@packer-virtualbox-iso:$ synapse
/var/lib/gems/1.9.1/gems/synapse-0.9.1/bin/synapse:40:in `rescue in parseconfig': uninitialized constant Psych::ParseError (NameError)
    from /var/lib/gems/1.9.1/gems/synapse-0.9.1/bin/synapse:34:in `parseconfig'
    from /var/lib/gems/1.9.1/gems/synapse-0.9.1/bin/synapse:46:in `<top (required)>'
    from /usr/local/bin/synapse:23:in `load'
    from /usr/local/bin/synapse:23:in `<main>'
vagrant@packer-virtualbox-iso:$

dns watcher ping assumes dns server resolves public addresses

The dns watcher's ping method declares the watcher dead if it can't resolve airbnb.com. This doesn't work if you are in a private network (e.g. AWS VPC) and are using a DNS server that only resolves private domains.

Potential alternatives:

  • allow the domain for the ping to be configurable
  • don't care if the domain is not found (the fact that it got a proper response from the dns server is good enough)

Release?

Lost a chunk of time today before noticing that the current rubygems release is way behind master (0.2.1 vs 0.8.0).

Any chance of a release?

Synapse and new nodes / auto scaling

From my understanding so far, we need to restart synapse for it to become aware of new nodes that are added to a service, specifically in the case of autoscaling.
Is there a way we can make synapse aware of new nodes so that it automatically reloads?

File output does not purge unknown services

I came across a fun bug today where we removed a service and synapse decided to stop managing the output file for us. This is somewhat surprising and we should probably purge any service files we don't recognize.
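
A rough sketch of the kind of cleanup this would take, assuming the file output generator writes one file per service into a single output directory (the directory layout and file naming here are assumptions for illustration):

# Sketch only: prune output files for services Synapse no longer watches.
require 'fileutils'

def purge_unknown_service_files(output_directory, known_services)
  expected = known_services.map { |name| "#{name}.json" }
  Dir.glob(File.join(output_directory, '*.json')).each do |path|
    next if expected.include?(File.basename(path))
    # this file belongs to a service that was removed from the config
    FileUtils.rm_f(path)
  end
end

# e.g. purge_unknown_service_files('/run/synapse/services', ['users', 'proddb'])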

is this maintained? any point in filing pull requests?

I've noticed a lot of documentation issues that I'm currently stumbling over. There are also a few pull requests with useful features (#98 comes to mind) that have been open for months without seeing any attention.

Would I be wasting my time opening a pull request here?

Trouble with getting Synapse to Start

Hi,

I've been playing around with ZooKeeper + Nerve + Synapse to try and get a hello_world type setup working. I got Nerve to register a hello_service with ZooKeeper at the path /nerve/services/userslist/services. I'm having some trouble setting up Synapse and HAProxy to abstract the connection to that service for my hello_world app, though, and that's probably because I don't understand how to configure everything properly.

The setup:

hello_world:
listening: localhost:8000
Originally directly hits http://localhost:8200/users to get some dummy data
Trying to set it up to hit http://localhost:3213/<endpoint> instead

hello_service:
listening: localhost:8200
Serves up a generic json response at /users

ZooKeeper:
listening: localhost:2181
path with the service registered: /nerve/services/userslist/services

I start off by launching ZooKeeper, hello_service, hello_world, and Nerve. I can do a get /nerve/services/userslist/services in the ZooKeeper CLI and see that Nerve has hello_service in there.

When I try to start Synapse with the configuration below, I get this response and it stops at the last line:

synapse -c /root/src/synapse/synapse.conf.json
I, [2015-06-14T18:17:59.255979 #2046]  INFO -- Synapse::Synapse: synapse: starting...
I, [2015-06-14T18:17:59.256148 #2046]  INFO -- Synapse::ZookeeperWatcher: synapse: starting ZK watcher users @ hosts: localhost:2181, path: /nerve/services/userslist/services
I, [2015-06-14T18:17:59.256208 #2046]  INFO -- Synapse::ZookeeperWatcher: synapse: zookeeper watcher connecting to ZK at localhost:2181
I, [2015-06-14T18:17:59.274773 #2046]  INFO -- Synapse::ZookeeperWatcher: synapse: discovering backends for service users
I, [2015-06-14T18:17:59.277248 #2046]  INFO -- Synapse::ZookeeperWatcher: synapse: discovered 1 backends for service users
I, [2015-06-14T18:17:59.277395 #2046]  INFO -- Synapse::Synapse: synapse: regenerating haproxy config

It looks to me like it's trying to generate an haproxy.cfg file in the path specified in "config_file_path" in synapse.conf.json. I created an empty haproxy.cfg in that path, but Synapse still gets stuck there. I can also see that Synapse is running in the background but HAProxy isn't. I've tried starting HAProxy before Synapse, but I still get the same results. I also noticed that /var/haproxy (for stats.sock) didn't exist, so I created that folder; the .sock file still didn't get created when I ran Synapse. I've also tried hitting localhost:3212 and localhost:3213, and they both return a connection refused error.

My synapse.conf.json is pasted down below. I'm doing this testing on an Ubuntu 14.04 Docker image. My HAProxy version is 1.4.24 and was installed before Synapse. Any help or suggestions would be greatly appreciated!

{
  "services": {
    "users": {
      "default_servers": [
        {
          "name": "default1",
          "host": "localhost",
          "port": 8200
        }
      ],
      "discovery": {
        "method": "zookeeper",
        "path": "/nerve/services/userslist/services",
        "hosts": [
          "localhost:2181"
        ]
      },
      "haproxy": {
        "port": 3213,
        "server_options": "check inter 2s rise 3 fall 2",
        "listen": [
          "mode http",
        ],
        "frontend" : ["mode tcp"],
        "backend" : ["mode tcp"]
      }
    }
  },
  "haproxy": {
    "reload_command": "service haproxy reload",
    "config_file_path": "/root/src/synapse/haproxy.cfg",
    "socket_file_path": "/var/haproxy/stats.sock",
    "do_writes": false,
    "do_reloads": false,
    "do_socket": false,
    "global": [
      "daemon",
      "user haproxy",
      "group haproxy",
      "maxconn 4096",
      "log     127.0.0.1 local0",
      "log     127.0.0.1 local1 notice",
      "stats   socket /var/haproxy/stats.sock mode 666 level admin"
    ],
    "defaults": [
      "log      global",
      "option   dontlognull",
      "maxconn  2000",
      "retries  3",
      "timeout  connect 5s",
      "timeout  client  1m",
      "timeout  server  1m",
      "option   redispatch",
      "balance  roundrobin"
    ],
    "extra_sections": {
      "listen stats :3212": [
        "mode http",
        "stats enable",
        "stats uri /",
        "stats refresh 5s"
      ]
    }
  }
}

DnsWatcher < BaseWatcher

We were trying out this service at our organization in an internal datacenter and we temporarily lost our internet connection. Because resolving airbnb's external address is required as part of 'ping?', the outage caused synapse to fail entirely. Perhaps throwing a recoverable error rather than a runtime exception would help prevent the entire cluster from collapsing.

I, [2016-02-17T17:34:09.333142 #8263]  INFO -- Synapse::Haproxy: synapse: restarted haproxy
E, [2016-02-18T16:58:00.932196 #8263] ERROR -- Synapse::Synapse: synapse: encountered unexpected exception #<RuntimeError: synapse: service watcher mongo failed ping!> in main thread
W, [2016-02-18T16:58:00.932830 #8263]  WARN -- Synapse::Synapse: synapse: exiting; sending stop signal to all watchers
I, [2016-02-18T16:58:00.932962 #8263]  INFO -- Synapse::ZookeeperDnsWatcher: synapse: stopping watcher mongo using default stop handler
W, [2016-02-18T16:58:00.933561 #8263]  WARN -- Synapse::ZookeeperDnsWatcher::Zookeeper: synapse: zookeeper watcher exiting
I, [2016-02-18T16:58:00.934565 #8263]  INFO -- Synapse::ZookeeperDnsWatcher::Zookeeper: synapse: zookeeper watcher cleaning up
I, [2016-02-18T16:58:00.934961 #8263]  INFO -- Synapse::ZookeeperDnsWatcher::Zookeeper: synapse: closing zk connection to 172.16.151.82:2181,172.16.151.86:2181,172.16.151.87:2181
I, [2016-02-18T16:58:00.937347 #8263]  INFO -- Synapse::ZookeeperDnsWatcher::Zookeeper: synapse: zookeeper watcher cleaned up successfully
/var/lib/mesos/slaves/20160213-002027-1469517996-5050-555-S4/frameworks/20160204-212329-1385631916-5050-163-0000/executors/thermos-1455730435744-bright-devel-authserve-0-6e52c164-8fcb-4e00-bc33-62523f32ad4c/runs/4810df56-7a1d-4bd1-9c2c-84f2c041a181/sandbox/synapse/lib/ruby/gems/2.2.0/gems/synapse-0.12.1/lib/synapse.rb:54:in `block (2 levels) in run': synapse: service watcher mongo failed ping! (RuntimeError)
    from /var/lib/mesos/slaves/20160213-002027-1469517996-5050-555-S4/frameworks/20160204-212329-1385631916-5050-163-0000/executors/thermos-1455730435744-bright-devel-authserve-0-6e52c164-8fcb-4e00-bc33-62523f32ad4c/runs/4810df56-7a1d-4bd1-9c2c-84f2c041a181/sandbox/synapse/lib/ruby/gems/2.2.0/gems/synapse-0.12.1/lib/synapse.rb:53:in `each'
    from /var/lib/mesos/slaves/20160213-002027-1469517996-5050-555-S4/frameworks/20160204-212329-1385631916-5050-163-0000/executors/thermos-1455730435744-bright-devel-authserve-0-6e52c164-8fcb-4e00-bc33-62523f32ad4c/runs/4810df56-7a1d-4bd1-9c2c-84f2c041a181/sandbox/synapse/lib/ruby/gems/2.2.0/gems/synapse-0.12.1/lib/synapse.rb:53:in `block in run'
    from /var/lib/mesos/slaves/20160213-002027-1469517996-5050-555-S4/frameworks/20160204-212329-1385631916-5050-163-0000/executors/thermos-1455730435744-bright-devel-authserve-0-6e52c164-8fcb-4e00-bc33-62523f32ad4c/runs/4810df56-7a1d-4bd1-9c2c-84f2c041a181/sandbox/synapse/lib/ruby/gems/2.2.0/gems/synapse-0.12.1/lib/synapse.rb:52:in `loop'
    from /var/lib/mesos/slaves/20160213-002027-1469517996-5050-555-S4/frameworks/20160204-212329-1385631916-5050-163-0000/executors/thermos-1455730435744-bright-devel-authserve-0-6e52c164-8fcb-4e00-bc33-62523f32ad4c/runs/4810df56-7a1d-4bd1-9c2c-84f2c041a181/sandbox/synapse/lib/ruby/gems/2.2.0/gems/synapse-0.12.1/lib/synapse.rb:52:in `run'
    from /var/lib/mesos/slaves/20160213-002027-1469517996-5050-555-S4/frameworks/20160204-212329-1385631916-5050-163-0000/executors/thermos-1455730435744-bright-devel-authserve-0-6e52c164-8fcb-4e00-bc33-62523f32ad4c/runs/4810df56-7a1d-4bd1-9c2c-84f2c041a181/sandbox/synapse/lib/ruby/gems/2.2.0/gems/synapse-0.12.1/bin/synapse:60:in `<top (required)>'
    from /var/lib/mesos/slaves/20160213-002027-1469517996-5050-555-S4/frameworks/20160204-212329-1385631916-5050-163-0000/executors/thermos-1455730435744-bright-devel-authserve-0-6e52c164-8fcb-4e00-bc33-62523f32ad4c/runs/4810df56-7a1d-4bd1-9c2c-84f2c041a181/sandbox/synapse/.bin/synapse:23:in `load'
    from /var/lib/mesos/slaves/20160213-002027-1469517996-5050-555-S4/frameworks/20160204-212329-1385631916-5050-163-0000/executors/thermos-1455730435744-bright-devel-authserve-0-6e52c164-8fcb-4e00-bc33-62523f32ad4c/runs/4810df56-7a1d-4bd1-9c2c-84f2c041a181/sandbox/synapse/.bin/synapse:23:in `<main>'
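
A rough sketch of the suggested behaviour, assuming the main loop checks each watcher's ping? roughly the way the backtrace implies (the method and variable names here are illustrative, not the actual lib/synapse.rb code):

# Sketch only: treat a failed ping as a warning instead of a fatal error,
# so a transient DNS outage does not bring the whole process down.
def check_watchers(watchers, log)
  watchers.each do |watcher|
    next if watcher.ping?
    # previously a RuntimeError was raised here, killing the main thread
    log.warn "synapse: service watcher #{watcher.name} failed ping!; will retry on the next loop"
  end
end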

HAProxy configuration is not being updated

I've spent part of today trying to learn Synapse as I think it could be useful in my infrastructure. I set up a Vagrant box, where I was able to get it to successfully query the AWS API. The issue though was that it never updated the HAProxy configuration. I have committed this learning exercise to GitHub so it can be reproduced easily. I've also copied a portion of the README here for coherence.

Reproduction

vagrant up
vagrant ssh
cd /vagrant
java -jar synapse.jar --config bug.json
# quit
cat /etc/haproxy/haproxy.cfg

Synapse clearly indicates via standard output that it has identified the two EC2 instances.
It does not modify /etc/haproxy/haproxy.cfg as configured, even though it reports that it has regenerated the config.

synapse.jar was built with the source at e012cd3 in airbnb/synapse.

undefined method `fetch' for nil:NilClass

I started getting the following error on a few machines where the latest Synapse release was deployed. Still checking whether anything else could have caused it, but any tips would be appreciated:

I, [2016-02-23T19:44:59.815064 #1821]  INFO -- Synapse::Synapse: synapse: starting...
I, [2016-02-23T19:44:59.815198 #1821]  INFO -- Synapse::ServiceWatcher::BaseWatcher: synapse: starting stub watcher; this means doing nothing at all!
I, [2016-02-23T19:44:59.815262 #1821]  INFO -- Synapse::Synapse: synapse: configuring haproxy
W, [2016-02-23T19:44:59.815437 #1821]  WARN -- Synapse::Haproxy: synapse: unhandled error reading stats socket: #<Errno::ECONNREFUSED: Connection refused - connect(2) for /var/haproxy/stats.sock>
E, [2016-02-23T19:44:59.815793 #1821] ERROR -- Synapse::Synapse: synapse: encountered unexpected exception #<NoMethodError: undefined method `fetch' for nil:NilClass> in main thread
W, [2016-02-23T19:44:59.815891 #1821]  WARN -- Synapse::Synapse: synapse: exiting; sending stop signal to all watchers
I, [2016-02-23T19:44:59.815950 #1821]  INFO -- Synapse::ServiceWatcher::BaseWatcher: synapse: stopping watcher proddb using default stop handler
/usr/local/lib/ruby/gems/2.2.0/gems/synapse-0.13.1/lib/synapse/haproxy.rb:702:in `generate_backend_stanza': undefined method `fetch' for nil:NilClass (NoMethodError)
        from /usr/local/lib/ruby/gems/2.2.0/gems/synapse-0.13.1/lib/synapse/haproxy.rb:602:in `block in generate_config'
        from /usr/local/lib/ruby/gems/2.2.0/gems/synapse-0.13.1/lib/synapse/haproxy.rb:599:in `each'
        from /usr/local/lib/ruby/gems/2.2.0/gems/synapse-0.13.1/lib/synapse/haproxy.rb:599:in `generate_config'
        from /usr/local/lib/ruby/gems/2.2.0/gems/synapse-0.13.1/lib/synapse/haproxy.rb:585:in `update_config'
        from /usr/local/lib/ruby/gems/2.2.0/gems/synapse-0.13.1/lib/synapse.rb:61:in `block (2 levels) in run'
        from /usr/local/lib/ruby/gems/2.2.0/gems/synapse-0.13.1/lib/synapse.rb:59:in `each'
        from /usr/local/lib/ruby/gems/2.2.0/gems/synapse-0.13.1/lib/synapse.rb:59:in `block in run'
        from /usr/local/lib/ruby/gems/2.2.0/gems/synapse-0.13.1/lib/synapse.rb:52:in `loop'
        from /usr/local/lib/ruby/gems/2.2.0/gems/synapse-0.13.1/lib/synapse.rb:52:in `run'
        from /usr/local/lib/ruby/gems/2.2.0/gems/synapse-0.13.1/bin/synapse:60:in `<top (required)>'
        from /usr/local/bin/synapse:23:in `load'
        from /usr/local/bin/synapse:23:in `<main>'

When using default servers as fallback, synapse is looking for a Zookeeper method

Not sure if I am doing something wrong, or if that is a bug, but here it is:

After synapse fails to fetch backends from Zookeeper and falls back to the default server list, it still tries to call the method "shuffle":

W, [2014-06-06T20:24:36.814618 #14]  WARN -- Synapse::ZookeeperWatcher: synapse: no backends for service application; using default servers: {"name"=>"default-app", "host"=>"111.111.111.111", "port"=>443}
I, [2014-06-06T20:24:36.817710 #14]  INFO -- Synapse::Synapse: synapse: regenerating haproxy config
E, [2014-06-06T20:24:36.818293 #14] ERROR -- Synapse::Synapse: synapse: encountered unexpected exception #<NoMethodError: undefined method `shuffle' for {"name"=>"default-app", "host"=>"111.111.111.111", "port"=>443}:Hash> in main thread
W, [2014-06-06T20:24:36.818395 #14]  WARN -- Synapse::Synapse: synapse: exiting; sending stop signal to all watchers
W, [2014-06-06T20:24:36.818452 #14]  WARN -- Synapse::ZookeeperWatcher: synapse: zookeeper watcher exiting
I, [2014-06-06T20:24:36.830512 #14]  INFO -- Synapse::ZookeeperWatcher: synapse: zookeeper watcher cleaned up successfully
/usr/local/lib/ruby/gems/2.1.0/gems/synapse-0.10.0/lib/synapse/haproxy.rb:654:in `generate_backend_stanza': undefined method `shuffle' for {"name"=>"default-app", "host"=>"111.111.111.111", "port"=>443}:Hash (NoMethodError)
    from /usr/local/lib/ruby/gems/2.1.0/gems/synapse-0.10.0/lib/synapse/haproxy.rb:551:in `block in generate_config'
    from /usr/local/lib/ruby/gems/2.1.0/gems/synapse-0.10.0/lib/synapse/haproxy.rb:548:in `each'
    from /usr/local/lib/ruby/gems/2.1.0/gems/synapse-0.10.0/lib/synapse/haproxy.rb:548:in `generate_config'
    from /usr/local/lib/ruby/gems/2.1.0/gems/synapse-0.10.0/lib/synapse/haproxy.rb:534:in `update_config'
    from /usr/local/lib/ruby/gems/2.1.0/gems/synapse-0.10.0/lib/synapse.rb:45:in `block in run'
    from /usr/local/lib/ruby/gems/2.1.0/gems/synapse-0.10.0/lib/synapse.rb:37:in `loop'
    from /usr/local/lib/ruby/gems/2.1.0/gems/synapse-0.10.0/lib/synapse.rb:37:in `run'
    from /usr/local/lib/ruby/gems/2.1.0/gems/synapse-0.10.0/bin/synapse:60:in `<top (required)>'
    from /usr/local/bin/synapse:23:in `load'
    from /usr/local/bin/synapse:23:in `<main>'
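
For what it's worth, the exception shows shuffle being called on a Hash rather than an Array, which is what happens if default_servers ends up parsed as a single mapping instead of a list of mappings (that is my reading of the log above, not a confirmed diagnosis):

# Illustration only: Array responds to shuffle, Hash does not.
servers_as_list = [{ 'name' => 'default-app', 'host' => '111.111.111.111', 'port' => 443 }]
servers_as_hash =  { 'name' => 'default-app', 'host' => '111.111.111.111', 'port' => 443 }

servers_as_list.shuffle # => shuffled array of one backend
servers_as_hash.shuffle # => NoMethodError, matching the trace above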

HAProxy with ActiveMQ

Is it possible to configure Synapse to work with ActiveMQ?
When creating the ActiveMQ connection, what URL should I pass? I can't use the URL of the HAProxy, since it throws a "connection refused" error.
I didn't find information in the README on how to configure the application itself to use Synapse; only the other side, Synapse to the watchers, is described. Can someone please give more information about that?

Centos Compatibility & Higher Version of Ruby Support

synapse version - v0.12.2.
nerve version - v0.6.0

The cookbook breaks here on CentOS. I'm running it on CentOS 6.5 64-bit:
Error executing action run on resource 'execute[synapse_install]'

It appears the cookbook requires a higher version of Ruby; CentOS currently installs v1.8, and there seems to be no easy way of installing a newer Ruby on CentOS. How should I go about this?

ELB config - Error "Name or service not known"

Hello,

I installed Synapse on a Ubuntu 14.04 server.

When I try to launch it using the following config, I receive a warning "Name or service not known" in response.

I run haproxy 1.6.3, ruby 2.2.1 and bundler 1.11.2.

Synapse correctly rewrites the haproxy config file, but it seems to fail to find my instances.
Is there a debug mode I could activate to track the error more precisely?

Thanks a lot

Config file:


---
  services:
    myservice:
      default_servers:
        -
          name: "elb"
          host: "ELB_IP"
          port: 80
      discovery:
        method: "ec2tag"
        tag_name: "TAG_NAME"
        tag_value: "TAG_VALUE"
        aws_access_key_id: "AWS_ACCESS_KEY_ID"
        aws_secret_access_key: "AWS_SECRET_ACCESS_KEY"
        aws_region: "AWS_REGION"
      haproxy:
        port: 3213
        server_port_override: "7000"
        server_options: "check inter 2000 rise 3 fall 2"
        frontend:
          - "mode tcp"
        backend:
          - "mode tcp"
  haproxy:
    bind_address: "0.0.0.0"
    reload_command: "service haproxy reload"
    config_file_path: "/etc/haproxy/haproxy.cfg"
    do_writes: true
    do_reloads: true
    global:
      - "log 127.0.0.1 local0"
      - "log 127.0.0.1 local1 notice"
      - "user haproxy"
      - "group haproxy"
    defaults:
      - "log global"
      - "balance roundrobin"
      - "timeout client 50s"
      - "timeout connect 5s"
      - "timeout server 50s"

Response:

I, [2016-02-08T16:39:00.007426 #2617]  INFO -- Synapse::Synapse: synapse: starting...
I, [2016-02-08T16:39:00.007588 #2617]  INFO -- Synapse::ServiceWatcher::Ec2tagWatcher: Connecting to EC2 region: AWS_REGION
I, [2016-02-08T16:39:00.696891 #2617]  INFO -- Synapse::ServiceWatcher::Ec2tagWatcher: synapse: ec2tag watcher looking for instances tagged with TAG_NAME=TAG_VALUE
I, [2016-02-08T16:39:00.697127 #2617]  INFO -- Synapse::Synapse: synapse: configuring haproxy
W, [2016-02-08T16:39:00.697281 #2617]  WARN -- Synapse::Haproxy: synapse: unhandled error reading stats socket: #<TypeError: no implicit conversion of nil into String>
I, [2016-02-08T16:39:00.721859 #2617]  INFO -- Synapse::Haproxy: synapse: restarted haproxy
W, [2016-02-08T16:39:02.886652 #2617]  WARN -- Synapse::ServiceWatcher::Ec2tagWatcher: synapse: error in ec2tag watcher thread: #<SocketError: getaddrinfo: Name or service not known>
W, [2016-02-08T16:39:02.886780 #2617]  WARN -- Synapse::ServiceWatcher::Ec2tagWatcher: ["/usr/local/rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/net/http.rb:879:in `initialize'", "/usr/local/rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/net/http.rb:879:in `open'", "/usr/local/rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/net/http.rb:879:in `block in connect'", "/usr/local/rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/timeout.rb:89:in `block in timeout'", "/usr/local/rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/timeout.rb:99:in `call'", "/usr/local/rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/timeout.rb:99:in `timeout'", "/usr/local/rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/net/http.rb:878:in `connect'", "/usr/local/rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/net/http.rb:863:in `do_start'", "/usr/local/rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/net/http.rb:858:in `start'", "/usr/local/rvm/gems/ruby-2.2.1/gems/aws-sdk-v1-1.66.0/lib/aws/core/http/connection_pool.rb:327:in `start_session'", "/usr/local/rvm/gems/ruby-2.2.1/gems/aws-sdk-v1-1.66.0/lib/aws/core/http/connection_pool.rb:127:in `session_for'", "/usr/local/rvm/gems/ruby-2.2.1/gems/aws-sdk-v1-1.66.0/lib/aws/core/http/net_http_handler.rb:56:in `handle'", "/usr/local/rvm/gems/ruby-2.2.1/gems/aws-sdk-v1-1.66.0/lib/aws/core/client.rb:253:in `block in make_sync_request'", "/usr/local/rvm/gems/ruby-2.2.1/gems/aws-sdk-v1-1.66.0/lib/aws/core/client.rb:289:in `retry_server_errors'", "/usr/local/rvm/gems/ruby-2.2.1/gems/aws-sdk-v1-1.66.0/lib/aws/core/client.rb:249:in `make_sync_request'", "/usr/local/rvm/gems/ruby-2.2.1/gems/aws-sdk-v1-1.66.0/lib/aws/core/client.rb:511:in `block (2 levels) in client_request'", "/usr/local/rvm/gems/ruby-2.2.1/gems/aws-sdk-v1-1.66.0/lib/aws/core/client.rb:391:in `log_client_request'", "/usr/local/rvm/gems/ruby-2.2.1/gems/aws-sdk-v1-1.66.0/lib/aws/core/client.rb:477:in `block in client_request'", "/usr/local/rvm/gems/ruby-2.2.1/gems/aws-sdk-v1-1.66.0/lib/aws/core/client.rb:373:in `return_or_raise'", "/usr/local/rvm/gems/ruby-2.2.1/gems/aws-sdk-v1-1.66.0/lib/aws/core/client.rb:476:in `client_request'", "(eval):3:in `describe_instances'", "/usr/local/rvm/gems/ruby-2.2.1/gems/aws-sdk-v1-1.66.0/lib/aws/ec2/filtered_collection.rb:44:in `filtered_request'", "/usr/local/rvm/gems/ruby-2.2.1/gems/aws-sdk-v1-1.66.0/lib/aws/ec2/instance_collection.rb:318:in `each'", "/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/synapse-00024028b366/lib/synapse/service_watcher/ec2tag.rb:108:in `select'", "/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/synapse-00024028b366/lib/synapse/service_watcher/ec2tag.rb:108:in `instances_with_tags'", "/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/synapse-00024028b366/lib/synapse/service_watcher/ec2tag.rb:86:in `block in discover_instances'", "/usr/local/rvm/gems/ruby-2.2.1/gems/aws-sdk-v1-1.66.0/lib/aws/core.rb:598:in `memoize'", "/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/synapse-00024028b366/lib/synapse/service_watcher/ec2tag.rb:85:in `discover_instances'", "/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/synapse-00024028b366/lib/synapse/service_watcher/ec2tag.rb:63:in `watch'", "/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/synapse-00024028b366/lib/synapse/service_watcher/ec2tag.rb:23:in `block in start'"]

Zookeeper - global configuration from the ENV?

I currently use Synapse with zookeeper, using a path pattern like '/production/services/someservice' etc. What I'd like to be able to do is take the first part of that path (i.e. /production) from the environment. Does this seem like a reasonable / sensible feature to add, and would it be useful to others? For context, I'm hosting stuff in AWS, and would like to combine this with user-data to enable me to move instances (created from AMIs) from "staging" to "production".
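
One possible shape for this, sketched as a wrapper that expands ${VAR} references in discovery paths from the environment before Synapse reads the config (the SYNAPSE_ENV variable and the ${...} syntax are hypothetical, not something Synapse supports today):

# Sketch only: expand ${VAR} references in zookeeper discovery paths.
require 'yaml'

config = YAML.load_file('synapse.conf.yaml')
config['services'].each_value do |service|
  path = service.fetch('discovery', {})['path']
  next unless path
  # e.g. "/${SYNAPSE_ENV}/services/someservice" -> "/production/services/someservice"
  service['discovery']['path'] = path.gsub(/\$\{(\w+)\}/) { ENV.fetch(Regexp.last_match(1)) }
end

File.write('synapse.conf.expanded.yaml', config.to_yaml)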

Why logging to STDERR?

Sorry for posting questions here, but didn't find any other place for questions.

Why do Synapse (and actually Nerve too) log everything to STDERR? I'm not a Ruby specialist and don't know whether this is the common way to do it, but I would like to hear the reasoning behind it. Thanks!

Support for pluggable proxy layer

I was wondering what your views are on making the proxy layer pluggable, which would allow me to use nginx instead of haproxy. I didn't want to start coding prematurely, so I wanted to hear your opinion first.
@igor47

serfdom.io service watcher

Serf is "a decentralized solution for service discovery and orchestration that is lightweight, highly available, and fault tolerant."

http://rubygems.org/gems/kahuna is a Serf library for Ruby. I don't know how mature this library is, but it's version 0.0.1 so my gut tells me "not very". :)

[Enhancement] Support Apache Aurora zookeeper service discovery

Apache Aurora writes information into zookeeper in a slightly different layout than what the zookeeper support in synapse expects.

{
  "additionalEndpoints": {
    "aurora": {"host": "mesos-slave03of2.example.com", "port": 31616},
    "http":   {"host": "mesos-slave03of2.example.com", "port": 31616}
  },
  "serviceEndpoint": {"host": "mesos-slave03of2.example.com", "port": 31616},
  "shard": 1,
  "status": "ALIVE"
}

I'll send a pull request to add the support in a moment.
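
For illustration, deserializing one of these znodes and pulling out the host/port that a Synapse backend needs might look roughly like this (a sketch, not the code from the pull request):

# Sketch only: map Aurora's serverset layout onto Synapse's backend shape.
require 'json'

def aurora_node_to_backend(znode_data)
  node = JSON.parse(znode_data)
  return nil unless node['status'] == 'ALIVE'
  endpoint = node['serviceEndpoint']
  { 'name' => "shard_#{node['shard']}", 'host' => endpoint['host'], 'port' => endpoint['port'] }
end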
