ochothon's People

Contributors

gitter-badger, lmok, pferrot, stphung-adsk

ochothon's Issues

Clarify the ochopod proxy IP that is being used

Would it make sense for an ochopod proxy specified as a command-line argument to override the OCHOPOD_PROXY environment variable?

I was trying to connect to an ochopod proxy by running ocho cli <ip>, but it kept using the proxy from my environment and I had no idea why for a while. It would definitely be great to clarify this.

➜  ochothon git:(master) ✗ ocho cli <ip removed>
internal failure <- No JSON object could be decoded
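A minimal sketch of the precedence being asked for: an IP passed on the command line should win over the OCHOPOD_PROXY environment variable. The function name and argument handling below are hypothetical, not ochothon's actual CLI code:

```python
import os

# Hypothetical sketch: resolve the proxy to talk to, letting an
# explicit CLI argument take priority over the environment variable.
def resolve_proxy(argv):
    if len(argv) > 1:
        return argv[1]                       # explicit CLI argument wins
    return os.environ.get('OCHOPOD_PROXY')   # otherwise fall back to env
```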

Best way to pass settings to app environment

The recommended way to pass settings is to call json.loads(os.environ['pod']) and return the result as the second argument from configure. However, when I try this the values cannot be used, because json.loads() yields unicode strings (and native ints) while the process environment requires plain strings. I'm not sure whether this is supposed to work; if not, what is the best way to pass settings data to the application as environment variables? Thanks!

Settings:
settings:
  test: 1
  x: y
  foo: bar
Sample output:
  2015-06-04 22:44:57,400 - INFO - {u'test': 1, u'x': u'y', u'foo': u'bar'}
  2015-06-04 22:44:57,405 - WARNING - lifecycle (piped process) : failed to configure -> ..n2.7/subprocess.py (1327) -> TypeError (execve() arg 3 contains a non-string value), shutting down
  2015-06-04 22:44:57,405 - INFO - lifecycle (piped process) : finalizing pod
  2015-06-04 22:44:57,419 - WARNING - model (reactive) : configuration failed -> ..models/reactive.py (371) -> AssertionError (1+ pods failed to configure)

Allow external and container port to be specified

Currently there are two options:

  • Choose the external and container port to be the same, using syntax such as "9000 *"
  • Choose the container port only, using syntax such as "9000"

It would be nice to be able to specify both the external port and the container port independently. This would allow reuse of the image, since images typically launch applications on a static port. Our use case is to run the same image on N statically defined ports.
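A sketch of how a spec covering all three forms might be parsed. This is a hypothetical extension ("9000 8000" for container plus explicit external port), not existing ochothon syntax:

```python
# Hypothetical parser for a port spec supporting three forms:
#   "9000"      -> container port only (external assigned dynamically)
#   "9000 *"    -> external port equals the container port
#   "9000 8000" -> container port 9000 exposed on external port 8000
def parse_port(spec):
    parts = spec.split()
    container = int(parts[0])
    if len(parts) == 1:
        return container, None
    external = container if parts[1] == '*' else int(parts[1])
    return container, external
```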

Dependent containers stop on first scale up via Marathon UI

In my use case I have an application tier that an haproxy tier depends on. Right now I follow the sequence below:

  • deploy app (1)
  • deploy haproxy (1)

At this point, if I use the Marathon UI to scale the app tier up to 2 nodes, I would expect the current app container to stay running, the new container to start up, and haproxy to reconfigure itself. What actually happens is that the current app container shuts down (so both app containers are down at the same time), then both reconfigure themselves and eventually come up. At the haproxy level, it reconfigures itself once while both app containers are down and then again when they are both up.

Below is sample log output from my haproxy node showing this behavior with the 1.0.1 pod image. In this example the image just runs the sleep command, since I was trying to reduce the problem to a minimal case.

52.26.198.211 > log -l marathon.haproxy-a
<marathon.haproxy-a> -> 100% replies (1 pods total) ->
- marathon.haproxy-a #16

  2015-06-17 22:06:42,183 - DEBUG - environment ->
    MESOS_TASK_ID -> ochopod.marathon.haproxy-a-2015-06-17-22-06-33.20def88d-153d-11e5-9f69-56847afe9799
    ochopod_debug -> true
    DEBIAN_FRONTEND -> noninteractive
    MESOS_SANDBOX -> /mnt/mesos/sandbox
    PORTS -> 23,9001
    PORT -> 23
    SUPERVISOR_PROCESS_NAME -> stack
    HOST -> ip-10-0-7-238.us-west-2.compute.internal
    SUPERVISOR_GROUP_NAME -> stack
    ochopod_namespace -> marathon
    PATH -> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    pod -> {}
    ochopod_start -> true
    PORT_9000 -> 9001
    MARATHON_APP_ID -> /ochopod.marathon.haproxy-a-2015-06-17-22-06-33
    SUPERVISOR_SERVER_URL -> http://127.0.0.1:8081
    ochopod_port -> 8080
    ochopod_local -> false
    HOSTNAME -> eb2f374935e4
    SUPERVISOR_ENABLED -> 1
    PORT_8080 -> 23
    PWD -> /
    ochopod_application ->
    MARATHON_APP_VERSION -> 2015-06-17T22:06:33.323Z
    ochopod_cluster -> haproxy-a
    PORT0 -> 23
    HOME -> /root
    ochopod_task ->
    PORT1 -> 9001
  2015-06-17 22:06:42,207 - INFO - starting marathon.haproxy-a (marathon/ec2) @ i-420753b5
  2015-06-17 22:06:42,209 - DEBUG - coordinator : connecting @ leader.mesos:2181
  2015-06-17 22:06:42,221 - DEBUG - coordinator : zk state change -> CONNECTED (disconnected)
  2015-06-17 22:06:42,231 - DEBUG - coordinator : registered as 306f3343-ba13-4d1b-b4b3-92a845fe7cbb (#16)
  2015-06-17 22:06:42,235 - DEBUG - coordinator : lock acquired @ /ochopod/clusters/marathon.haproxy-a, now leading
  2015-06-17 22:06:42,239 - DEBUG - model (reactive) : watching 1 dependencies
  2015-06-17 22:06:42,240 - INFO - model (reactive) : leading for cluster marathon.haproxy-a
  2015-06-17 22:06:42,241 - DEBUG - watcher (marathon.hello-ochopod-a) : change detected in dependency
  2015-06-17 22:06:43,242 - INFO - model (reactive) : hash changed, configuration in 5.0 seconds
  2015-06-17 22:06:43,243 - DEBUG - model (reactive) : hash -> 56:bc:4d:84:a9:4d:ca:34:77:5c:e7:2c:26:28:ed:b7
  2015-06-17 22:06:43,243 - DEBUG - model (reactive) : configuration in 5.0 seconds
  2015-06-17 22:06:44,245 - DEBUG - model (reactive) : configuration in 4.0 seconds
  2015-06-17 22:06:45,245 - DEBUG - model (reactive) : configuration in 3.0 seconds
  2015-06-17 22:06:46,246 - DEBUG - model (reactive) : configuration in 2.0 seconds
  2015-06-17 22:06:47,250 - DEBUG - model (reactive) : configuration in 1.0 seconds
  2015-06-17 22:06:48,252 - INFO - model (reactive) : configuring (1 pods, i/o port 8080)
  2015-06-17 22:06:48,252 - DEBUG - control -> http://10.0.7.238:23/control/check/60
  2015-06-17 22:06:48,252 - DEBUG - model (reactive) : -> /control/check (1 pods)
  2015-06-17 22:06:48,265 - DEBUG - http in -> /control/check
  2015-06-17 22:06:49,233 - DEBUG - http out -> HTTP 200 (0 ms)
  2015-06-17 22:06:49,235 - DEBUG - control <- http://10.0.7.238:23/control/check/60 (HTTP 200)
  2015-06-17 22:06:49,235 - DEBUG - model (reactive) : json payload ->
  {
      "dependencies": {
          "hello-ochopod-a": {
              "9f92f458-947d-4673-99df-786f0c2a835d": {
                  "node": "i-420753b5",
                  "application": "ochopod.marathon.hello-ochopod-a-2015-06-17-22-06-03",
                  "task": "ochopod.marathon.hello-ochopod-a-2015-06-17-22-06-03.0b653aac-153d-11e5-9f69-56847afe9799",
                  "seq": 38,
                  "fwk": "marathon-ec2",
                  "ip": "10.0.7.238",
                  "zk": "leader.mesos:2181",
                  "namespace": "marathon",
                  "public": "52.26.45.205",
                  "cluster": "hello-ochopod-a",
                  "start": "true",
                  "debug": "false",
                  "local": "false",
                  "port": "8080",
                  "ports": {
                      "8080": 1,
                      "9000": 2
                  }
              }
          }
      },
      "pods": {
          "306f3343-ba13-4d1b-b4b3-92a845fe7cbb": {
              "node": "i-420753b5",
              "task": "ochopod.marathon.haproxy-a-2015-06-17-22-06-33.20def88d-153d-11e5-9f69-56847afe9799",
              "seq": 16,
              "zk": "leader.mesos:2181",
              "ip": "10.0.7.238",
              "fwk": "marathon-ec2",
              "namespace": "marathon",
              "start": "true",
              "port": "8080",
              "cluster": "haproxy-a",
              "application": "ochopod.marathon.haproxy-a-2015-06-17-22-06-33",
              "debug": "true",
              "local": "false",
              "public": "52.26.45.205",
              "ports": {
                  "8080": 23,
                  "9000": 9001
              }
          }
      }
  }
  2015-06-17 22:06:49,236 - INFO - model (reactive) : asking 1 pods to configure
  2015-06-17 22:06:49,236 - DEBUG - control -> http://10.0.7.238:23/control/on/60
  2015-06-17 22:06:49,236 - DEBUG - model (reactive) : -> /control/on (1 pods)
  2015-06-17 22:06:49,239 - DEBUG - http in -> /control/on
  2015-06-17 22:06:50,218 - INFO - lifecycle (piped process) : initializing pod
  2015-06-17 22:06:50,218 - INFO - lifecycle (piped process) : configuring pod 1/1
  2015-06-17 22:06:50,219 - INFO - endpoints [u'10.0.7.238:2']
  2015-06-17 22:06:50,221 - INFO - lifecycle (piped process) : popen() #1 -> started <sleep 100000000000000> as pid 68
  2015-06-17 22:06:50,256 - DEBUG - http out -> HTTP 200 (1 ms)
  2015-06-17 22:06:50,257 - DEBUG - control <- http://10.0.7.238:23/control/on/60 (HTTP 200)
  2015-06-17 22:06:50,258 - DEBUG - control -> http://10.0.7.238:23/control/ok/60
  2015-06-17 22:06:50,258 - DEBUG - model (reactive) : -> /control/ok (1 pods)
  2015-06-17 22:06:50,261 - DEBUG - http in -> /control/ok
  2015-06-17 22:06:51,223 - DEBUG - lifecycle (piped process) : cluster has been formed, invoking configured()
  2015-06-17 22:06:51,228 - DEBUG - http out -> HTTP 200 (0 ms)
  2015-06-17 22:06:51,230 - DEBUG - control <- http://10.0.7.238:23/control/ok/60 (HTTP 200)
  2015-06-17 22:06:51,234 - DEBUG - model (reactive) : new hash -> 56:bc:4d:84:a9:4d:ca:34:77:5c:e7:2c:26:28:ed:b7
  2015-06-17 22:06:51,234 - INFO - model (reactive) : configuration complete (1 pods alive)
  2015-06-17 22:07:42,292 - DEBUG - lifecycle (piped process) : running the sanity-check (pid 68)
  2015-06-17 22:08:42,375 - DEBUG - lifecycle (piped process) : running the sanity-check (pid 68)
  2015-06-17 22:09:42,451 - DEBUG - lifecycle (piped process) : running the sanity-check (pid 68)
  2015-06-17 22:10:42,540 - DEBUG - lifecycle (piped process) : running the sanity-check (pid 68)
  2015-06-17 22:11:42,623 - DEBUG - lifecycle (piped process) : running the sanity-check (pid 68)
  2015-06-17 22:12:42,709 - DEBUG - lifecycle (piped process) : running the sanity-check (pid 68)
  2015-06-17 22:13:42,792 - DEBUG - lifecycle (piped process) : running the sanity-check (pid 68)
  2015-06-17 22:14:42,873 - DEBUG - lifecycle (piped process) : running the sanity-check (pid 68)
  2015-06-17 22:15:42,951 - DEBUG - lifecycle (piped process) : running the sanity-check (pid 68)
  2015-06-17 22:16:02,976 - DEBUG - watcher (marathon.hello-ochopod-a) : change detected in dependency
  2015-06-17 22:16:03,978 - INFO - model (reactive) : hash changed, configuration in 5.0 seconds
  2015-06-17 22:16:03,978 - DEBUG - model (reactive) : hash -> 8f:7c:ae:35:a6:49:56:82:d8:32:aa:42:2c:db:04:fe
  2015-06-17 22:16:03,978 - DEBUG - model (reactive) : configuration in 5.0 seconds
  2015-06-17 22:16:04,979 - DEBUG - model (reactive) : configuration in 4.0 seconds
  2015-06-17 22:16:05,981 - DEBUG - model (reactive) : configuration in 3.0 seconds
  2015-06-17 22:16:06,982 - DEBUG - model (reactive) : configuration in 2.0 seconds
  2015-06-17 22:16:07,984 - DEBUG - model (reactive) : configuration in 1.0 seconds
  2015-06-17 22:16:08,985 - INFO - model (reactive) : configuring (1 pods, i/o port 8080)
  2015-06-17 22:16:08,986 - DEBUG - control -> http://10.0.7.238:23/control/check/60
  2015-06-17 22:16:08,986 - DEBUG - model (reactive) : -> /control/check (1 pods)
  2015-06-17 22:16:08,989 - DEBUG - http in -> /control/check
  2015-06-17 22:16:10,007 - DEBUG - http out -> HTTP 200 (1 ms)
  2015-06-17 22:16:10,009 - DEBUG - control <- http://10.0.7.238:23/control/check/60 (HTTP 200)
  2015-06-17 22:16:10,009 - DEBUG - model (reactive) : json payload ->
  {
      "dependencies": {
          "hello-ochopod-a": {}
      },
      "pods": {
          "306f3343-ba13-4d1b-b4b3-92a845fe7cbb": {
              "node": "i-420753b5",
              "task": "ochopod.marathon.haproxy-a-2015-06-17-22-06-33.20def88d-153d-11e5-9f69-56847afe9799",
              "seq": 16,
              "zk": "leader.mesos:2181",
              "ip": "10.0.7.238",
              "fwk": "marathon-ec2",
              "namespace": "marathon",
              "start": "true",
              "port": "8080",
              "cluster": "haproxy-a",
              "application": "ochopod.marathon.haproxy-a-2015-06-17-22-06-33",
              "debug": "true",
              "local": "false",
              "public": "52.26.45.205",
              "ports": {
                  "8080": 23,
                  "9000": 9001
              }
          }
      }
  }
  2015-06-17 22:16:10,009 - INFO - model (reactive) : asking 1 pods to configure
  2015-06-17 22:16:10,010 - DEBUG - control -> http://10.0.7.238:23/control/on/60
  2015-06-17 22:16:10,010 - DEBUG - model (reactive) : -> /control/on (1 pods)
  2015-06-17 22:16:10,012 - DEBUG - http in -> /control/on
  2015-06-17 22:16:10,986 - INFO - lifecycle (piped process) : tearing down process 68
  2015-06-17 22:16:10,989 - DEBUG - watcher (marathon.hello-ochopod-a) : change detected in dependency
  2015-06-17 22:16:11,988 - DEBUG - lifecycle (piped process) : pid 68 terminated in 1 seconds
  2015-06-17 22:16:11,988 - INFO - lifecycle (piped process) : configuring pod 1/1
  2015-06-17 22:16:11,989 - INFO - endpoints ['']
  2015-06-17 22:16:11,991 - INFO - lifecycle (piped process) : popen() #2 -> started <sleep 100000000000000> as pid 2406
  2015-06-17 22:16:12,033 - DEBUG - http out -> HTTP 200 (2 ms)
  2015-06-17 22:16:12,035 - DEBUG - control <- http://10.0.7.238:23/control/on/60 (HTTP 200)
  2015-06-17 22:16:12,036 - DEBUG - control -> http://10.0.7.238:23/control/ok/60
  2015-06-17 22:16:12,036 - DEBUG - model (reactive) : -> /control/ok (1 pods)
  2015-06-17 22:16:12,039 - DEBUG - http in -> /control/ok
  2015-06-17 22:16:12,993 - DEBUG - lifecycle (piped process) : cluster has been formed, invoking configured()
  2015-06-17 22:16:13,006 - DEBUG - http out -> HTTP 200 (0 ms)
  2015-06-17 22:16:13,007 - DEBUG - control <- http://10.0.7.238:23/control/ok/60 (HTTP 200)
  2015-06-17 22:16:13,012 - DEBUG - model (reactive) : new hash -> 8f:7c:ae:35:a6:49:56:82:d8:32:aa:42:2c:db:04:fe
  2015-06-17 22:16:13,012 - INFO - model (reactive) : configuration complete (1 pods alive)
  2015-06-17 22:16:14,014 - INFO - model (reactive) : hash changed, configuration in 5.0 seconds
  2015-06-17 22:16:14,014 - DEBUG - model (reactive) : hash -> 46:12:3d:47:e7:13:3b:00:71:7b:49:cd:40:ac:bf:bb
  2015-06-17 22:16:14,015 - DEBUG - model (reactive) : configuration in 5.0 seconds
  2015-06-17 22:16:15,016 - DEBUG - model (reactive) : configuration in 4.0 seconds
  2015-06-17 22:16:16,018 - DEBUG - model (reactive) : configuration in 3.0 seconds
  2015-06-17 22:16:17,019 - DEBUG - model (reactive) : configuration in 2.0 seconds
  2015-06-17 22:16:18,021 - DEBUG - model (reactive) : configuration in 1.0 seconds
  2015-06-17 22:16:19,022 - INFO - model (reactive) : configuring (1 pods, i/o port 8080)
  2015-06-17 22:16:19,023 - DEBUG - control -> http://10.0.7.238:23/control/check/60
  2015-06-17 22:16:19,023 - DEBUG - model (reactive) : -> /control/check (1 pods)
  2015-06-17 22:16:19,026 - DEBUG - http in -> /control/check
  2015-06-17 22:16:20,043 - DEBUG - http out -> HTTP 200 (1 ms)
  2015-06-17 22:16:20,045 - DEBUG - control <- http://10.0.7.238:23/control/check/60 (HTTP 200)
  2015-06-17 22:16:20,045 - DEBUG - model (reactive) : json payload ->
  {
      "dependencies": {
          "hello-ochopod-a": {
              "a329b902-0592-452c-bd0a-70f5f6d81b92": {
                  "node": "i-aa06525d",
                  "application": "ochopod.marathon.hello-ochopod-a-2015-06-17-22-06-03",
                  "task": "ochopod.marathon.hello-ochopod-a-2015-06-17-22-06-03.4f55d93e-153e-11e5-9f69-56847afe9799",
                  "seq": 40,
                  "fwk": "marathon-ec2",
                  "ip": "10.0.3.15",
                  "zk": "leader.mesos:2181",
                  "namespace": "marathon",
                  "public": "",
                  "cluster": "hello-ochopod-a",
                  "start": "true",
                  "debug": "false",
                  "local": "false",
                  "port": "8080",
                  "ports": {
                      "8080": 3889,
                      "9000": 3890
                  }
              },
              "c7105994-9f49-40f7-b493-7c461f9444ca": {
                  "node": "i-ab06525c",
                  "application": "ochopod.marathon.hello-ochopod-a-2015-06-17-22-06-03",
                  "task": "ochopod.marathon.hello-ochopod-a-2015-06-17-22-06-03.4f56004f-153e-11e5-9f69-56847afe9799",
                  "seq": 39,
                  "fwk": "marathon-ec2",
                  "ip": "10.0.3.16",
                  "zk": "leader.mesos:2181",
                  "namespace": "marathon",
                  "public": "",
                  "cluster": "hello-ochopod-a",
                  "start": "true",
                  "debug": "false",
                  "local": "false",
                  "port": "8080",
                  "ports": {
                      "8080": 3889,
                      "9000": 3890
                  }
              }
          }
      },
      "pods": {
          "306f3343-ba13-4d1b-b4b3-92a845fe7cbb": {
              "node": "i-420753b5",
              "task": "ochopod.marathon.haproxy-a-2015-06-17-22-06-33.20def88d-153d-11e5-9f69-56847afe9799",
              "seq": 16,
              "zk": "leader.mesos:2181",
              "ip": "10.0.7.238",
              "fwk": "marathon-ec2",
              "namespace": "marathon",
              "start": "true",
              "port": "8080",
              "cluster": "haproxy-a",
              "application": "ochopod.marathon.haproxy-a-2015-06-17-22-06-33",
              "debug": "true",
              "local": "false",
              "public": "52.26.45.205",
              "ports": {
                  "8080": 23,
                  "9000": 9001
              }
          }
      }
  }
  2015-06-17 22:16:20,046 - INFO - model (reactive) : asking 1 pods to configure
  2015-06-17 22:16:20,046 - DEBUG - control -> http://10.0.7.238:23/control/on/60
  2015-06-17 22:16:20,046 - DEBUG - model (reactive) : -> /control/on (1 pods)
  2015-06-17 22:16:20,048 - DEBUG - http in -> /control/on
  2015-06-17 22:16:21,004 - INFO - lifecycle (piped process) : tearing down process 2406
  2015-06-17 22:16:22,005 - DEBUG - lifecycle (piped process) : pid 2406 terminated in 1 seconds
  2015-06-17 22:16:22,006 - INFO - lifecycle (piped process) : configuring pod 1/1
  2015-06-17 22:16:22,006 - INFO - endpoints [u'10.0.3.15:3890', u'10.0.3.16:3890']
  2015-06-17 22:16:22,008 - INFO - lifecycle (piped process) : popen() #3 -> started <sleep 100000000000000> as pid 2450
  2015-06-17 22:16:22,018 - DEBUG - http out -> HTTP 200 (1 ms)
  2015-06-17 22:16:22,020 - DEBUG - control <- http://10.0.7.238:23/control/on/60 (HTTP 200)
  2015-06-17 22:16:22,021 - DEBUG - control -> http://10.0.7.238:23/control/ok/60
  2015-06-17 22:16:22,022 - DEBUG - model (reactive) : -> /control/ok (1 pods)
  2015-06-17 22:16:22,025 - DEBUG - http in -> /control/ok
  2015-06-17 22:16:23,010 - DEBUG - lifecycle (piped process) : cluster has been formed, invoking configured()
  2015-06-17 22:16:23,043 - DEBUG - http out -> HTTP 200 (1 ms)
  2015-06-17 22:16:23,044 - DEBUG - control <- http://10.0.7.238:23/control/ok/60 (HTTP 200)
  2015-06-17 22:16:23,049 - DEBUG - model (reactive) : new hash -> 46:12:3d:47:e7:13:3b:00:71:7b:49:cd:40:ac:bf:bb
  2015-06-17 22:16:23,049 - INFO - model (reactive) : configuration complete (1 pods alive)
  2015-06-17 22:16:43,035 - DEBUG - lifecycle (piped process) : running the sanity-check (pid 2450)


52.26.198.211 >

Dependency graph

It would be nice if Ochothon could generate a dependency graph of the running pods, similar to what build tools like Maven or SBT provide for library dependencies.

E.g. say I have a reverse proxy (my-reverse-proxy) which depends on 2 load balancers (lb-app-1 and lb-app-2), and those 2 load balancers depend on their respective applications, app-1 and app-2. Say you have three instances of app-1 and two instances of app-2. Then the graph could look like this:

my-reverse-proxy
+- lb-app-1
    +- app-1 (3)
+- lb-app-2
    +- app-2 (2)

I started looking into this the other day. A library like NetworkX could be leveraged and it would be fairly easy to do, except that Ochopod does not seem to expose pod dependencies when queried through the /info API, so that would need to be implemented first.
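Assuming /info did expose each cluster's dependencies, the ASCII tree above could be produced with a simple depth-first walk (NetworkX would also work, but plain dicts suffice for this output). The cluster names and instance counts below are the hypothetical ones from the example:

```python
# Hypothetical dependency data as it might come back from an extended
# /info API: cluster -> list of downstream clusters, plus instance counts.
deps = {
    'my-reverse-proxy': ['lb-app-1', 'lb-app-2'],
    'lb-app-1': ['app-1'],
    'lb-app-2': ['app-2'],
}
counts = {'app-1': 3, 'app-2': 2}

def render(cluster, depth=0):
    # Depth-first walk emitting one indented line per cluster.
    pad = '' if not depth else '    ' * (depth - 1) + '+- '
    suffix = ' (%d)' % counts[cluster] if cluster in counts else ''
    lines = [pad + cluster + suffix]
    for child in deps.get(cluster, []):
        lines += render(child, depth + 1)
    return lines

print('\n'.join(render('my-reverse-proxy')))
```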

Anyway, this is certainly a nice-to-have. But then again, nice-to-haves are... nice to have :-)

deploy (unknown command)

For some ochothon projects, for instance marathon-ec2-flask-sample, the deploy command does not work (unknown command) if you are inside the project directory:

# Inside project
$ pwd
./marathon-ec2-flask-sample
$ ocho cli ocho-proxy
welcome to the ocho CLI ! (CTRL-C or exit to get out)
ocho-proxy > deploy
unknown command (available commands -> bump, deploy, exec, grep, kill, log, ls, off, on, poll, port, reset, scale)
ocho-proxy > exit

# Outside project
$ cd ..
$ ocho cli ocho-proxy
welcome to the ocho CLI ! (CTRL-C or exit to get out)
ocho-proxy > deploy
error: too few arguments
usage: deploy [-h] [-j] [-n NAMESPACE] [-o OVERRIDES [OVERRIDES ...]]
              [-p PODS] [-r RELEASE] [-s SUFFIX] [-t TIMEOUT] [--strict] [-d]
              containers [containers ...]

Need to be able to (re-)configure the sub-process running within an ochopod instance

I was thinking of a feature that would be great to have in ochothon: the ability to pass parameters to selected pods in order to re-configure the underlying application.

My use case: an ochopod cluster running a number of instances of my scala+akka+spray application (a web service). Sometimes I need to change configuration properties (currently in application.conf). It would be great if I could do this from ochothon: no need to redeploy the pod, just pass some configuration values (specific to your application) and the sub-process is restarted if needed.

The configuration should ideally be persisted so that it is still used if the pods are destroyed and re-created. Note that not all pods in a given cluster will use the same configuration (i.e. the configuration is instance-specific, not cluster-specific).
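A hedged sketch of the merge semantics such a feature implies: instance-specific overrides persisted somewhere durable (ZooKeeper, say) are layered on top of the cluster-wide settings before the sub-process is (re-)started. The merged_settings() function and its load_overrides callback are assumptions, not existing ochopod API:

```python
# Hypothetical merge of persisted per-pod overrides on top of the
# cluster-wide settings. load_overrides(pod_id) is an assumed callback
# that returns this pod's persisted overrides, or None if there are none.
def merged_settings(cluster_settings, pod_id, load_overrides):
    overrides = load_overrides(pod_id) or {}
    merged = dict(cluster_settings)
    merged.update(overrides)   # instance-specific values win
    return merged
```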
