
horust's People

Contributors

dependabot[bot], esavier, federicoponzi, hanshuebner, tomgranot, tomgs, zicklag

horust's Issues

Per-service resource limits

It would be nice to have another section for allowing per-service resource limits. This is just a draft and would need some more thoughts on it.
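A hypothetical sketch of what such a section could look like. Every field name here is invented for illustration; none of this is an existing Horust feature:

```toml
# Hypothetical [resource-limits] section for a service file (draft idea)
command = "/usr/bin/my-service"

[resource-limits]
memory = "512MB"      # e.g. enforced via cgroups or setrlimit
cpu = 0.5             # fraction of one core
max-open-files = 1024
```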

Terminate all services immediately when the user presses CTRL+C a second time

It is common in CLIs similar to Horust to gracefully stop all services on the first CTRL+C, but to terminate them immediately on subsequent signals.

I'd expect similar behavior in Horust, especially for local-dev environment purposes where graceful-stop is not always necessary.

Of course, I could just prepare another service configuration for that purpose, but I think this is a justified case to implement this enhancement.

I'm willing to dig into signal handling in Horust and prepare a PR if you accept it.
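In code terms, the escalation described above might be sketched like this. This is a minimal, self-contained sketch; the type and function names are invented for illustration and are not Horust's actual API:

```rust
// Hypothetical sketch: escalate from graceful to forceful shutdown
// on repeated interrupts. Names are illustrative, not Horust's real types.

#[derive(Debug, PartialEq)]
enum Action {
    GracefulStop, // first CTRL+C: send SIGTERM, honor configured wait times
    ForceKill,    // any further CTRL+C: kill remaining services immediately
}

// Decide what to do given how many interrupts we have seen so far.
fn action_for_interrupt(interrupts_seen: u32) -> Action {
    if interrupts_seen <= 1 {
        Action::GracefulStop
    } else {
        Action::ForceKill
    }
}

fn main() {
    assert_eq!(action_for_interrupt(1), Action::GracefulStop);
    assert_eq!(action_for_interrupt(2), Action::ForceKill);
    println!("escalation logic behaves as expected");
}
```

The real implementation would keep the counter in the signal-handling path and translate `ForceKill` into a SIGKILL to the remaining children.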

Serde build error

There is an issue with upstream serde 1.0.119; the build works with serde = { version = "=1.0.118", features = ["derive"] }.

Horust is using serde::export and this module was renamed in serde 1.0.119.
problematic files:
src/horust/formats/service.rs
src/lib.rs
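Until the incompatibility is fixed, pinning serde in Cargo.toml is a workaround (the exact-version requirement is taken from the report above):

```toml
# Cargo.toml: pin serde to the last release that still exposes serde::export
[dependencies]
serde = { version = "=1.0.118", features = ["derive"] }
```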

Memory leak in healthcheck

First of all, I want to thank you for writing an excellent init system for containers. Horust finds a fantastic balance of simplicity and functionality.

Version Information:

Description:

There appears to be a memory leak caused by health checks: a copy of an object describing the service is created every check interval. Additionally, disabling the health check feature flag does not resolve the issue, as the code is not completely removed. To work around the issue I have manually patched out the checking code (I am not proficient with Rust, so I am not confident raising a PR for this).

Support custom service termination order

Similar to how services are started in a specific order (using 'start-after'), would it be possible to shut down services in a specific order as well? Maybe even just in the reverse of the order the services were started in?

My use case is that I run a VPN (Tailscale), a logs collector (Grafana Agent) and an API in the same container. When a shutdown occurs, ideally the API would shut down first, then the logs collector, then the VPN. But currently, since the signal is sent to all services at once, the order is fairly random.
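A hedged sketch of how this could look in a service file, assuming a hypothetical stop-before key mirroring start-after (the key does not exist today):

```toml
# api.toml (sketch): hypothetical mirror of start-after for shutdown ordering
command = "/usr/bin/api"
start-after = ["grafana-agent.toml"]

[termination]
# invented key: shut this service down before the listed ones
stop-before = ["grafana-agent.toml"]
```

Alternatively, reversing the start order at shutdown would cover this use case without any new configuration.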

Proxy signals when running in single command mode

In single command mode, horust is run like this:

cargo run -- -- /bin/bash

If started without Horust, bash handles SIGINT signals (CTRL+C) itself. If run via Horust and we press CTRL+C, Horust will intercept it and send a SIGTERM to bash, which will then stop.
When running in single command mode, Horust should just proxy the signals and reap child processes.

Failed to compile master on MacOS

Version Information:

  • Version: 0.11.1

Macbook running Catalina 10.15.5, rustc 1.44.1 (c7087fe00 2020-06-17)

Description:

Compiling fails, giving:

 Compiling horust v0.1.1 (/Users/t0mgs/IdeaProjects/Horust)
error[E0432]: unresolved imports `libc::prctl`, `libc::PR_SET_CHILD_SUBREAPER`

src/horust/mod.rs:14:12
   |
14 | use libc::{prctl, PR_SET_CHILD_SUBREAPER};
   |            ^^^^^  ^^^^^^^^^^^^^^^^^^^^^^ no `PR_SET_CHILD_SUBREAPER` in the root
   |            |
   |            no `prctl` in the root

error[E0425]: cannot find function `execvpe` in module `nix::unistd`
   --> src/horust/runtime/process_spawner.rs:149:18
    |
149 |     nix::unistd::execvpe(program_name.as_ref(), arg_cptr.as_ref(), env_cptr.as_ref())?;
    |                  ^^^^^^^ help: a function with a similar name exists: `execve`
    | 
   ::: /Users/t0mgs/.cargo/registry/src/github.com-1ecc6299db9ec823/nix-0.16.1/src/unistd.rs:734:1
    |
734 | pub fn execve(path: &CStr, args: &[&CStr], env: &[&CStr]) -> Result<Void> {
    | ------------------------------------------------------------------------- similarly named function `execve` defined here

error: aborting due to 2 previous errors

Some errors have detailed explanations: E0425, E0432.
For more information about an error, try `rustc --explain E0425`.
error: could not compile `horust`.

Haven't actually figured out what these deps are, just wanted to persist this if anyone else has the same issue.
@FedericoPonzi gettin' late round these parts, if you can take a look that would be great.

Verbose logging

I'm using Horust to start Apache2, which is by itself a very quiet service - even with the "debug" log level, it doesn't show when it started up. It would be great if there was a -v flag to Horust that would cause it to output messages like the following ones:

  • Started service "apache2"
  • Service "apache2" is now healthy
  • Service "php-fpm" failed with strategy "shutdown", stopping all services
  • Stopped service apache2

This would make the log output way more useful, as it would also log information that the service processes themselves can't know.

[environment] keep-env = true

keep-env = bool: default: true. Pass over all the environment variables.
https://federicoponzi.github.io/Horust/

The default appears to be keep-env = false. The environment variables were not passed to the command until I added:

[environment]
keep-env = true

/etc/horust/services/azure-functions-host.toml:

command = "dotnet /azure-functions-host/Microsoft.Azure.WebJobs.Script.WebHost.dll"
[restart]
strategy = "on-failure"
backoff = "5s"
attempts = 10
[failure]
strategy = "shutdown"

Allow using current directory as config root

I would like to use Horust as a development tool. That means, for example, keeping a bunch of Horust configs inside the repo and, instead of meddling with Docker, using Horust to start and manage several applications that work in tandem.

  • How the feature should work.
    I would love to have a flag specifying the current directory as the directory to search for config files. Given the following structure in, for example, /home/user/apps:
    bin/service1 bin/service2 cfg/service1.toml cfg/service2.toml
    I would like to be able to do something like horust --here to start a (hopefully non-daemonized) Horust instance that starts and runs the applications configured by Horust's cfg/*.toml configs. The console output would be the Horust log, while application outputs would follow the rules inside the configs.

The use case would be, for example, testing a bunch of services or applications that have to work together.
The part of Horust that manages the applications and keeps them under supervision would be great for that purpose.

Socket activated service

I'm not sure how much this is used in the wild, but it looks like something interesting to program.

Permission fixer scripts

Something similar to: https://github.com/just-containers/s6-overlay#fixing-ownership--permissions

Usually when you start a Docker container, you want to fix permissions on the volumes (so it needs to happen at runtime and not at build time). Without this feature, you would need to create a service which runs once and won't be restarted, which is a bit ugly because services are long-running processes, and one-shot services are not suitable for being supervised.

The idea is to provide a format similar to s6-overlay's.

This is suitable if you want to avoid spawning another process and thus achieve faster startups.
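As a stopgap, the one-shot-service workaround described above might look like this sketch (the service name, paths, and the chown invocation are invented for illustration; the restart keys match the Restart section spec elsewhere in this tracker):

```toml
# fix-perms.toml (sketch): one-shot permission fixer, never restarted
command = "chown -R app:app /data"

[restart]
strategy = "never"
```

Other services could then declare start-after = ["fix-perms.toml"] so they only start once permissions are fixed.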

Add Verification for service/config file

Problem:
A service specifies a particular user in its service file, let's say UserA.
For root this service file is fine, since root can setuid() to any other user easily without the user's assistance, so running a user's stuff as root should never be a problem.
However, running the service as any other user, let's say UserB, will fail with a very cryptic error code. So, let's fix that.

Proposition:
Let's introduce a precheck stage, where each loaded file is checked for logic issues (since serde already handles syntax). In that stage we would check whether the specified directories/files exist, whether the user can setuid to the specified user, whether the specified program is runnable and has the correct permissions, etc.
That would also allow us to implement --create-if-absent (this switch's name is a placeholder for the sake of example), which would create directories, for example, for logs.
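A minimal sketch of what such a precheck could look like, using only the standard library. The function signature and the checks chosen are invented for illustration, not Horust's actual validation code:

```rust
use std::path::Path;

// Hypothetical precheck sketch: validate a service definition for logic
// errors before spawning anything. Field names are illustrative.
fn precheck(command: &str, working_directory: &str) -> Vec<String> {
    let mut problems = Vec::new();
    // The first token of `command` should point at an existing program;
    // a full resolver would also walk $PATH for bare program names.
    if let Some(program) = command.split_whitespace().next() {
        if program.contains('/') && !Path::new(program).exists() {
            problems.push(format!("program not found: {}", program));
        }
    } else {
        problems.push("command is empty".to_string());
    }
    if !Path::new(working_directory).is_dir() {
        problems.push(format!("working directory missing: {}", working_directory));
    }
    problems
}

fn main() {
    // "/" always exists; "/bin/sh" is present on virtually every Unix.
    assert!(precheck("/bin/sh -c ls", "/").is_empty());
    assert!(!precheck("/no/such/binary", "/definitely-missing").is_empty());
    println!("prechecks behave as expected");
}
```

Collecting all problems into a Vec (rather than failing on the first one) lets Horust report every logic issue in a file at once.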

What is this for, exactly?

@FedericoPonzi While the general concept of an init system is clear to most everyone who's ever dealt with a *NIX system, I'm not exactly sure whether a container init system is something most people spend time thinking about.

To be clearer - why do I need systemd inside a container? Don't containers already have systemd? Is Horust a replacement for systemd, even? How does it stack up against other tools? Can it replace something I have in my stack now, even in its alpha phase?

I'm not saying that Horust should be anything re the above, I'm saying that outside of the immediate people who you and I talked to about the project, it's unclear exactly what it's doing here and what it... well, is.

Let's dream for a sec: if Horust gets big, what would it look like? Who would use it? What would it replace?

Improve reliability by removing heap allocations

In Rust, when we try to allocate memory, the default allocator panics on failure. In order to make Horust more resilient to those edge cases, it would be nice to get rid of all the heap allocations.
I'm not sure how to tackle this; some research will be needed to figure out the best approach.

Horust hangs forever when it receives SIGTERM before all services are up and running

Version Information:

  • Version: v0.1.3

First noticed the bug on my modified branch (see PR-draft #57), however, the same bug occurs on fresh master and when my friend installed horust via cargo install.

  • OS: Linux
  • CPU: x64

Description:

Scenario:

  • I start horust services
  • Not all services are running, at least one is still "Starting",
  • I have to quickly kill them all (because for example I made a mistake in configuration and I don't want to wait till service goes up)

Then Horust terminates all already-running services, but waits forever for the late service, which never changes its state from Starting to Started.

It means that in the main loop Horust checks whether all services are Finished or Failed, but this one service just waits forever in the Starting state. I guess for some reason it never receives the SpawnFailed event?

The quickest way to test it is to add start-delay = "2s" in service.toml.
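Per the description, a minimal reproduction service file needs only a start-delay long enough to send SIGTERM while the service is still Starting (the command itself is arbitrary):

```toml
# service.toml (repro sketch): keep the service in Starting for 2 seconds
command = "top -b"
start-delay = "2s"
```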

I made some eprintln! and tried to debug it:

    Finished dev [unoptimized + debuginfo] target(s) in 0.19s
[2021-04-29T07:51:21Z INFO  horust] Loading services from directories:
    * ./core
    * ./extra
[2021-04-29T07:51:21Z INFO  horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:21Z INFO  horust::horust::supervisor] Applying events... [
    Run("1.toml"), Run("2.toml"), Run("3.toml"), Run("4.toml"), Run("5.toml"), Run("6.toml"), Run("7.toml"), Run("8.toml"), Run("9.toml"), Run("10.toml")
]
[2021-04-29T07:51:22Z INFO  horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:22Z INFO  horust::horust::supervisor] Applying events... []
^C[2021-04-29T07:51:22Z INFO  horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:22Z WARN  horust::horust::supervisor] 1. SIGTERM received
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: SpawnFailed("6.toml")
[2021-04-29T07:51:22Z INFO  horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:22Z WARN  horust::horust::supervisor] 1. SIGTERM received
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: SpawnFailed("7.toml")
[2021-04-29T07:51:23Z INFO  horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:23Z WARN  horust::horust::supervisor] 1. SIGTERM received
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: SpawnFailed("8.toml")
[2021-04-29T07:51:23Z INFO  horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:23Z WARN  horust::horust::supervisor] 1. SIGTERM received
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: SpawnFailed("4.toml")
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: PidChanged("2.toml", Pid(66623))
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: PidChanged("5.toml", Pid(66624))
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: PidChanged("1.toml", Pid(66625))
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: PidChanged("10.toml", Pid(66626))
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: PidChanged("3.toml", Pid(66627))
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: PidChanged("9.toml", Pid(66628))
[2021-04-29T07:51:23Z INFO  horust::horust::supervisor] Applying events... [
    PidChanged("2.toml", Pid(66623)),
    PidChanged("5.toml", Pid(66624)),
    PidChanged("1.toml", Pid(66625)),
    PidChanged("10.toml", Pid(66626)),
    PidChanged("3.toml", Pid(66627)),
    PidChanged("9.toml", Pid(66628))
]
[2021-04-29T07:51:23Z WARN  horust::horust::supervisor] 1. SIGTERM received
PID CHANGED: "2.toml" - 66623
PID CHANGED: "5.toml" - 66624
PID CHANGED: "1.toml" - 66625
PID CHANGED: "10.toml" - 66626
PID CHANGED: "3.toml" - 66627
PID CHANGED: "9.toml" - 66628
[2021-04-29T07:51:24Z INFO  horust::horust::supervisor] Applying events... [
    ShuttingDownInitiated(Gracefuly),
    StatusChanged("2.toml", Started),
    StatusChanged("5.toml", Started),
    StatusChanged("1.toml", Started),
    StatusChanged("10.toml", Started),
    StatusChanged("3.toml", Started),
    StatusChanged("9.toml", Started)
]
[2021-04-29T07:51:24Z WARN  horust::horust::supervisor] 1. SIGTERM received
[2021-04-29T07:51:24Z WARN  horust::horust::supervisor] Gracefully stopping...
[2021-04-29T07:51:24Z INFO  horust::horust::supervisor] Applying events... [
    ShuttingDownInitiated(Gracefuly),
    StatusUpdate("1.toml", InKilling),
    Kill("1.toml"),
    StatusUpdate("2.toml", InKilling),
    Kill("2.toml"),
    StatusUpdate("3.toml", InKilling),
    Kill("3.toml"),
    StatusUpdate("5.toml", InKilling),
    Kill("5.toml"),
    StatusUpdate("11.toml", Finished),
    StatusUpdate("9.toml", InKilling),
    Kill("9.toml"),
    StatusUpdate("10.toml", InKilling),
    Kill("10.toml")
]
[2021-04-29T07:51:24Z WARN  horust::horust::supervisor] Gracefully stopping...
[2021-04-29T07:51:24Z INFO  horust::horust::supervisor] Applying events... [
    StatusChanged("1.toml", InKilling),
    StatusChanged("2.toml", InKilling),
    StatusChanged("3.toml", InKilling),
    StatusChanged("5.toml", InKilling),
    StatusChanged("11.toml", Finished),
    StatusChanged("9.toml", InKilling),
    StatusChanged("10.toml", InKilling)
]
[2021-04-29T07:51:25Z INFO  horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:25Z INFO  horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:25Z INFO  horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:25Z INFO  horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:26Z INFO  horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:26Z INFO  horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:26Z INFO  horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:27Z INFO  horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:27Z INFO  horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:27Z INFO  horust::horust::supervisor] Applying events... [
    ForceKill("1.toml"),
    ForceKill("2.toml"),
    ForceKill("3.toml"),
    ForceKill("5.toml"),
    ForceKill("9.toml"),
    ForceKill("10.toml")
]
[2021-04-29T07:51:28Z INFO  horust::horust::supervisor] Applying events... [
    StatusChanged("1.toml", Failed),
    StatusChanged("2.toml", Failed),
    StatusChanged("3.toml", Failed),
    StatusChanged("5.toml", Failed),
    StatusChanged("9.toml", Failed),
    StatusChanged("10.toml", Failed),
    StatusUpdate("1.toml", FinishedFailed),
    StatusUpdate("2.toml", FinishedFailed),
    StatusUpdate("3.toml", FinishedFailed),
    StatusUpdate("5.toml", FinishedFailed),
    StatusUpdate("9.toml", FinishedFailed),
    StatusUpdate("10.toml", FinishedFailed)
]
[2021-04-29T07:51:28Z INFO  horust::horust::supervisor] Applying events... [
    StatusChanged("1.toml", FinishedFailed),
    StatusChanged("2.toml", FinishedFailed),
    StatusChanged("3.toml", FinishedFailed),
    StatusChanged("5.toml", FinishedFailed),
    StatusChanged("9.toml", FinishedFailed),
    StatusChanged("10.toml", FinishedFailed)
]
[2021-04-29T07:51:28Z INFO  horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:28Z INFO  horust::horust::supervisor] Applying events... []
At this point there are no new events, forever (the loop keeps logging an empty "Applying events... []").

My modifications:

  • Added Graceful/Forceful stop - but as I wrote, it isn't the cause, as the bug also occurs on the fresh main branch.
  • Added the SPAWN_FOR_EXEC_HANDLER_LOOP eprintln! in process_spawner just after the loop ends (before bus.send_event(ev)). Thanks to that, we can see that the SpawnFailed event is created!
  • I added two extra eprintln!s in supervisor/mod.rs::handle_event() to see if we ever process SpawnFailed or PidChanged. One is processed, the other isn't.
  • I also changed the log level for "Applying events" to info -> we can see that the SpawnFailed event is never received!
  • But PidChanged is received. Why?

Extra conclusion:
What's interesting: one service ("11.toml") had start-delay = 10s instead of 2s, and it had start-after = ["8.toml"].
So Horust was correctly waiting for 8.toml to go up, but 8.toml hung in Starting (its SpawnFailed was never processed); therefore 11.toml was still in the Initial state, so service_handler.rs correctly changed its status to Finished.

The only way to kill Horust then is killall -9 horust.

Why horuseye?

Any connections to an occultist society?

Am I the only person who worries when Egyptian mythological deities get involved in a Linux init system?

What about allusions to sects like the Illuminati and the Eye of Horus, who want to enslave the world with Bill Gates and Klaus Schwab and Elon Musk?

CI cargo fmt warnings

Consider the output in here, under the cargo fmt --check stage:

/usr/share/rust/.cargo/bin/cargo fmt --all -- --check
Warning: can't set `indent_style = Block`, unstable features are only available in nightly channel.
Warning: can't set `wrap_comments = false`, unstable features are only available in nightly channel.
Warning: can't set `format_code_in_doc_comments = false`, unstable features are only available in nightly channel.
Warning: can't set `comment_width = 80`, unstable features are only available in nightly channel.
Warning: can't set `normalize_comments = false`, unstable features are only available in nightly channel.
Warning: can't set `normalize_doc_attributes = false`, unstable features are only available in nightly channel.
Warning: can't set `license_template_path = ""`, unstable features are only available in nightly channel.
Warning: can't set `format_strings = false`, unstable features are only available in nightly channel.
Warning: can't set `format_macro_matchers = false`, unstable features are only available in nightly channel.
Warning: can't set `format_macro_bodies = true`, unstable features are only available in nightly channel.
Warning: can't set `empty_item_single_line = true`, unstable features are only available in nightly channel.
...

This looks similar to rust-lang/rustfmt#2227, needs a bit more investigation.

TODO:

  • Once this is fixed, ensure src/dummy.rs that was added in #41 is removed.

Implement Restart section

The strategy is already there, but backoff and attempts are still missing:

  • strategy = always|on-failure|never: Defines the restart strategy.
  • backoff = string: Use this time before retrying restarting the service.
  • attempts = number: How many attempts before considering the service as Failed.
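Putting the three keys together, a restart section following the spec above would look like this (values are illustrative):

```toml
[restart]
strategy = "on-failure"  # always | on-failure | never
backoff = "10s"          # wait this long before retrying the service
attempts = 3             # failures before considering the service as Failed
```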

List of shutdown signals is incomplete

Version Information:

  • Version: 0.1.2 (built from latest master at 5dba1c2)
  • Alpine Linux 3.13 container running on Docker

Description:

I was trying to shut down Apache2 gracefully by sending it a "SIGWINCH" (as for whatever reason that's the signal they decided on for that), but I'm getting the following error when setting signal = "WINCH" as it's not part of the list of allowed signals:

Failed loading toml file: /etc/horust/services/apache2.toml
    
    Caused by:
        unknown variant `WINCH`, expected one of `TERM`, `HUP`, `INT`, `QUIT`, `USR1`, `USR2` for key `termination.signal` at line 10 column 1

Maybe it would be possible to also allow an integer for the signal option, or to add all available signals there?
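What the termination section could look like under either proposal; neither form is currently accepted:

```toml
# apache2.toml (sketch): both forms below are proposals, not current syntax
[termination]
signal = "WINCH"   # proposal 1: extend the enum with all POSIX signals
# signal = 28      # proposal 2: also accept raw numbers (SIGWINCH is 28 on Linux/x86)
```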

Segfault When Building For `x86_64-unknown-linux-musl`

Version Information:

  • Version: b1c5d7d
  • uname -a: Linux pop-os 5.13.0-7614-generic #14~1631647151~20.04~930e87c-Ubuntu SMP Fri Sep 17 00:26:31 UTC x86_64 x86_64 x86_64 GNU/Linux

Description:

Hey there! I'm trying to use horust as an init system for an experimental tiny operating system and I tried to build against the musl C library to keep things small and dependency-less, but I'm running into a segfault when starting the program. The segfault happens immediately with no extra info:

➜ horust
fish: “horust” terminated by signal SIGSEGV (Address boundary error)

Default working directory for services is not useful

Description:

Horust currently uses "/" as the working directory for services that have no working-directory specified. It would be more useful if the working directory defaulted to the working directory of the Horust process. That way, when not running in a container, it would be easier to reference e.g. configuration files passed to services as relative paths (i.e. command = "foo -c foo.conf"). Before coming up with a PR, I would like to know whether it was a conscious decision to default to "/".

Option for setting threshold for failed healthchecks

This issue is for fixing this todo: https://github.com/FedericoPonzi/Horust/blob/master/src/horust/healthcheck/mod.rs#L48

Right now, if more than 2 healthchecks in a row fail, the service will be killed. This might be too aggressive for some programs, and not aggressive enough for others.
It would be nice to have a parameter in the healthiness section of the config for setting the maximum number of failed checks in a row before the service is killed.
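A sketch of the proposed knob; max-failed is an invented name, and the check shown above it is only illustrative:

```toml
[healthiness]
http-endpoint = "http://localhost:8080/healthz"  # illustrative check
# proposed parameter: consecutive failed checks before the service is killed
max-failed = 5
```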

Run specified services only

Currently, there are two ways to run something:

  • <command>: run single command
  • --services-path: run all services from a path

But it's missing something like --services-file or --service-configuration to run specified services only. My use case: I build a "self-contained" image with frontend, backend and auxiliary services. On the test system all services should run, but for production only the backend and auxiliary services are relevant. I can work around it by creating multiple "services-path" directories, but it would be nice to just specify multiple service configurations.

Or probably better, if --services-path could accept wildcards like:

  • --services-path /dir/*: all services inside /dir. Maybe also same as --services-path /dir to maintain compatibility
  • --services-path /dir/**: all services inside /dir and children
  • --services-path /dir/backend.toml: only backend service

WDYT?

Add support for "die-if-failed" parameter

If you have service-a.toml, and service-b.toml which both start-after db.toml, and db.toml specifies as failure strategy "kill-dependencies", both service-a and service-b will be killed.
This will kill both, despite the fact that service-b might be able to survive and keep working by relying on a cache or something.
This issue proposes adding a new "die-if-failed" parameter under the termination section.
My first thought was to add a depends parameter, but that is too generic because:

  1. it overlaps with the concept of start-after
  2. if a depends on b, and b exits with a successful code, should we kill service a? Not sure.

As a side note, I don't like the naming. It sort of implies the existence of die-if-*. But I don't see the use case for dying after other statuses for now. If there is some demand, we can think of having a generic die with an array of service_name : status.
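A sketch of the proposed parameter from service-b's point of view (die-if-failed is the name proposed above, not existing syntax):

```toml
# service-b.toml (sketch)
command = "/usr/bin/service-b"
start-after = ["db.toml"]

[termination]
# proposed: die only when db.toml actually fails, not when it exits cleanly
die-if-failed = ["db.toml"]
```

service-a would carry the same key, while a service that can survive on a cache would simply omit it.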

Create missing config directories and files on startup

Simple feature, but I don't know how to approach it from the Rust standpoint.

Technically, it would be awesome to create the default config file if it's missing. This also entails creating the default directories. It could be done at setup time, but since Horust is installed via cargo, cargo install just copies the executable to the bin location.

Right now, without it, Horust just bails out with ENOENT.

This can be done either on startup or at installation:

  • dump the default config to a file if it does not exist (/etc/horust/horust.toml)
  • create the directory tree (/etc/horust/xxx)

Command based healthchecks

It would be nice to let the user use custom commands when doing healthchecks. A command returning 0 would indicate a healthy service, unhealthy otherwise.
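A sketch of what a command-based check could look like; the command key under healthiness is the proposal itself, not existing syntax, and the pg_isready invocation is just an example:

```toml
[healthiness]
# proposed: exit code 0 = healthy, anything else = unhealthy
command = "pg_isready -h localhost -p 5432"
```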

Usability as a crate?

I currently have a rust program that starts another via Popen. I need to do better than that, like restart on crash with backoff, et cetera. Not difficult, but I'm sure there are edge cases I'd miss if I reinvented the wheel. Horust looks like it's solved a lot of this stuff, so it'd be nice to use it.

After a 30-second glance at docs.rs and the code, the sort of thing I'd be looking for would be a way to instantiate Horust with a Service or a list of Services rather than a service file, and an interface to restart and/or reload a service.

Is this the sort of thing that'd be easy for me to add to Horust, or should I look elsewhere?

Force death if any service is incorrect

Add a Horust parameter to force death if any service is incorrect.
This is useful if you have a container with Horust and a bunch of services, and you want to bring everything down if Horust cannot spin up some process.
Fail fast and loudly.

Templating service files from ENV

Intro

Okay, as a user I would like to be able to use variables inside service.toml files.
From my understanding there is no possibility to do that out of the box. The setup I have depends on various things, e.g. where the user keeps their repo, what the user's username is, etc.

In the field:

Imagine the following service file:

....
command = "../../repos/service/target/debug/service"
user = "esavier"
....

I would love to be able to modify it to look like this:

....
command = "${WORKSPACE}/target/${TARGET}/service"
user = "${USER}"
....

of course this is the variable format used by bash, and it's only here to provide an example and a background.
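A minimal sketch of ${VAR} substitution over a service file's strings, using only the standard library. This is purely illustrative: a real implementation would likely reach for an existing crate and handle escaping, defaults, and error reporting:

```rust
use std::env;

// Sketch: expand ${VAR} occurrences using the process environment.
// Unset variables expand to the empty string in this toy version.
fn expand_vars(input: &str) -> String {
    let mut out = String::new();
    let mut rest = input;
    while let Some(start) = rest.find("${") {
        out.push_str(&rest[..start]);
        if let Some(end) = rest[start..].find('}') {
            let name = &rest[start + 2..start + end];
            out.push_str(&env::var(name).unwrap_or_default());
            rest = &rest[start + end + 1..];
        } else {
            // No closing brace: keep the remainder verbatim.
            out.push_str(&rest[start..]);
            rest = "";
        }
    }
    out.push_str(rest);
    out
}

fn main() {
    env::set_var("WORKSPACE", "/home/esavier/repos/service");
    assert_eq!(
        expand_vars("${WORKSPACE}/target/debug/service"),
        "/home/esavier/repos/service/target/debug/service"
    );
    println!("expansion behaves as expected");
}
```

Running the expansion after deserialization (on the already-parsed string fields) would avoid interfering with TOML syntax itself.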

What would I like to achieve?

Flexibility: right now I have to go through some steps to ensure that the path exists (like symlinking, bleh).

Use pthread_atfork to make the runtime fork safe

During the run of the chained stress test, I've found a funny bug.
After the fork(), using strace, I found the process stuck on a futex() syscall; it wasn't even printing the debug line.
A futex is a thread-level locking mechanism, and according to this:

In POSIX when a multithreaded process forks, the child process looks exactly like a copy of the parent, but in which all the threads stopped dead in their tracks and disappeared.

and:

This is very bad if the threads are holding locks.
Since stdout access is synchronized across threads, we should register fork handlers via the pthread_atfork call.

Avoiding prints should be enough for now, but we should use pthread_atfork to be sure that the code is safe.

Create `horustctl` for controlling horust via cli

As a normal User,

  • I can use horustctl to check the status of the services
  • I can use horustctl to start and stop the services which are being run as myself
  • After I've updated a service, I can use horustctl to tell Horust to pick up the new service.

As a root User,

  • I can use horustctl to check the status of the services, and start/stop all of them.

Improve readability on the option "start-after"

Just as the definition of the option implies, it means to start after these other services; what people expect here is the service name. Therefore, this option should be filled with the service name instead of the configuration filename.
For example, if we have two configuration files, another.toml and second.toml, we could use: start-after = ["another", "second"].

Services do not restart when killed

I pulled the latest code from master and built it.

Here is the configuration I used:

command = "top -b"
working-directory = "/tmp/"
start-delay = "0s"
[restart]
strategy = "always"

Then I ran horust and killed the top process, and waited... but top did not restart.

Allow Horust to load services from multiple directories

When I run my services under Horust supervision, I often split them under different categories. For example, "Services that are core for my stack" or "Extra services that are optional".

These services often need to run in parallel, both "Extra" ones and "Core", because one depends on another. I don't want to run ALL services by providing a single root directory (for example, in one scenario I don't want to run "Another Extra service pack").

For now, I was just using two different instances of Horust to do the job. One for core and another for dependent services.

But do I have to?

I could (I guess; to be honest, I have never tried) use symlinks and link core services to dependent services.

Or I could extend Horust to accept more than one directory path, merge fetched services into one vector and pass it to validation, and then create one single instance of Horust to rule them all :)

docker-compose can read multiple docker-compose.yml files by accepting multiple flags: -f docker-compose-1.yml -f docker-compose-2.yml, etc.

I think (thanks to StructOpt) it is trivial to implement such behavior in Horust.
It wouldn't be a breaking change (Horust would still accept a single argument), and it would make my life much easier.

Example:

horust --service-path ./services/core --service-path ./services/extra
