
node-forky's Introduction

node-forky

Forky makes using the cluster module easier without preventing you from using it directly.

Problem: using require('cluster') properly is difficult, error-prone, and hard to test.
Solution:

master.js

var forky = require('forky');
forky({path: __dirname + '/worker.js'});

worker.js

var http = require('http');
http.createServer(function(req, res) {
  res.writeHead(200, {'content-type': 'text/html'});
  res.end('Hello from worker ' + require('cluster').worker.id);
}).listen(3000); // listen on a port so the clustered server accepts connections; 3000 is just an example

bash

$ node master
For a more complete example, please check out the detailed example implementation.

installation

npm install forky

usage

Forky is meant to work with your existing http server without modifying the server.
To take advantage of a multi-core system, create a new file called master.js and require forky:

master.js

var forky = require('forky');

Once this file is created, call forky, passing it the file path to your existing http server (your application's old entry point).

example: if you used to type node index.js to start your application, your master.js file should look like this:

master.js

var forky = require('forky');
forky({path: __dirname + '/index.js'});

Forky will spawn a number of workers equal to the number of CPU cores available on your system.
If one of these workers disconnects for any reason, whether from a process uncaughtException or even a kill -9 <pid> on the worker process, forky will spawn a new worker immediately.
After forky has spawned a new worker it will attempt to gracefully shut down the disconnected worker. If the disconnected worker is still running after a timeout, forky will forcefully kill it.

The best way to handle unexpected errors in node is to shut down your process and spawn a new one. Forky makes clean process shutdown & respawn easy as pie.

Let's implement an http server in node that throws an uncatchable exception.

index.js

var http = require('http');
http.createServer(function(req, res) {
  res.writeHead(200, {'content-type': 'text/html'});
  res.end('everything is groovy');
  setTimeout(function() {
    throw new Error("This will crash your node process");
  }, 1000);
}).listen(3000); // 3000 is just an example port

Now if someone hits our server, one second later the server will crash. The easy but wrong way to handle this is to add a process.on('uncaughtException') handler and keep going as if nothing has happened:

index.js

var http = require('http');
http.createServer(function(req, res) {
  res.writeHead(200, {'content-type': 'text/html'});
  res.end('everything is groovy');
  setTimeout(function() {
    throw new Error("This will crash your node process");
  }, 1000);
}).listen(3000); // 3000 is just an example port

process.on('uncaughtException', function(err) {
  //log the error
  console.error(err);
  //continue on as if nothing has happened...
  //but something HAS happened.  What if the error wasn't in a timeout?
  //what if our error came from somewhere deep down and left some dangling
  //unclosed sockets, connections to a database, or open files?  We could be leaking
  //resources slowly and not even know it. oh no!
});

Instead of doing that, let's use the master.js file we created above and modify our worker to gracefully disconnect and clean up the problems caused by the unexpected error:

index.js

var http = require('http');
http.createServer(function(req, res) {
  res.writeHead(200, {'content-type': 'text/html'});
  res.end('everything is groovy');
  setTimeout(function() {
    throw new Error("This will crash your node process");
  }, 1000);
}).listen(3000); // 3000 is just an example port

process.on('uncaughtException', function(err) {
  //log the error
  console.error(err);
  //let's tell our master we need to be disconnected
  require('forky').disconnect();
  //in a worker process, this will signal the master that something is wrong
  //the master will immediately spawn a new worker
  //and the master will disconnect our server, allowing all existing traffic to end naturally
  //but not allowing this process to accept any new traffic
});

All of the above is to help with graceful shutdowns. Forky doesn't actually need you to signal disconnect from your workers: you can let the exception crash the process, call process.exit(), or do anything else you need to clean up. Once your worker closes, regardless of the reason, forky will spawn a new one.
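
As a minimal sketch, the uncaughtException handler in the index.js above could be replaced with one that simply exits; no forky-specific call is required, since the master notices the worker is gone and forks a replacement:

index.js

process.on('uncaughtException', function(err) {
  //log the error, then exit with a non-zero code
  console.error(err);
  //no call into forky is needed here; once this process exits,
  //the forky master will spawn a replacement worker
  process.exit(1);
});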

Options

Forky can take several options in addition to the file path (a combined example follows this list):

  • path - The path to the file to launch for each worker
  • workers - The number of workers to launch (default: the number of cores)
  • callback - A callback to call when forky has launched the workers
  • enable_logging - Whether to enable forky logging (default: false)
  • kill_timeout - The kill timeout (in milliseconds) to use if a worker does not shut down properly and is not given a timeout when it is told to disconnect (default: 1000)
  • scheduling_policy - The scheduling policy to use for cluster.
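
For example, a master.js combining several of these options might look like the sketch below; the specific values are only illustrative, and the zero-argument callback shown is an assumption rather than a documented signature:

master.js

var forky = require('forky');

forky({
  path: __dirname + '/index.js',
  workers: 4,            // override the default of one worker per core
  enable_logging: true,  // turn on forky's own logging
  kill_timeout: 5000,    // wait 5 seconds before forcefully killing a stuck worker
  callback: function() { // assumed to fire once the workers have been launched
    console.log('all workers launched');
  }
});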

Contributing

I love contributions. If you'd like to contribute a bug fix, send in yer pull requests!

If you want to add a more substantial feature, open an issue and let's discuss it. We can turn that issue into a pull request and get new features added. Open Source Is Awesome. 👍

Due to the race-condition-prone nature of managing a cluster of workers, the tests don't use a test framework; they just batter the hell out of the example server and make sure it never returns an unexpected result. To run the tests, just type make after cloning and running npm install.
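
For a rough idea of what that battering looks like (this is not the actual test code; the port, request count, and retry behavior here are assumptions for illustration):

var http = require('http');

var remaining = 500;   // number of requests - an arbitrary choice for this sketch
var port = 3000;       // assumes the example server above is listening here

var hammer = function() {
  if (remaining-- <= 0) {
    console.log('server never returned an unexpected result');
    return;
  }
  http.get({port: port, path: '/'}, function(res) {
    if (res.statusCode !== 200) {
      throw new Error('unexpected status code: ' + res.statusCode);
    }
    res.resume(); // drain the body so the socket is released
    hammer();
  }).on('error', function() {
    // brief connection errors can happen while a crashed worker is being replaced;
    // this sketch simply retries, a real test would decide how tolerant to be
    setImmediate(hammer);
  });
};

hammer();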

License

Copyright (c) 2013 Brian M. Carlson

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

node-forky's People

Contributors

brianc, dependabot[bot], freewil, jtrautman, sehrope, shehzan10


node-forky's Issues

Error: channel closed

Hey, I'm not sure, but I think there's a little bug in the disconnect function.

When a client repeatedly hits a route that throws an error (that results in disconnecting the worker), I get the following error:

Error: channel closed
18:09:12 web.1  |     at process.target.send (child_process.js:406:26)
18:09:12 web.1  |     at Worker.send (cluster.js:406:21)
18:09:12 web.1  |     at Function.forky.disconnect (./node_modules/forky/index.js:77:12)

When the client hits the route the first time, the worker will get disconnected but the connection to the client remains. If the client hits the route again the worker will try to send another disconnect message to the master. Since the channel is closed, the above error will occur.

In the disconnect function the following lines (I guess) should prevent this behavior:

if(worker.state == 'disconnecting') return;
worker.state = 'disconnecting';

The worker, however, will set its state to disconnected (not disconnecting), which leads the worker to send another message.

Correct me if I'm wrong!?

Maybe the above lines should be changed to:

if(worker.state == 'disconnecting' || worker.state == 'disconnected') return;
worker.state = 'disconnecting';

What do you think?

Best regards!

Be able to set our own number of workers

// please add optional count variable to be able to set a custom number of workers ;-)

forky = module.exports = function (path, cb, count) {
  cluster.setupMaster({
    exec: path
  });
  var cores = count || os.cpus().length;

Add 0.8.x support

worker.kill doesn't exist in 0.8.x of node. Please use worker.destroy instead which is supported in 0.8.x and 0.10.x
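
For reference, a version-agnostic shim along the lines this issue suggests could look like the sketch below; killWorker is only an illustrative helper name, not part of forky:

var killWorker = function(worker, signal) {
  if (typeof worker.kill === 'function') {
    worker.kill(signal);   // use kill where it exists
  } else {
    worker.destroy();      // 0.8.x fallback, as this issue suggests
  }
};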

How to detect error on startup?

Hi @brianc,

Forky is great; we've been using it for two years now. Thank you!

One question: a couple of times now, we've accidentally shipped a bug where we hit a runtime error on startup. Forky sees the process crash and restarts it. But this just repeats ad infinitum.

Because the master Forky process itself doesn't fail, we don't detect any error immediately. The good news is that we do eventually detect a failure after a brief time threshold. (Heroku sees that we haven't bound to the network port after a minute ==> R10 error.)

Just curious: is there any way Forky could detect an error on startup, whether exactly or heuristically? E.g. an error <N seconds from spawn, or >N errors in <M seconds, or similar?

Thanks!
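
One heuristic along those lines could sit in master.js next to forky, since forky doesn't prevent you from using the cluster module directly; the window and crash-count thresholds below are purely illustrative:

master.js

var cluster = require('cluster');
var forky = require('forky');

forky({path: __dirname + '/index.js'});

var CRASH_WINDOW_MS = 5000;   // "crashed too soon after spawn" threshold - illustrative
var MAX_EARLY_CRASHES = 3;    // how many early crashes before giving up - illustrative
var earlyCrashes = 0;

cluster.on('fork', function(worker) {
  worker.forkedAt = Date.now();
});

cluster.on('exit', function(worker, code) {
  if (code !== 0 && Date.now() - worker.forkedAt < CRASH_WINDOW_MS) {
    if (++earlyCrashes >= MAX_EARLY_CRASHES) {
      console.error('workers keep crashing right after startup; shutting the master down');
      process.exit(1);        // let the platform (e.g. Heroku) surface the failure immediately
    }
  } else {
    earlyCrashes = 0;         // the worker ran long enough, reset the counter
  }
});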

Error R14 (Memory quota exceeded)

Hi!

Thanks for this great library.
Unfortunately, while trying to use it on Heroku, the dyno is getting killed because it's exceeding the memory quota. This is the exact log:

2014-09-03T19:55:38.416261+00:00 heroku[web.1]: Process running mem=523M(102.2%)
2014-09-03T19:55:38.416525+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)

It works fine for a while (no error logs), and then it starts exceeding the quota in a loop, until I restart Heroku and everything is back to normal for a while.

app[web.1]: info: <anonymous@{removed}>: HTTP HEAD / name=QI, hostname={removed}, pid=7, time=Wed Sep 03 2014 19:49:31 GMT+0000 (UTC)
heroku[router]: at=info method=HEAD path="/" host=www.qilabs.org request_id={removed} fwd={removed} dyno=web.1 connect=2ms service=15ms status=200 bytes=429
heroku[router]: at=info method=HEAD path="/" host=www.qilabs.org request_id={removed} fwd={removed} dyno=web.1 connect=4ms service=14ms status=200 bytes=428
app[web.1]: info: <anonymous@{removed}>: HTTP HEAD / name=QI, hostname={removed}, pid=6, time=Wed Sep 03 2014 19:49:31 GMT+0000 (UTC)
app[web.1]: info: <anonymous@{removed}>: HTTP HEAD / name=QI, hostname={removed}, pid=5, time=Wed Sep 03 2014 19:49:35 GMT+0000 (UTC)
heroku[router]: at=info method=HEAD path="/" host=www.qilabs.org request_id=39f79216-d71f-40c0-8a59-464325425c1e fwd={removed} dyno=web.1 connect=2ms service=261ms status=200 bytes=429
heroku[web.1]: Process running mem=512M(100.1%)
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[web.1]: Process running mem=512M(100.1%)
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[router]: at=info method=HEAD path="/" host=www.qilabs.org request_id={removed} fwd={removed} dyno=web.1 connect=1ms service=8ms status=200 bytes=430
app[web.1]: info: <anonymous@{removed}>: HTTP HEAD / name=QI, hostname={removed}, pid=4, time=Wed Sep 03 2014 19:50:12 GMT+0000 (UTC)
heroku[router]: at=info method=HEAD path="/" host=www.qilabs.org request_id={removed} fwd={removed} dyno=web.1 connect=0ms service=10ms status=200 bytes=430
app[web.1]: info: <anonymous@{removed}>: HTTP HEAD / name=QI, hostname={removed}, pid=4, time=Wed Sep 03 2014 19:50:24 GMT+0000 (UTC)
heroku[web.1]: Process running mem=512M(100.1%)
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[router]: at=info method=HEAD path="/" host=www.qilabs.org request_id=261f26a7-e5e4-4fba-81a6-ebf427e1257b fwd={removed} dyno=web.1 connect=1ms service=14ms status=200 bytes=429
app[web.1]: info: <anonymous@{removed}>: HTTP HEAD / name=QI, hostname={removed}, pid=4, time=Wed Sep 03 2014 19:50:34 GMT+0000 (UTC)
heroku[web.1]: Process running mem=513M(100.3%)
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[web.1]: Process running mem=513M(100.3%)
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[web.1]: Process running mem=514M(100.5%)
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[web.1]: Process running mem=514M(100.5%)
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[web.1]: Process running mem=514M(100.6%)
heroku[web.1]: Error R14 (Memory quota exceeded)
(and repeat)

(the HEAD requests are newrelic's)

I seem to be following all the steps as indicated in the README. Perhaps the memory leaks are my fault, but I've never seen these messages before using forky. I'll try not using the cluster for a while, and see if the errors still occur.

Thoughts, anyone?

Does not respect `numWorkers` arg when using an object as first arg

master.js

require('forky')({ path: __dirname + '/worker.js' }, 1 );

worker.js

console.log( 'Hello from worker', require('cluster').worker.id );
require('http').createServer( function( req, res ){ } );

node master

Hello from worker 1
Hello from worker 3
Hello from worker 2
Hello from worker 4

If you change the master.js to:

require('forky')( __dirname + '/worker.js', 1 );

it works fine

Controlled shutdown

Forky makes it difficult to perform graceful shutdowns in environments such as Heroku that blast all processes with SIGTERM on shutdown.

Forky has an internal flag, shuttingDown; if it were exported directly or exposed via a setter function so it could be set to true, it could prevent the restart of child processes during exit.

Is there a reason the flag shuttingDown is not exposed and seems to be a de facto constant? It might be a bug.

It's a real easy fix to allow a setter for shuttingDown and I'd put up a PR for that if it's something the admins would agree to.
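
As a sketch of how the proposal might look from an application's master.js (setShuttingDown is the hypothetical setter this issue asks for; it does not exist in forky today):

master.js

var forky = require('forky');

forky({path: __dirname + '/worker.js'});

process.on('SIGTERM', function() {
  // hypothetical setter proposed in this issue - not an existing forky export
  forky.setShuttingDown(true);
  // with the flag set, forky would stop respawning workers while the
  // existing ones wind down, letting the whole process tree exit cleanly
});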
