streamline.js

streamline.js is a language tool to simplify asynchronous Javascript programming.

Instead of writing hairy code like:

function archiveOrders(date, cb) {
  db.connect(function(err, conn) {
    if (err) return cb(err);
    conn.query("select * from orders where date < ?", [date], function(err, orders) {
      if (err) return cb(err);
      helper.each(orders, function(order, next) {
        conn.execute("insert into archivedOrders ...", [order.id, ...], function(err) {
          if (err) return cb(err);
          conn.execute("delete from orders where id=?", [order.id], function(err) {
            if (err) return cb(err);
            next();
          });
        });
      }, function() {
        console.log("orders have been archived");
        cb();
      });
    });
  });
}

you write:

function archiveOrders(date, _) {
  var conn = db.connect(_);
  conn.query("select * from orders where date < ?", [date], _).forEach_(_, function(_, order) {
    conn.execute("insert into archivedOrders ...", [order.id, ...], _);
    conn.execute("delete from orders where id=?", [order.id], _);
  });
  console.log("orders have been archived");
}

and streamline transforms the code and takes care of the callbacks!

No flow control APIs to learn! You just have to follow a simple rule:

Replace all callbacks by an underscore and write your code as if all functions were synchronous.

Streamline is not limited to a subset of Javascript. You can use all the features of Javascript in your asynchronous code: conditionals, loops, try/catch/finally blocks, anonymous functions, chaining, this, etc.

Streamline also provides futures and asynchronous variants of the ECMAScript 5 array functions (forEach, map, etc.).

Installation

NPM, of course:

npm install streamline -g

The -g option installs streamline globally. You can also install it locally, without -g, but then the _node and _coffee commands will not be in your default PATH.

Note: If you encounter a permission error when installing on UNIX systems, you should retry with sudo.

If you want to use the fibers option (see below), you must also install the fibers library:

npm install fibers [-g]

Hello World

Streamline modules have ._js or ._coffee extensions and you run them with the _node or _coffee loader.

Javascripters:

$ cat > hello._js
console.log('hello ...');
setTimeout(_, 1000);
console.log('... world');
^D
$ _node hello

Coffeescripters:

$ cat > hello._coffee
console.log 'hello ...'
setTimeout _, 1000
console.log '... world'
^D
$ _coffee hello

You can also create standalone shell utilities:

$ cat > hello.sh
#!/usr/bin/env _node
console.log('hello ...');
setTimeout(_, 1000);
console.log('... world');
^D
$ ./hello.sh

or:

$ cat > hello.sh
#!/usr/bin/env _coffee
console.log 'hello ...'
setTimeout _, 1000
console.log '... world'
^D
$ ./hello.sh

Compiling and writing loaders

You can also set up your code so that it can be run directly with node or coffee. You have two options here:

The first one is to compile your source with _node -c or _coffee -c:

$ _node -c .

This command compiles all the *._js and *._coffee source files in the current directory and its sub-directories. It generates *.js files that you can run directly with node.

The second one is to create your own loader with the register API. See the loader example for details.
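
For example, a sketch of a tiny loader, assuming the package's main module exposes register (the exact require path may differ; the loader example has the canonical version):

// loader.js - registers streamline so that ._js modules can be required directly
require('streamline').register({
  fibers: false, // generate plain callback-based code
  cache: true    // cache transformed files for faster startup
});

require('./server._js'); // a hypothetical streamlined module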

Compiling gives you the fastest startup time because node loads the compiled *.js files directly. The register API has a cache option which comes close, and a loader saves you a compilation pass.

Browser-side use

You have three options to use streamline in the browser:

  • The first one is to compile the source with _node -c. The compiler generates vanilla Javascript code that you can load with <script> directives in an HTML page. See the flows unit test for an example.
  • You can also transform the code in the browser with the transform API. See the streamlineMe example.
  • A third option is to use the streamline-require infrastructure. This is a very efficient browser-side implementation of require that lets you load streamlined modules as well as vanilla Javascript modules in the browser.

Generation options

Streamline gives you the choice between generating regular callback-based asynchronous code, generating code that takes advantage of the fibers library, or generating code for JavaScript generators.

The callback option produces code that does not have any special runtime dependencies.

The fibers option produces simpler code but requires that you install the fibers library (easy: npm install fibers). This option gives a superior development experience: line numbers and comments are preserved in the transformed code, and you can step through asynchronous calls with the debugger without having to navigate complex callbacks.

The fibers option can be activated by passing the --fibers option to the _node command or by setting the fibers option when registering streamline (see the streamline.register(options) function).
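
For example, to run the hello._js script from the Hello World section with the fibers transform:

_node --fibers hello

In a custom loader you would pass { fibers: true } to streamline.register instead.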

The generators option is more experimental (but rather solid, as it passes all unit tests). It produces code that is similar to the fibers option, although slightly more complex. Like the fibers option, it preserves line numbers and comments. This option does not yet work in node.js because V8 does not currently support generators, but it works in Firefox and in luvmonkey. It should work, with minor tweaks, in future versions of V8 that may implement Harmony generators.

Interoperability with standard node.js code

You can call standard node functions from streamline code. For example, the fs.readFile function:

function lineCount(path, _) {
  return fs.readFile(path, "utf8", _).split('\n').length;
}

You can also call streamline functions as if they were standard node functions. For example, the lineCount function defined above can be called as follows from non-streamlined modules:

lineCount("README.md", function(err, result) {
  if (err) return console.error("ERROR: " + err.message);
  console.log("README has " + result + " lines.");
});

And you can mix streamline functions, classic callback-based code and synchronous functions in the same file. Streamline only transforms the functions that have the special _ parameter.

Note: this works with all transformation options. Even if you use the fibers option, you can seamlessly call standard callback based node APIs and the asynchronous functions that you create with streamline have the standard node callback signature.

Futures

Streamline provides futures, a powerful feature that lets you parallelize I/O operations in a very simple manner.

If you omit the callback (or pass a null callback) when calling a streamline function, the call starts the operation and immediately returns a future. The future is just an asynchronous function that you can call later to obtain the result. Here is an example:

function countLines(path, _) {
  return fs.readFile(path, "utf8", _).split('\n').length;
}

function compareLineCounts(path1, path2, _) {
  // parallelize the two countLines operations
  var n1 = countLines(path1);
  var n2 = countLines(path2);
  // get the results and diff them
  return n1(_) - n2(_);
}

In this example, countLines is called twice without the _ parameter. These calls start the asynchronous fs.readFile operations and immediately return two futures (n1 and n2). The return statement retrieves the results with the n1(_) and n2(_) calls and computes their difference.

Futures are very flexible. In the example above, the results are retrieved from the same function, but you can also pass futures to other functions, store them in objects, call them to get the results from a different module, etc. You can also have several readers on the same future.

See the futures wiki page for details.

The flows module contains utilities to deal with futures: flows.collect to wait on an array of futures and flows.funnel to limit the number of concurrent operations.
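
For example, a minimal sketch that uses flows.collect to wait on several futures at once (the require path and the exact collect signature are assumptions; check the flows module documentation):

var flows = require('streamline/lib/util/flows'); // path is an assumption

function totalLines(paths, _) {
  // no callback passed => countLines returns a future and the reads run in parallel
  var futures = paths.map(function(path) { return countLines(path); });
  // wait for all the futures, then sum the line counts
  return flows.collect(_, futures).reduce(function(a, b) { return a + b; }, 0);
}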

Asynchronous Array functions

Streamline extends the Array prototype with asynchronous variants of the ECMAScript 5 forEach, map, filter, reduce, ... functions. These asynchronous variants are postfixed with an underscore and take an extra _ argument (their callback), but they are otherwise similar to the standard ES5 functions. Here is an example with the map_ function:

function dirLines(dir, _) {
  return fs.readdir(dir, _).map_(_, function(_, file) {
    return fs.readFile(dir + '/' + file, 'utf8', _).split('\n').length;
  });
}

Parallelizing loops is easy: just pass the number of parallel operations as the second argument to the call:

function dirLines(dir, _) {
  // process 8 files in parallel
  return fs.readdir(dir, _).map_(_, 8, function(_, file) {
    return fs.readFile(dir + '/' + file, 'utf8', _).split('\n').length;
  });
}

If you don't want to limit the level of parallelism, just pass -1.
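
The other variants follow the same pattern. For example, a sketch with filter_ (nonEmptyFiles is a hypothetical helper; see the builtins documentation for the exact semantics):

function nonEmptyFiles(dir, _) {
  // keep only the files that contain at least one byte
  return fs.readdir(dir, _).filter_(_, function(_, file) {
    return fs.stat(dir + '/' + file, _).size > 0;
  });
}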

See the documentation of the builtins module for details.

Exception Handling

Streamline lets you do your exception handling with the usual try/catch construct. The finally clause is also supported.

Streamline overrides the ex.stack getter to give you the stack of streamline calls rather than the last callback stack. You can still get the native callback stack trace with ex.rawStack.
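
For example, a sketch that reuses the lineCount function defined above:

function safeLineCount(path, _) {
  try {
    return lineCount(path, _);
  } catch (ex) {
    console.error(ex.stack);    // streamline call stack
    console.error(ex.rawStack); // native callback stack
    return 0;
  }
}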

Exception handling also works with futures. If a future throws an exception before you try to read its result, the exception will be memorized by the future and you will get it at the point where you try to read the result. For example:

try {
  var n1 = countLines(badPath);
  var n2 = countLines(goodPath);
  setTimeout(_, 1000); // n1 fails, exception is memorized
  return n1(_) - n2(_); // exception is thrown by n1(_) expression.
} catch (ex) {
  console.error(ex.stack); // exception caught here
}

Stream Wrappers

Streamline also provides stream wrappers that simplify stream programming. The streams module contains:

  • a generic ReadableStream wrapper with an asynchronous stream.read(_[, len]) method.
  • a generic WritableStream wrapper with an asynchronous stream.write(_, buf[, encoding]) method.
  • wrappers for HTTP and TCP request and response objects (client and server).
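
For example, a minimal sketch with the ReadableStream wrapper (the require path, the constructor form and the end-of-stream convention are assumptions; see the streams module documentation):

var streams = require('streamline/lib/streams/streams'); // path is an assumption

function readBody(nodeStream, _) {
  var stream = new streams.ReadableStream(nodeStream);
  var chunks = [], chunk;
  // assumes an encoding has been set so that chunks are strings,
  // and that read returns null/undefined at the end of the stream
  while ((chunk = stream.read(_, 4096)) != null) chunks.push(chunk);
  return chunks.join('');
}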

Examples

The tutorial shows streamline.js in action on a simple search aggregator application.

The diskUsage examples show an asynchronous directory traversal that computes disk usage.

Online demo

You can see how streamline transforms the code by playing with the online demo.

If you are curious, you can also play with the generators demo - Firefox only.

Troubleshooting

Read the FAQ.

If you don't find your answer in the FAQ, post to the mailing list, or file an issue in GitHub's issue tracker.

Related Packages

The following packages use streamline.js:

Resources

The tutorial and FAQ are must-reads for starters.

The API is documented here.

For support and discussion, please join the streamline.js mailing list.

Credits

See the AUTHORS file.

Special thanks to Marcel Laverdet who contributed the fibers implementation.

License

This work is licensed under the MIT license.
