
jackrabbit's Introduction

Jackrabbit


RabbitMQ in Node.js without hating life.

Simple Example

producer.js:

var jackrabbit = require('jackrabbit');
var rabbit = jackrabbit(process.env.RABBIT_URL);

rabbit
  .default()
  .publish('Hello World!', { key: 'hello' })
  .on('drain', rabbit.close);

consumer.js:

var jackrabbit = require('jackrabbit');
var rabbit = jackrabbit(process.env.RABBIT_URL);

rabbit
  .default()
  .queue({ name: 'hello' })
  .consume(onMessage, { noAck: true });

function onMessage(data) {
  console.log('received:', data);
}

Setting Queue Arguments

rabbit
  .default()
  .queue({ name: 'hello', durable: true, arguments: { 'x-expires': 420000 } })

Other supported arguments (a combined example follows this list):

  • x-max-length
  • x-max-length-bytes
  • x-overflow
  • x-dead-letter-exchange
  • x-dead-letter-routing-key
  • x-max-priority
  • x-queue-mode
  • x-queue-master-locator
  • x-message-ttl
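
For illustration, here is a sketch combining a few of the arguments above; the queue name 'jobs' and the dead-letter target 'jobs.dead' are made up, while the option names follow the arguments field shown above:

var jackrabbit = require('jackrabbit');
var rabbit = jackrabbit(process.env.RABBIT_URL);

rabbit
  .default()
  .queue({
    name: 'jobs',
    durable: true,
    arguments: {
      'x-max-length': 10000,                   // keep at most 10,000 messages
      'x-overflow': 'reject-publish',          // reject new publishes when the queue is full
      'x-message-ttl': 60000,                  // expire messages after 60 seconds
      'x-dead-letter-exchange': '',            // dead-letter through the default exchange...
      'x-dead-letter-routing-key': 'jobs.dead' // ...into a queue named 'jobs.dead'
    }
  });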

Ack/Nack Consumer Example

var jackrabbit = require('jackrabbit');
var rabbit = jackrabbit(process.env.RABBIT_URL);

rabbit
  .default()
  .queue({ name: 'important_job' })
  .consume(function(data, ack, nack, msg) {

    // process data...
    // and ACK on success
    ack();

    // or alternatively NACK on failure
    // NOTE: this will requeue automatically
    nack();

    // or, if you want to nack without requeue:
    nack({
      requeue: false
    });
  })

Jackrabbit is designed for simplicity and an easy API. If you're an AMQP expert and want more power and flexibility, check out Rabbot.

More Examples

For now, the best usage help can be found in the examples, which map 1-to-1 with the official RabbitMQ tutorials.

Installation

npm install --save jackrabbit

Tests

The tests are set up with Docker and Docker Compose, so you don't need to install RabbitMQ (or even Node) to run them:

$ docker-compose run jackrabbit npm test

If using Docker Machine on OS X:

$ docker-machine start
$ eval "$(docker-machine env default)"
$ docker-compose run jackrabbit npm test

Release

Releases should be tagged according to Semantic Versioning.

Process:

  • Add release notes to releases.md
  • Commit and push the release notes: git commit releases.md && git push origin master
  • Release it: ./node_modules/release-it/bin/release-it.js

jackrabbit's People

Contributors

akamaozu, dependabot[bot], despossivel, dotbr, gmixo, hunterloftis, jaertgeerts, matmar10, matthewmueller, naartjie, panman, pwmckenna, quesodev, shurakaisoft, wiktorwojcikowski


jackrabbit's Issues

Add example with PDF file transfer

Hi,

It would help immensely if you would post an example of Jackrabbit with client and server, where the client requests and eventually receives a PDF (vs. a "Hello World" string).

Trial and error to figure out the magical winning code is taking a long time and there's not much documented, so.... pretty please? ;-)

Amanda
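
Not an official example, but one hedged approach: jackrabbit's publish/consume shown above works with plain objects, so a small file can be sent inline as a base64 string (for large files you would more likely pass a URL or storage key instead). The file name and the 'pdfs' queue below are made up:

// send-pdf.js -- a minimal sketch, assuming the PDF is small enough to send inline
var fs = require('fs');
var jackrabbit = require('jackrabbit');
var rabbit = jackrabbit(process.env.RABBIT_URL);

var pdf = fs.readFileSync('./invoice.pdf'); // hypothetical file

rabbit
  .default()
  .publish({ filename: 'invoice.pdf', data: pdf.toString('base64') }, { key: 'pdfs' })
  .on('drain', rabbit.close);

// receive-pdf.js -- decodes the payload back into a Buffer and writes it to disk
var fs = require('fs');
var jackrabbit = require('jackrabbit');
var rabbit = jackrabbit(process.env.RABBIT_URL);

rabbit
  .default()
  .queue({ name: 'pdfs' })
  .consume(function (payload, ack) {
    fs.writeFileSync('received-' + payload.filename, Buffer.from(payload.data, 'base64'));
    ack();
  });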

Publisher on publish event/hook

Assume you have a publisher that wants to send a message to a work queue, and this publisher is only concerned about the success of the publishing and nothing more. Is this achievable?

I have seen an example hooking into the drain event, but that event is only emitted when the number of pending publishes drops to 0.

Code example:

rabbitBroker.publish(message, { key: 'somekey' });
rabbitBroker.on('publish', function() { resolve() });

Ignore / purge queue

I know this feature used to be available, but can't find how to. Is it just closing the queue?

publish without consuming?

I'm trying to use this library to consume from a worker queue and then publish to another queue name.

var queue = jackrabbit( url )
queue.create( 'scrapes', scrapeHandler)
queue.create( 'scrapes.out' )

function scrapeHandler(job, ack) {
    var transformedWork = foo(job)
    queue.publish('scrapes.out',transformedWork)
    ack()
}

But the created 'scrapes.out' queue consumes the messages, even though I haven't given it a handler function. I've tried using ignore, but I get 'Field 'consumerTag' is the wrong type;'.

Can you please advise?
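
With the newer API shown in the README above, a queue that is only declared (and never given a consume handler) should not pull messages off the broker. A hedged sketch of the same flow, where url and foo come from the snippet above:

var jackrabbit = require('jackrabbit');
var exchange = jackrabbit(url).default();

// Declare the output queue, but never consume from it.
exchange.queue({ name: 'scrapes.out' });

exchange
  .queue({ name: 'scrapes' })
  .consume(function (job, ack) {
    var transformedWork = foo(job);
    exchange.publish(transformedWork, { key: 'scrapes.out' });
    ack();
  });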

Update amqplib to v0.4.0

Hi.
Can we update to the latest amqplib version? It adds a lot of fixes and support for Node v.4.x.x LTS.

how can i ack broken messages

Hi, I have a worker that is synchronous, i.e. it processes one message at a time. Sometimes messages are inserted corrupted (or appear to be corrupted) and amqp throws an error. How can I catch these errors and then ack the message, so I can move on to the next one? At the moment the thrown error kills my Node process; forever restarts it, and it tries to process the corrupted message again.

events.js:85
throw er; // Unhandled 'error' event

Error: Channel closed by server: 406 (PRECONDITION-FAILED) with message "PRECONDITION_FAILED - unknown delivery tag 3"
at Channel.C.accept (/opt/jetmobile/testing/JetMobPoller/node_modules/jackrabbit/node_modules/amqplib/lib/channel.js:406:17)
at Connection.mainAccept [as accept] (/opt/jetmobile/testing/JetMobPoller/node_modules/jackrabbit/node_modules/amqplib/lib/connection.js:63:33)
at Socket.go (/opt/jetmobile/testing/JetMobPoller/node_modules/jackrabbit/node_modules/amqplib/lib/connection.js:474:48)
at Socket.emit (events.js:104:17)
at emitReadable_ (_stream_readable.js:424:10)
at emitReadable (_stream_readable.js:418:7)
at readableAddChunk (_stream_readable.js:174:11)
at Socket.Readable.push (_stream_readable.js:126:10)
at TCP.onread (net.js:538:20)
error: Forever detected script exited with code: 1

My code looks like this. If I use noAck: true, then the processing is no longer synchronous:

rabbitmq
    .default()
    .queue({ name: queue, durable: false, messageTtl: undefined })
    .consume(onRun,{ noAck: false });

function onRun(data, ack) {
    //do stuff
    //after stuff is done
    ack();
}
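
One hedged workaround, not an official answer: keep noAck: false, but make the handler defensive so a message that fails during processing is still acknowledged (or nacked without requeue) instead of crashing the process. This only covers errors thrown inside your own handler; attaching an 'error' listener to the jackrabbit instance also prevents unhandled 'error' events from killing the process. Sketch:

rabbitmq
    .default()
    .queue({ name: queue, durable: false })
    .consume(onRun, { noAck: false });

function onRun(data, ack, nack) {
    try {
        // do stuff with data...
        ack();
    } catch (err) {
        console.error('dropping corrupted message:', err);
        nack({ requeue: false }); // don't requeue, so the broken message isn't retried forever
    }
}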

Changelog?

Hi.
Is there a changelog where the API changes can be seen?
Best.

getting the size of the queue

I haven't dug too deep into the internals, but I'm curious how difficult it would be to add support for checking the number of 'ready' and 'unacked' messages on a queue.

Support for Delayed Messaging

For a while people have looked for ways of implementing delayed messaging with RabbitMQ. So far the accepted solution was to use a mix of message TTL and Dead Letter Exchanges, as proposed by James Carr here. For some time we have thought about offering an out-of-the-box solution for this, and this past month we had the time to implement it as a plugin. Enter the RabbitMQ Delayed Message Plugin.

Support for this would be great. Hunter I know you're a busy man, just logging the thought!
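
Until the plugin is supported directly, the TTL + dead-letter workaround mentioned above can be expressed with the arguments option from "Setting Queue Arguments". A hedged sketch (queue names are made up; note that per-queue TTL only expires messages from the head of the queue, so it suits a uniform delay):

var jackrabbit = require('jackrabbit');
var rabbit = jackrabbit(process.env.RABBIT_URL);
var exchange = rabbit.default();

// Delay queue: messages expire after 30 seconds and dead-letter into 'work'.
exchange.queue({
  name: 'work.delayed',
  durable: true,
  arguments: {
    'x-message-ttl': 30000,
    'x-dead-letter-exchange': '',        // the default exchange
    'x-dead-letter-routing-key': 'work'  // deliver expired messages to the 'work' queue
  }
});

// Real consumers listen on 'work' and see messages roughly 30 seconds after publish.
exchange
  .queue({ name: 'work', durable: true })
  .consume(function (data, ack) {
    console.log('delayed message:', data);
    ack();
  });

// Publishers send to the delay queue instead of 'work'.
exchange.publish({ job: 'send-reminder' }, { key: 'work.delayed' });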

messages not received after calling .on('drain', rabbit.close)

I have a singleton clock process that needs to publish some messages to workers once every 10 - 30 seconds.

Here's an example of my clock process publishing a job to a queue using jackrabbit:

var exchange = rabbit.default();
exchange.queue({ name: MY_QUEUE_NAME, durable: true });
exchange.publish({ update_info: 'some data' }, { key: MY_QUEUE_NAME });
exchange.on('drain', rabbit.close);

My workers will receive the message the first time this code is run, but any time after that the messages are never received. If I delete exchange.on('drain', rabbit.close), I will continue to receive messages even after the first one. This fixes the problem, but goes against the example shown in the README.

So, what is the correct behavior?

Thanks for the help.
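
One reading, not confirmed by a maintainer: the README's drain-then-close pattern looks aimed at one-shot producers, while a clock process that publishes every 10-30 seconds may be better served by keeping a single connection open and closing it only on shutdown. A sketch, reusing the MY_QUEUE_NAME from the snippet above:

var rabbit = require('jackrabbit')(process.env.RABBIT_URL);
var exchange = rabbit.default();

// Declare the queue once at startup rather than on every tick.
exchange.queue({ name: MY_QUEUE_NAME, durable: true });

setInterval(function () {
  exchange.publish({ update_info: 'some data' }, { key: MY_QUEUE_NAME });
}, 10000);

// Close only when the process is shutting down.
process.on('SIGTERM', rabbit.close);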

Sugar syntax for worker queues with promises

I'd like to suggest two additional methods for rpc calls (when global.Promise exists)...

...
var queue = exchange.queue({ name: 'rpc_queue', prefetch: 1, durable: false });

// queue.work returns a promise -- arguments serialized as array
var request = queue.work(...arguments);
...
// worker accepts a method that returns a promise
queue.worker(doWork);

function doWork(defer, ...arguments) { //arguments spread back
  return Promise...
}

if the promise resolves, or rejects the defer object that is passed in, then it should not acknowledge the message, which will in effect re-run the message on timeout.


It would work best if the input arguments were all passed through, serialized and then deserialized as an array. In this way, local methods that return a promise can be replaced with rpc calls exposing the same interface. A Promise-based workflow would look like:

var rpc = exchange.queue({ name: 'rpc_queue', prefetch: 1, durable: false });

...
//client
rpc.work('a', {'some':'object'}).then(
    function(result) {
      //successful result
    },
    function(err) {
      //error either in call, or from remote
    }
);
...
//server
rpc.worker(function(defer, strInput, objinput) {
  return new Promise(function(resolve,reject){
    //do some async work, etc
    return someAsyncWork(strInput)
      .then(moreAsyncWork.bind(null, objinput))
      .then(function(){
        return 'success';
      });
  });
})

With the benefit of async and await support via Babel or TypeScript:

var result = rpc.work('a', {'some':'object'});
...
rpc.worker(async function(defer, strinput, objinput) {
  await someAsyncWork(strinput);
  await moreAsyncWork(objinput);
  return 'success';
})

As you can see, with the final async example, this simplifies things a LOT

Avoid EventEmitter memory leak warnings

This isn't too bad but when you call consume() on an exchange with at least 10 other consumers, you will receive the following Node warning:

(node) warning: possible EventEmitter memory leak detected. 11 ready listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
    at EventEmitter.addListener (events.js:239:17)
    at EventEmitter.once (events.js:265:8)
    at EventEmitter.createQueue [as queue] (/var/www/sites/ms-kegactivity-receiver/node_modules/jackrabbit/lib/exchange.js:84:15)

It'd be nice to be able to suppress this on the exchange's instance of EventEmitter (you can do it globally but that's not a great solution IMO). I'd also be down with a "suppressWarnings" option somewhere.

Calls to publish messages before "connected" event should raise error

Hoping I can get around to a PR for this, but in the meantime just marking this down.

Trying to publish when there is not an active connection should raise an error. Instead, the event is swallowed and the sender is none the wiser.

I would think you'd want to make this explicitly clear to the caller right away via raising the error.

Appear to be missing subscribe/handle callbacks

When I publish a single message at a time to RabbitMQ, my handle callback appears to work 100% of the time. However, I wanted to test throughput, so I created an API route that simply iterates 50 times and calls queue.publish with the current iteration index.

I am not sure whether queue.publish is actually publishing all 50 messages, or whether the handle callback is simply not getting called 50 times. Typically we have C# engines that process RabbitMQ messages, but using JavaScript all the way through would make the most sense. Does this look correct?

// RabbitMQ url
var url = 'amqp://rabbitmq-01';

// Connect to RabbitMQ
var queue = jackrabbit(url);

// When connected
queue.on('connected', function () {

    // Create a queue, create(name, options, callback, func)
    queue.create('nodeTest', { prefetch: 1 }, onReady);

    function onReady () {
        // Start handling jobs in 'nodeTest'
        queue.handle('nodeTest', onJob);

        // Publish a job to queue
        queue.publish('nodeTest', { name: 'test'});
    }

    function onJob (job, ack) {
        console.log('Hello ' + job.name);
        ack();
    }
});

app.get('/test', function (req, res) {

    function publishMessage (i) {
        queue.publish('nodeTest', { name: 'iteration ' + i });
        console.log('published ' + i);
    }
    for (var i = 0; i < 50; i++) {
        publishMessage(i);
    }

    res.send('published');
});

No Event/callback/promise to indicate when routingKeys have been bound to a queue.

While using jackrabbit and writing some tests, I came across an issue where I created a queue and started sending messages to it but those messages never arrived. After some digging it appears that when a queue is created routingKeys are bound but that call is asynchronous and it's possible that the queue reports ready before all routingKeys are correctly bound. There is no event/callback/promise to indicate when all routingKeys have been bound to the queue.

I have created a solution to this problem that raises a bound event to indicate when all routingKeys have been bound and the queue is ready to receive published messages.

Link to the PR

publish callback

It would be nice to have a publish callback indicating that, from that point on, if the server crashed the message would persist in the queue.

Is there any way to convert queue.publish() into a promise?

I was wondering if there is any way of knowing if the publish() fails/succeeds. I had an issue where I deployed to production and a queue hadn't been created yet (I created it on the consumer), and it was failing silently.

This might be related to #3, but I just wanted to check if anything has changed on this?

Heroku: MaxListenersExceededWarning: Possible EventEmitter memory leak detected

I am getting the following error on Heroku
MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 error listeners added. Use emitter.setMaxListeners() to increase limit

The code that I believe is triggering it is relatively simple:

const rabbit = jackrabbit(constants.RABBITURL);
const exchange = rabbit.default();
exchange
.queue({ name: constants.videoProcess })
.consume(filterVideo, { noAck: true });

exchange
.queue({ name: constants.imageProcess })
.consume(filterImage, { noAck: true });

es6

Drop support for Node 4 and lower.

'expiration' is not documented / configurable.

On line 77 of queue.js there's an expiration for messages of 1 second.

This is a hugely confusing and problematic issue whereby messages will be lost when they don't complete in 1 second. Why is this expiration in place? Why is it not documented?

If an asynchronous process does not ack() until the end of, for example, a long waiting API call, and there's only 1 worker, the worker just drops the message and the queue_length reflects this. Am I missing something?

Update amqplib to v0.4.1

(Same as issue "Update amqplib to v0.4.0" #24, but later.)
Change the dependency to 0.4.1 (or maybe 0.4.x for next time), as only amqplib 0.4.1 allows Node 5:
{
  "name": "amqplib",
  "homepage": "http://squaremo.github.io/amqp.node/",
  "main": "./channel_api.js",
  "version": "0.4.1",
  "description": "An AMQP 0-9-1 (e.g., RabbitMQ) library and client.",
  "repository": {
    "type": "git",
    "url": "https://github.com/squaremo/amqp.node.git"
  },
  "engines": {
    "node": ">=0.8 <5 || ^5"
  },

As Cloud Foundry (Bluemix) now uses Node 5.2.0, we cannot use jackrabbit -> amqplib 0.4.0 -> Node <5.

thanks

Not handling ETIMEDOUT error exception

If this is not the best place to post this, please let me know and my apologies. I am experiencing an issue when I put my computer to sleep for a longish period of time and then reopen it. I am leaving the node process running and when I wake it up I see the error below.

I am wondering where I should be handling this error event. I have pasted the code I use to instantiate jackrabbit below; I would have thought this error would have passed through the error handler I set up, but it doesn't seem to. Any ideas where I should be listening for this error?

events.js:160
      throw er; // Unhandled 'error' event
      ^

Error: read ETIMEDOUT
    at exports._errnoException (util.js:1036:11)
    at TCP.onread (net.js:564:26)
  const jackrabbit = require('jackrabbit');
  this.queue = jackrabbit(rabbitUrl)
    .on('connected', () => {
      console.info({ type: 'info', msg: 'connected', service: 'rabbitmq' });
      ready();
    })
    .on('error', (err) => {
      console.error({ type: 'error', msg: err, service: 'rabbitmq' });
    })
    .on('disconnected', () => {
      console.warn({ type: 'error', msg: 'disconnected', service: 'rabbitmq' });
      lost();
    });

Unexpected close

I have tried listening to the disconnect events, etc., but if the RabbitMQ server suddenly dies or becomes unavailable, 'Unexpected close' is thrown.

Endless queues with auto-generated names keep being created

I have a singleton clock process that needs to publish some messages to workers once every 10 - 30 seconds. This works fine, but for some reason I end up generating a ton of queues, all with auto generated / random names. Here's a screenshot of the "Queues" tab in my rabbitmq dashboard after letting the clock process run for a few minutes:

[screenshot: RabbitMQ "Queues" tab (2016-10-05) showing many auto-generated queues]

Below these queues are my real ones that I am creating; you just can't see them in the screenshot. I should only have about 10 of them. I have no idea why these other queues are being created or what they are being used for.

Here is my jackrabbit code that causes this:

var exchange = rabbit.default();
exchange.queue({ name: MY_QUEUE_NAME, durable: true });
exchange.publish({ update_info: 'some data' }, { key: MY_QUEUE_NAME });

If I add exchange.on('drain', rabbit.close); to the end of the above code it will solve this issue, but then I run into another issue which I posted here: #55

I'm not sure what I'm doing wrong here and would appreciate any feedback, help, or advice that anyone could offer. Thanks.

Jackrabbit needs a new owner!

Or owners.

I've neglected this project for too long with intentions to release a "next big version" from some of the development branches I have sitting locally. But as many of you have noticed, I haven't actually shipped anything, and that's not fair to people who are depending on this.

In order to stop blocking the community, I'd like to give owner access both on Github and on npm to other developers who have been actively working on Jackrabbit. If that person should be you, please comment here so I know who is interested, available, and capable.

Thank you!

TypeError: Cannot read property 'publish' of undefined

I first noticed this error and it has since been happening every few hours on one of my servers. I don't think I have ever experienced this before, but it could be that this is just the first time I have noticed.

For some reason I am getting an exception of TypeError: Cannot read property 'publish' of undefined

This error will happen after the server has been running fine for a few hours. Once it happens, several server restarts fail because the error keeps recurring; eventually a restart succeeds and the error stops happening.

I don't know what is causing this or how to fix it, especially since it seems to happen randomly and doesn't throw an error all the time.

Below is the stack trace. I have removed the timestamps for easier reading.

 /app/node_modules/jackrabbit/lib/exchange.js:128
app[clock.1]:       var drained = channel.publish(emitter.name, opts.key, new Buffer(msg), opts);
app[clock.1]:                            ^
app[clock.1]: 
app[clock.1]: TypeError: Cannot read property 'publish' of undefined
app[clock.1]:     at sendMessage (/app/node_modules/jackrabbit/lib/exchange.js:128:28)
app[clock.1]:     at EventEmitter.publish (/app/node_modules/jackrabbit/lib/exchange.js:114:16)
app[clock.1]:     at CronJob.sortLiveBroadcasts (/app/lib/clock.js:38:14)
app[clock.1]:     at CronJob.fireOnTick (/app/node_modules/cron/lib/cron.js:392:22)
app[clock.1]:     at Timeout.callbackWrapper [as _onTimeout] (/app/node_modules/cron/lib/cron.js:435:9)
app[clock.1]:     at tryOnTimeout (timers.js:224:11)
app[clock.1]:     at Timer.listOnTimeout (timers.js:198:5)

Here is the code for my entire clock.js file:

var config = require('./server/config/config');
var logger = require('./logger');
var CronJob = require('cron').CronJob;
var rabbit = require('jackrabbit')(config.rabbit_url);
var exchange = rabbit.default();

var SORT_LIVE_FEED_QUEUE = config.SORT_LIVE_FEED_QUEUE;
var SORT_VOD_FEED_QUEUE = config.SORT_VOD_FEED_QUEUE;

var sortLiveBroadcastsJob = new CronJob('*/10 * * * * *', sortLiveBroadcasts, null, false, 'America/Los_Angeles');
var sortVODBroadcastsJob = new CronJob('*/15 * * * * *', sortVODBroadcasts, null, false, 'America/Los_Angeles');

sortLiveBroadcastsJob.start();
sortVODBroadcastsJob.start();

// Graceful shutdown
var gracefulShutdown = function() {
    sortLiveBroadcastsJob.stop();
    sortVODBroadcastsJob.stop();
    rabbit.close();
}

process.on('SIGTERM', gracefulShutdown);

function sortLiveBroadcasts() {

    // Publish message to queue
    exchange.queue({ name: SORT_LIVE_FEED_QUEUE, durable: true });
    exchange.publish({ status: 'live' }, { key: SORT_LIVE_FEED_QUEUE });
}

function sortVODBroadcasts() {

    // Publish message to queue
    exchange.queue({ name: SORT_VOD_FEED_QUEUE, durable: true });
    exchange.publish({ status: 'vod' }, { key: SORT_VOD_FEED_QUEUE });
}

Reply queue is created even if opts.reply is set to false

In lib/exchange.js, the sendMessage() method checks for opts.reply to interact with a replyQueue. However, when the exchange is created, the replyQueue is always created regardless of the value of opts.reply. This creates a bunch of un-named queues that we're not actually using (especially since a lot of our connections to an exchange are for listening to the queue instead of writing to it).

Could you check the opts and only create the reply queue when it's actually needed?

If that's an acceptable solution, I'd be willing to submit a PR for that check.

Excessive number of amq.gen queues created

Does anyone know why there are so many amq.gen* queues?
My code is below:

const Config = require('../config');

const jackrabbit = require('jackrabbit');
const url = Config.get('mq.' + moduleName + '.url');
const queueName = Config.get('mq.' + moduleName + '.queue');
const routeKey = Config.get('mq.' + moduleName + '.key');
const exchangeName = Config.get('mq.' + moduleName + '.exchange');
const module_rabbit = jackrabbit(url);

const moduleExchange = module_rabbit.direct(exchangeName);

console.log("module:", moduleName, ", url:", url, ", queue:", queueName);

moduleExchange.queue({
    name: queueName,
    key: routeKey,
    persistent: true
});

function publish(message) {
    return moduleExchange.publish(message, {
        name: queueName,
        key: routeKey,
        persistent: true
    })
}

New Version on npm

Version 4.3.0 uses Promise.defer and is the latest version on npm, so it needs to be updated to prevent future errors.

No support to nack() or reject messages?

In my understanding of amqp, there should be a way to nack() or reject messages. These messages may then be processed by a different consumer at a later point in time.

If, for example, I'm queueing up messages to send emails, and at time of sending, the email API I'm using returns an unlikely error, I'd like to reject this message and try it again later.

How do I accomplish that with this module?

Thanks
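
For reference, the consume handler does receive a nack callback (see the Ack/Nack Consumer Example above), which requeues by default. A hedged sketch for the email case, where 'emails' is a hypothetical queue name and sendEmail stands in for your own email API call:

var exchange = require('jackrabbit')(process.env.RABBIT_URL).default();

exchange
  .queue({ name: 'emails' })
  .consume(function (data, ack, nack) {
    sendEmail(data, function (err) {
      if (err) return nack(); // requeues automatically, so the message can be retried later
      ack();
    });
  });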

is this project/package still alive ?

My company is using jackrabbit in production. I noticed the lib is using quite an outdated AMQP dependency; are there any plans to fix that? Are the contributors still around?

Is this still maintained?

Looking at the open PRs waiting for a merge: @hunterloftis, are you still interested in maintaining this library? If not, it'd be great if the community could take ownership.

Needs integration tests

Has unit tests; needs integration spot-checks that actually do nontrivial operations on nontrivial queue setups to ensure things are working as expected in the real world.

Unable to get job queue to accept jobs

Hello,

After following your example, I have the following files:

  • server.js. Node simple web server.
  • worker.js. Worker.

What I want to do is have a job queue where, from a web request, I can push data to a job.

In the worker.js file, I do the following:

// worker.js
var http = require('http');
var throng = require('throng');
var jackrabbit = require('jackrabbit')
 // want to use "worker: node worker.js" in heroku Procfile
var queue = jackrabbit(process.env.CLOUDAMQP_URL || 'amqp://localhost')

http.globalAgent.maxSockets = Infinity;

var start = function() {
  queue.on('connected', function() {
    queue.create('myJob', { prefetch: 5 }, function() {
        queue.handle('myJob', function(data, ack) {
            console.log(data)
            console.log("Job completed!")
            ack()
        })
    })
  })
}

throng(start, { 
  workers: 1,
  lifetime: Infinity,
  grace: 4000
})

Now I want a situation where I can push data to a job from the web application. So in a middleware function I have:

// middleware.js
var jackrabbit = require('jackrabbit')
var queue = jackrabbit(process.env.CLOUDAMQP_URL || 'amqp://localhost')

app.get('/example', function(req, res, next) {
   queue.publish('job_name', { data: 'my_data' })
   res.send(200, { data: 'my_data' })
})

In development, I start worker.js and app.js in separate terminal windows. When I call the /example route, I expect to see the worker terminal window show the relevant console logs. However, the request just hangs.

I feel like there is something really fundamental I am missing here in terms of my understanding of jackrabbit and amqp. Any help on this would be greatly appreciated.
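
For what it's worth, the snippets above use the older queue.create / queue.handle API, and the worker consumes 'myJob' while the middleware publishes to 'job_name', so the names don't match. A hedged sketch of the same setup with the current API from the README (the queue name must be the same on both sides; 'app' is your Express app as in the snippet above):

// worker.js -- consumes jobs
var jackrabbit = require('jackrabbit');
var rabbit = jackrabbit(process.env.CLOUDAMQP_URL || 'amqp://localhost');

rabbit
  .default()
  .queue({ name: 'myJob', prefetch: 5 })
  .consume(function (data, ack) {
    console.log(data);
    console.log('Job completed!');
    ack();
  });

// middleware.js -- publishes jobs from the web app
var jackrabbit = require('jackrabbit');
var exchange = jackrabbit(process.env.CLOUDAMQP_URL || 'amqp://localhost').default();

app.get('/example', function (req, res) {
  exchange.publish({ data: 'my_data' }, { key: 'myJob' }); // same queue name as the worker
  res.send(200, { data: 'my_data' });
});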
