
kinesis's People

Contributors

jtblin · kosmikko · mhart

kinesis's Issues

How do you do a simple write?

I'm just testing my stream and Lambda function and want to write some dummy data. I'm doing this:

var kinesis = require('kinesis')
var testing = kinesis.stream('testing')
testing.write('hello kinesis!')
testing.on('finish', () => {
    console.log('done sending...')
})

But I get this error:

TypeError: LRU: key must be a string or number. Almost certainly a bug! undefined
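For what it's worth, a guess (not a confirmed fix): writing a Buffer rather than a bare string may sidestep the undefined cache key, and note that 'finish' only fires once end() has been called. A minimal sketch:

var kinesis = require('kinesis')

var testing = kinesis.stream('testing')

// Assumption: the LRU key is derived from the record, and a bare string
// leaves it undefined, so write a Buffer instead.
testing.write(Buffer.from('hello kinesis!'))

// 'finish' is only emitted after end() is called on a writable stream.
testing.end()
testing.on('finish', function() {
    console.log('done sending...')
})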

Doesn't chunk by size

Hey!

Love the Kinesis stream module; unfortunately, it doesn't break up the data/payload by size, and Kinesis has a 50KB limit per record.
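Until the module chunks for you, a user-land sketch of splitting a payload before writing (assuming the 50KB figure above and the write-stream API used elsewhere on this page; note the consumer then has to reassemble the pieces itself):

var MAX_BYTES = 50 * 1024 // the 50KB per-record limit mentioned above

// Write `buf` to a kinesis write stream as slices of at most MAX_BYTES.
function writeChunked(stream, buf) {
    for (var offset = 0; offset < buf.length; offset += MAX_BYTES) {
        stream.write(buf.slice(offset, offset + MAX_BYTES))
    }
}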

Closing/Deleting a Kinesis Stream

The AWS API specifies you should call 'DeleteStream'. However, I tried using this library to delete a stream using the .request function, but I couldn't get it to work (I was also using Kinesalite):

kinesis.request("DeleteStream", {StreamName: name}, { host: "localhost", port: 4567 }, (err) => {
    if (err) console.log(`Stream ${name} could not be deleted: ${err}`);
    console.log(`Stream ${name} deleted.`);
`});

It ends up silently failing somewhere in the AWS credential call (line 327 of index.js). The callback I provided is never called (nothing prints to the console). I confirmed that I was able to create streams using the same method, so I'm not sure what the problem could be. Any ideas?
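One thing worth trying (an assumption, not a verified diagnosis): since the failure appears to be in the credential lookup, supplying credentials explicitly in the options may bypass the on-box lookup entirely, and kinesalite doesn't validate them anyway:

kinesis.request("DeleteStream", {StreamName: name}, {
    host: "localhost",
    port: 4567,
    // Assumption: an explicit credentials object skips the hanging cred call.
    credentials: {accessKeyId: "dummy", secretAccessKey: "dummy"}
}, (err) => {
    if (err) return console.log(`Stream ${name} could not be deleted: ${err}`);
    console.log(`Stream ${name} deleted.`);
});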

Feature request: auto-configure credentials with IAM roles

Is there any chance you would be willing to do one of the following:

  1. Pass this module an already configured aws-sdk instance to use as its request client (aws-sdk already supports IAM roles; a stopgap along these lines is sketched after this list)
  2. Extend your aws4 library to pull credentials from either environment variables or from an IAM role (the on-box metadata API) in a chaining fashion, the same way aws-sdk does?
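A stopgap in the spirit of option 1, as a sketch: resolve credentials with aws-sdk's documented CredentialProviderChain (which already handles env vars and the instance metadata API) and hand the result to this module, assuming it keeps accepting a credentials object in options as other issues on this page show:

var AWS = require('aws-sdk')
var kinesis = require('kinesis')

// Resolve credentials the way aws-sdk does: env vars, shared config file,
// then the EC2 instance metadata service (IAM role).
new AWS.CredentialProviderChain().resolve(function(err, creds) {
    if (err) throw err
    var stream = kinesis.stream({
        name: 'my-stream', // hypothetical stream name
        credentials: {
            accessKeyId: creds.accessKeyId,
            secretAccessKey: creds.secretAccessKey,
            sessionToken: creds.sessionToken // present when using a role
        }
    })
    // ... pipe data through `stream` as usual
})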

undefined self.agent.defaultPort

Great project! I'm a Kinesis n00b, so my only assistance at this point may be helping to ferret out problems.

I have a Kinesis write stream set up, and data flows to Kinesis fine from my MacBook. However, when I run the exact same thing on an AWS instance, I get the following:

_http_client.js:50
  var defaultPort = options.defaultPort || self.agent.defaultPort;
                                                     ^
TypeError: Cannot read property 'defaultPort' of undefined
    at new ClientRequest (_http_client.js:50:54)
    at Agent.request (_http_agent.js:301:10)
    at Object.exports.request (https.js:129:22)
    at request (/home/angleman/test/node_modules/kinesis/index.js:214:15)
    at KinesisWriteStream._write (/home/angleman/test/node_modules/kinesis/index.js:178:3)
    at doWrite (_stream_writable.js:257:12)
    at writeOrBuffer (_stream_writable.js:244:5)
    at KinesisWriteStream.Writable.write (_stream_writable.js:191:11)

Note: I'm setting the AWS keys by assigning them to process.env so that aws4 will pick them up, and I've verified that the values are in place and correct.

Running against: [email protected]

Thoughts?

TypeError: Cannot read property 'NextShardIterator' of undefined

Got the following stack trace/error:

/home/jason/Repos/webin8-bot/node_modules/kinesis/index.js:161
    if (res.NextShardIterator == null) {
           ^
TypeError: Cannot read property 'NextShardIterator' of undefined
    at /home/jason/Repos/webin8-bot/node_modules/kinesis/index.js:161:12
    at f (/home/jason/Repos/webin8-bot/node_modules/kinesis/node_modules/once/once.js:17:25)
    at /home/jason/Repos/webin8-bot/node_modules/kinesis/index.js:395:16
    at IncomingMessage.<anonymous> (/home/jason/Repos/webin8-bot/node_modules/kinesis/index.js:344:20)
    at IncomingMessage.emit (events.js:129:20)
    at _stream_readable.js:908:16
    at process._tickCallback (node.js:355:11)

This is on a stream where I am only reading events.

Right way to close a Kinesis stream

Hi,
What is the right way to close/terminate a Kinesis stream?

I would like a Kinesis stream to stop consuming any resources.
I'm using only the readable part of the stream, and I've tried pausing and destroying it, removing all listeners, etc., but I still see my process consuming far more resources than before I initialized the Kinesis stream.

Thanks.
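For reference, a teardown sketch (assuming a Node version whose streams have destroy(); whether the library also stops its internal shard polling on destroy is exactly what this issue is asking):

var kinesis = require('kinesis')

var source = kinesis.stream({name: 'my-stream', oldest: true}) // hypothetical
// ... after piping it somewhere, tear it down:
source.unpipe()              // detach any downstream consumers
source.removeAllListeners()  // drop 'data'/'error' handlers
source.destroy()             // ask the stream to release underlying resources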

putRecord is not emitted after splits and merges

I have a stream where I've split and merged shards a few times, and I have an app pushing data to it using this package (which is awesome, btw). I keep track of the send queue by counting the pushes into the stream and subtracting the number of putRecord events I've received from it, but once I split and merged shards, that stopped working.

The problem seems to start on line 73, where the list of shards is cached forever. Then, on line 228, it only emits putRecord if the ShardId in the response data matches one of the known shards. If you've split all of the shards that existed when the app started running, this will never happen.

I can think of two solutions, and I'd gladly submit a pull request for either of them if you gave me your preference. One would be to blow the cache of shards whenever we receive a response indicating a shard that's not in the list. Another would be to move the emit to line 220, which would cause it to be called regardless of whether the shard exists in the list.

For this particular problem, I don't see the harm in doing approach #2, but I would guess that other problems could show up as a result of the cache being incorrect.

Any thoughts?
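For concreteness, a sketch of approach #1 combined with approach #2, using hypothetical names (this is not the library's actual internals):

// Hypothetical names, for illustration only: invalidate the cached shard
// list when a response mentions an unknown ShardId, and emit regardless.
function onPutRecordResponse(self, res, knownShardIds, shardCache, streamName) {
    if (knownShardIds.indexOf(res.ShardId) < 0) {
        shardCache.del(streamName) // approach #1: forget the stale shard list
    }
    self.emit('putRecord', res)    // approach #2: emit whether or not the shard is known
}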

Receive the same message twice

Hi everyone,
I've just finished my project and started running it, but I'm receiving the same message twice.

Here is my code:

var kinesis = require('kinesis')
var Transform = require('stream').Transform

function connectToOrderStream() {
    var kinesisSource = kinesis.stream({
        name: CONSTANT.KINESIS_STREAM_NAME,
        oldest: true,
        credentials: {
            accessKeyId: CONSTANT.AWS_ACCESS_KEY_ID,
            secretAccessKey: CONSTANT.AWS_SECRET_ACCESS_KEY
        }
    })

    var getData = new Transform({objectMode: true})
    getData._transform = function(record, encoding, next) {
        var delivery = JSON.parse(record.Data.toString())
        if (shouldTreatDelivery(delivery)) {
            console.log('>', delivery.payload.idDelivery)
        }
        next()
    }

    kinesisSource.pipe(getData)
}

The producer of the stream has already checked on their side, and from their end each message is being sent only once.
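Worth noting: Kinesis only guarantees at-least-once delivery, so consumers generally have to tolerate duplicates. A minimal dedupe sketch for shouldTreatDelivery, keyed on the idDelivery field from the code above (the unbounded Set is a simplification; cap or expire it in production):

var seen = new Set()

function shouldTreatDelivery(delivery) {
    var id = delivery.payload.idDelivery
    if (seen.has(id)) return false // duplicate: skip it
    seen.add(id)
    return true
}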

unhandled kinesis error event

Upgrading to [email protected] allows it to run on [email protected]; however, after a few minutes of pushing records to Kinesis (3 to 30/second at ~120 bytes per push), I'm getting the following:

events.js:79
      throw er; // Unhandled 'error' event
            ^
Error
    at IncomingMessage.<anonymous> (/home/angleman/test/node_modules/kinesis/index.js:229:15)
    at IncomingMessage.EventEmitter.emit (events.js:120:20)
    at _stream_readable.js:896:16
    at process._tickCallback (node.js:598:11)

Issue occurs under [email protected] and [email protected]

Thoughts?

Very weird error

OK, I've been using this locally with no issues, and deployed it to our staging server (on Heroku) with no issues. But when we tried to push up to our production server, we got this error:

ValidationException: 1 validation error detected: Value 'java.nio.HeapByteBuffer[pos=0 lim=56817 cap=56817]' at 'data' failed to satisfy constraint: Member must have length less than or equal to 51200
2014-04-18T22:24:45.613695+00:00 app[web.1]: at OverridenClientRequest. (/app/node_modules/kaster-mongoose/node_modules/kaster/node_modules/kinesis/index.js:283:30)

From what I can tell, it's aws4 that's throwing this somewhere, but I haven't got a clue what's going on. We have our AWS credentials saved as config variables (the same exact credentials work on staging just fine).

Hoping you'll have a clue what this might be.

you need to transform bunyan output to a buffer to use kinesis

Greetings,

Using node v12.14.0

I'm using this module with bunyan and it does not work as expected. (There's a chance I'm doing something wrong; please excuse me if so.)

const kinesis = require('kinesis');
const bunyan = require('bunyan');

const kinesisSink = kinesis.stream({ name: 'my-name', region: 'us-east-1' });
const logger = bunyan.createLogger({ name: 'logger-name' });

logger.addStream({
    name: 'kinesis',
    stream: kinesisSink,
    level: 'info',
});

Trying to use the module in the manner above results in an error:

TypeError: LRU: key must be a string or number. Almost certainly a bug! undefined

And AWS also reports a SerializationException: {"__type":"SerializationException"}.

I believe this is because the _write method is getting a string and not a buffer. I originally wrote that buffers were being piped to it, but that turned out not to be the case: I was using the API wrong, so my transformer wasn't being called.

I wanted to give other folks a heads-up. I'm assuming this issue is due to Node APIs changing.

Removing the {objectMode: true} option on this line caused the stream to receive a buffer.

There are two ways to fix it in my (somewhat naive and hurried) opinion.

  1. don't pass that option to stream.Duplex, or...
  2. check for a buffer; if it's not one, set data = Buffer.from(data, encoding) (a user-land version is sketched below)
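A user-land version of option 2, as a sketch: a pass-through transform that coerces whatever bunyan writes into a Buffer before it reaches the Kinesis stream:

const { Transform } = require('stream');

// Accept strings/objects on the writable side, emit Buffers on the readable side.
const toBuffer = new Transform({
    writableObjectMode: true,
    transform(chunk, encoding, cb) {
        cb(null, Buffer.isBuffer(chunk) ? chunk : Buffer.from(String(chunk)));
    },
});

toBuffer.pipe(kinesisSink); // kinesisSink as defined above
// then hand `toBuffer` to bunyan as the stream, instead of kinesisSink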

If you're interested @mhart I can make a PR.

UnknownOperationException after trying to send data

Hi!,

I'm trying to send data to an existing stream and I'm getting:

{ [UnknownOperationException: {"__type":"UnknownOperationException"}]
  statusCode: 400,
  name: 'UnknownOperationException',
  message: '{"__type":"UnknownOperationException"}' }

There's not much information about the available operations; I assume putRecord should be used, based on a couple of previously submitted issues.
This is my code:

kinesis.request('putRecord',
    {
        "a": "aa",
        "b": "bb",
        "c": [
            {"d": "dd"}
        ]
    },
    {StreamName: 'streamName'}, function(err) {

    if (err) {
        console.log(err);
    }
});

What am I doing wrong?

Thanks.
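For reference, a hedged sketch of what a working call probably looks like: the operation names are the capitalized AWS API actions (PutRecord, not putRecord), and the PutRecord payload wants StreamName, base64-encoded Data, and a PartitionKey (the values here are illustrative):

var kinesis = require('kinesis');

var payload = {a: 'aa', b: 'bb', c: [{d: 'dd'}]};

kinesis.request('PutRecord', {
    StreamName: 'streamName',
    Data: Buffer.from(JSON.stringify(payload)).toString('base64'),
    PartitionKey: 'aa'
}, function(err, res) {
    if (err) return console.log(err);
    console.log(res.SequenceNumber, res.ShardId);
});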

Integration with kinesalite

Any recommendation on setting the endpoint (host:port) and credentials in options to use a local kinesalite? This is a follow-up to mhart/kinesalite#39.

This is for local integration tests that event-source to a Lambda function. Would this project emulate something similar to the KCL? I'm trying to avoid the DynamoDB issue discussed in mhart/kinesalite#11.

Sorry for dragging a separate repo into the discussion; I just wanted to provide context.
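A sketch of what pointing the library at a local kinesalite might look like (the host/port/credentials option names are assumed from other issues on this page; kinesalite accepts any credentials):

var kinesis = require('kinesis')

var source = kinesis.stream({
    name: 'test-stream',  // hypothetical
    host: 'localhost',    // kinesalite's default bind
    port: 4567,           // kinesalite's default port
    oldest: true,
    credentials: {accessKeyId: 'dummy', secretAccessKey: 'dummy'}
})

As far as these issues indicate, the module reads shards directly rather than emulating the KCL's DynamoDB-based lease and checkpoint coordination.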

Running on AWS Fargate

Hi,

I have a project that's been running fine with kinesis in Docker, through ECS & EC2. Now I'm trying to see if we can run the project on AWS Fargate instead of traditional EC2. However, the kinesis connection seems to fail there with EINVAL; it's still running fine locally and on EC2, with the environment being otherwise the same (same Node versions, etc.).

The error I'm getting is:

Node 9.7.1:

events.js:112
  throw er; // Unhandled 'error' event
  ^
Error: connect EINVAL 169.254.169.254:80 - Local (0.0.0.0:0)
    at internalConnect (net.js:956:16)
    at defaultTriggerAsyncIdScope (internal/async_hooks.js:281:19)
    at net.js:1058:9
    at process._tickCallback (internal/process/next_tick.js:112:11)
error An unexpected error occurred: "Command failed.
Exit code: 1

Node 8.9.4:

events.js:183
  throw er; // Unhandled 'error' event
  ^
Error: connect EINVAL 169.254.169.254:80 - Local (0.0.0.0:0)
    at Object._errnoException (util.js:1022:11)
    at _exceptionWithHostPort (util.js:1044:20)
    at internalConnect (net.js:971:16)
    at net.js:1065:9
    at _combinedTickCallback (internal/process/next_tick.js:131:7)
    at process._tickDomainCallback (internal/process/next_tick.js:218:9)
error Command failed with exit code 1.

The problem most likely isn't with this library, but any ideas on how it can be worked around?
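Likely relevant: 169.254.169.254 is the EC2 instance metadata endpoint, which Fargate tasks can't reach (Fargate exposes task credentials through a different container-credentials endpoint). A possible workaround, assuming the credentials option used in other issues on this page, is to bypass the metadata lookup by supplying credentials explicitly:

var kinesis = require('kinesis')

var stream = kinesis.stream({
    name: 'my-stream', // hypothetical
    region: process.env.AWS_REGION,
    // Injected into the task environment (e.g. via the task definition),
    // so the library never needs to call the EC2 metadata service.
    credentials: {
        accessKeyId: process.env.AWS_ACCESS_KEY_ID,
        secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
        sessionToken: process.env.AWS_SESSION_TOKEN
    }
})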

missing data

When setting up a test case for issue #4, I discovered that data appears to be dropped by the AWS Kinesis service without an error being reported by the library.

If you run node put.js -o 1, it will by default push 2000 entries to Kinesis. While they are being pushed, you can run get.js and view log entries coming from Kinesis. After a few hundred records or less, the data from Kinesis pauses (as it occasionally does), but this time it no longer progresses. This is while put.js continues to run, and also several minutes after put.js finishes. The record numbers are placed in the id field of the JSON feed, which looks like:

{ "id": 452, "timestamp":"2014-02-20T22:02:48.939Z", "latitude": 23.73098758980632, "longitude": 89.53080888837576, "country": "m4", "city": "ihc2v3t3xclqu", "website": "d1eaj7s4v436r98.com" ,"visitorId": 77499261, "newVisitor": true, "md5": "828569387ce0a17422364e1d39d310ce" }

Then, starting node put.js -o 2001, records will again start coming out of get.js and after a while pause like before. Stopping and restarting get.js starts again at the beginning of the records, and there is a gap of missing records between the first run of put.js and the second. In short, out of thousands of records being pushed, only a few hundred at best appear to be making it through.

put.js

Test for sending data to Kinesis. Usage: node put.js --help; ex: node put.js -o 1

// put.js - send test records to kinesis stream
var kinesis       = require('kinesis')               // mhart/kinesis
  , md5           = require('blueimp-md5').md5       // blueimp/JavaScript-MD5
  , base64_encode = require('base64').encode         // pkrumins/node-base64
  , base64_decode = require('base64').decode         // pkrumins/node-base64
  , kinesisConfig = require('./config.json').kinesis // {"oldest":true, "objectMode":true, "streamName":"logs", "accessKey":"..", "secretKey":".."}
  , argv         = require('optimist')               // substack/node-optimist
    .usage('node put.js [options]')
    .options('s', { alias: 'start', describe: 'Start id number, ex: 1'})
    .options('r', { alias: 'rows',  describe: 'Rows to send', default: 2000 })
    .check(function (argv) { if (argv.start == undefined) throw '' })
    .argv
  , iteration  = argv.start
  , remaining  = argv.rows
;
// mhart/kinesis uses mhart/aws4 which uses environment variables for configuration
process.env['AWS_ACCESS_KEY_ID']     = process.env['AWS_ACCESS_KEY_ID']     || kinesisConfig['accessKey']; 
process.env['AWS_SECRET_ACCESS_KEY'] = process.env['AWS_SECRET_ACCESS_KEY'] || kinesisConfig['secretKey'];
var outStream = kinesis.createWriteStream(kinesisConfig['streamName'], kinesisConfig);
outStream.on('error', function(err) {
    console.log('Stream error:', err);
});
outStream.on('end', function(data) {
    console.log('Stream end:', data);
});


function random(max) { return Math.floor(Math.random()*max); }

function randomChars(len) {
    var result = '';
    for (var i=0; i<len; i++) {
        result += random(36).toString(36);
    }
    return result;
}

function generateData() {
    var data = {
        "id": iteration,
        "timestamp": new Date().toISOString(),
        "latitude": (Math.random() - 0.5) * 180,
        "longitude": (Math.random() - 0.5) * 180,
        "country": randomChars(random(20)),
        "city": randomChars(random(30)),
        "website": randomChars(random(20)) + '.com',
        "visitorId": random(99999999),
        "newVisitor": (random(2) == 1)
    }
    var json = JSON.stringify(data);
    data['md5']= md5(json);
    return data;
}

function pushData() {
    remaining--;
    if (remaining < 0) { console.log('Done'); process.exit(0); }
    var randata  = generateData();
    var jsondata = JSON.stringify(randata);
    var data     = base64_encode(jsondata); // data seems to need base64 encoding
    outStream.write(data);
    console.log(jsondata);
    setTimeout(pushData, random(200));
    iteration++;
}

pushData();

get.js

Test for getting data from Kinesis. Usage: node get.js

// get.js - get test records from kinesis stream
var kinesis       = require('kinesis')               // mhart/kinesis
  , md5           = require('blueimp-md5').md5       // blueimp/JavaScript-MD5
  , base64_encode = require('base64').encode         // pkrumins/node-base64
  , base64_decode = require('base64').decode         // pkrumins/node-base64
  , kinesisConfig = require('./config.json').kinesis // {"oldest":true, "objectMode":true, "streamName":"logs", "accessKey":"..", "secretKey":".."}
  , iteration  = 0
;

// mhart/kinesis uses mhart/aws4 which uses environment variables for configuration
process.env['AWS_ACCESS_KEY_ID']     = process.env['AWS_ACCESS_KEY_ID']     || kinesisConfig['accessKey']; 
process.env['AWS_SECRET_ACCESS_KEY'] = process.env['AWS_SECRET_ACCESS_KEY'] || kinesisConfig['secretKey'];
var inStream = kinesis.createReadStream(kinesisConfig['streamName'], kinesisConfig);
inStream.on('error', function(err) {
    console.log('Stream error:', err);
});
inStream.on('end', function(data) {
    console.log('Stream end:', data);
});

inStream.on('data', function(rawdata) {
    iteration++;
    var data, dup;
    if (kinesisConfig['objectMode']) {
        data = ''+rawdata['data']
    } else {
        data = ''+rawdata
    }
    console.log(iteration, data);
    var ok = true;
    try { 
        dup  = JSON.parse(data) 
        data = JSON.parse(data) 
    } catch (e) { 
        ok=false; 
        console.log('Received data error:', e, 'ShardId:', rawdata['shardId'], 'SequenceNumber:', rawdata['sequenceNumber']) 
    }
    if (ok) {
        delete dup.md5;
        var check = md5(JSON.stringify(dup));
        if (check != data.md5) {
            console.log('* * * md5 changed from:', data.md5, 'to:', check)
        }
    }
});

package.json dependencies:

{
  "dependencies": {
    "kinesis": "~0.2.0",
    "blueimp-md5": "~1.1.0",
    "base64": "~2.1.0",
    "optimist": "~0.6.1"
  }
}

config.json

{
    "kinesis": {
        "oldest":     true,
        "objectMode": true,
        "streamName": "testlog",
        "accessKey":  "AWS_ACCESS_KEY_HERE",
        "secretKey":  "AWS_PRIVATE_KEY_HERE"
    }
}

Note: there is a processed-record count before each data record when get.js is run.

Following are the results after running node put.js -o 1, node put.js -o 2001, and node put.js -o 4001.

Note a break in the data between running node put.js -o 1 and node put.js -o 2001:

403 '{"id":402,"timestamp":"2014-02-20T22:46:39.137Z","latitude":35.72085717692971,"longitude":-24.67943819705397,"country":"e1g","city":"bkf8xxr9lqt9564r3","website":"w4ag39yngwhx90el5a.com","visitorId":39835795,"newVisitor":false,"md5":"e02ec61fc4cf21b4ee90c56cba99e26c"}'
404 '{"id":404,"timestamp":"2014-02-20T22:46:39.300Z","latitude":89.99360532034189,"longitude":7.257631164975464,"country":"cshzoiveiudh3b","city":"j0kfwanf90ii","website":"6bfw4twpvm4x.com","visitorId":8391394,"newVisitor":true,"md5":"d92fb31fe8c8757f8d571e876262f6cf"}'
405 '{"id":2002,"timestamp":"2014-02-20T22:49:59.845Z","latitude":-4.631718406453729,"longitude":71.66051094420254,"country":"rwcgueo5fkb","city":"m7yls0ggekmu9n450fmx3l","website":"pkw.com","visitorId":22583845,"newVisitor":false,"md5":"133a6c5c545c9ea332c154841ecb23d2"}'
406 '{"id":2001,"timestamp":"2014-02-20T22:49:59.794Z","latitude":-6.7506805853918195,"longitude":65.19749178551137,"country":"p2gfx94hxhf6eceo","city":"zl","website":"i3du2ri96oz.com","visitorId":30981831,"newVisitor":false,"md5":"f2d7d5e20814b20dba396e715506a3e7"}'

Another break in the data between node put.js -o 2001 and node put.js -o 4001:

816 '{"id":2412,"timestamp":"2014-02-20T22:50:42.189Z","latitude":26.874455879442394,"longitude":-65.67560254130512,"country":"l3rho32","city":"3fwolhovds99nkl5z5obzhc25","website":"n9p6603t38ag1.com","visitorId":90863010,"newVisitor":true,"md5":"9e6a70214a8ed76c034cd2d3da1eb2e2"}'
817 '{"id":2413,"timestamp":"2014-02-20T22:50:42.318Z","latitude":36.90951867029071,"longitude":17.173184570856392,"country":"ztlqcv1yga3w8e1s","city":"h1w9eefwezf61zkbgogty0m5wxd70","website":"j20vvqdlo.com","visitorId":29955976,"newVisitor":false,"md5":"4bbf586c559a9b6ddcf7aad76c72300f"}'
818 '{"id":4002,"timestamp":"2014-02-20T22:55:31.254Z","latitude":-83.57626740820706,"longitude":0.9906801022589207,"country":"02lh58lnld","city":"ihlfmvg8qyb4f3zu","website":"rdgtn.com","visitorId":42474356,"newVisitor":true,"md5":"2bc6c40639f7b94d244019f4d3314ea4"}'
819 '{"id":4001,"timestamp":"2014-02-20T22:55:31.145Z","latitude":-8.226841441355646,"longitude":49.73004483617842,"country":"y6","city":"46lkykv1arl0lg6t2g1mg5eg","website":"43hwoqqxlu9zo.com","visitorId":63666339,"newVisitor":true,"md5":"6461a207225997cef1b4062bda9a9cbc"}'

Finally, the premature end of the data, which remains the same 5 minutes after the run of node put.js -o 4001 has completed:

1242 '{"id":4423,"timestamp":"2014-02-20T22:56:15.100Z","latitude":87.27577074430883,"longitude":-42.72471230942756,"country":"yj04ycgdj2jewq","city":"xzemk9wd0inec0mpki4obbqmg7","website":"inip7td6puenijvoit.com","visitorId":76886500,"newVisitor":false,"md5":"5643051b56ccd312cc2f3e1531874307"}'
1243 '{"id":4426,"timestamp":"2014-02-20T22:56:15.405Z","latitude":70.86829867679626,"longitude":25.954127563163638,"country":"wmj8squdn14bn22","city":"pa7t05a5ohmrlemr3onk","website":"m1vl288mcm2r3go2.com","visitorId":96453790,"newVisitor":false,"md5":"4787a2ccfc6775d7c59a895414d36043"}'

Re-running get.js produces the stream again with the same gaps and last record as above.

Thoughts?

Memory leak with stream pipe

Hello,

first of all, thanks for the great script - it makes getting data from Kinesis via Node.js so much easier than the AWS SDK does. I am using a very basic script for testing, without doing anything with the received records right now:

var kinesis = require('kinesis'),
    Transform = require('stream').Transform,
    accessKey = "xxx",
    secretKey = "xxx";

require('https').globalAgent.maxSockets = Infinity;

var bufferify = new Transform({objectMode: true});
bufferify._transform = function(record, encoding, cb) {
    cb();
};

var kinesisStream = kinesis.stream({
    cacheSize: 100,
    region: 'eu-west-1',
    credentials: {accessKeyId: accessKey, secretAccessKey: secretKey},
    name: 'xxx',
    oldest: false
});
kinesisStream.pipe(bufferify);

After a few hours, the process runs out of memory and the script gets stuck. I am using an EC2 m3.medium with 3.75 GB of RAM. Is there some conceptual problem between Node.js streams and Kinesis?

I would be really happy if you could help with this!

no putRecords for batch writing

Hi,
I found there is only putRecord and no putRecords for batch operations. I did a load test today: when writes reach 200/sec, the response is a 400 with a rate-limit-exceeded error, which according to Amazon's documentation means the requests are being sent too fast.
Did I miss something in the code?
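The AWS API itself does have a batch operation, PutRecords, even if the module has no dedicated helper for it. A sketch of calling it through the generic request method (the signature is assumed from other issues on this page; names are illustrative), which also cuts per-request overhead when you're brushing against rate limits:

var kinesis = require('kinesis');

var records = ['one', 'two', 'three'].map(function(s) {
    return {
        Data: Buffer.from(s).toString('base64'), // PutRecords wants base64 Data
        PartitionKey: s
    };
});

kinesis.request('PutRecords', {StreamName: 'my-stream', Records: records},
        function(err, res) {
    if (err) return console.log(err);
    // res.FailedRecordCount says how many entries need to be retried
    console.log('failed:', res.FailedRecordCount);
});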

Contributions

You beat me to this 😉

I'm interested in helping out and contributing to this repository. If there's anything specific I can do, please add some issues and I'll take a look.
