
node-unzipper's Introduction


unzipper

Installation

$ npm install unzipper

Open methods

The open methods allow random access to the underlying files of a zip archive, whether from disk, the web, S3, or a custom source.

The open methods return a promise on the contents of the central directory of a zip file, with individual files listed in an array.

Each file record has the following methods, providing random access to the underlying files:

  • stream([password]) - returns a stream of the unzipped content which can be piped to any destination
  • buffer([password]) - returns a promise on the buffered content of the file.

If the file is encrypted you will have to supply a password to decrypt it; otherwise you can leave it blank.
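As a minimal sketch (the archive path and password are hypothetical), buffering the first entry of an encrypted archive:

const unzipper = require('unzipper');

async function readFirstEntry() {
  const directory = await unzipper.Open.file('path/to/encrypted.zip');
  // the password argument is only needed for encrypted archives
  const content = await directory.files[0].buffer('secret-password');
  console.log(content.toString());
}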

Unlike adm-zip, the Open methods never read the entire zipfile into a buffer.

The last argument to the Open methods is an optional options object where you can specify tailSize (default 80 bytes), i.e. how many bytes to read at the end of the zipfile to locate the endOfCentralDirectory. This location can vary depending on the size of the zip64 extensible data sector. Additionally you can supply the option crx: true, which will check for a crx header and parse the file accordingly by shifting all file offsets by the length of the crx header.
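For example, a sketch passing both options (the values are illustrative):

const unzipper = require('unzipper');

// search the last 1024 bytes for the end-of-central-directory record
// and treat the file as a Chrome extension (.crx)
unzipper.Open.file('path/to/extension.crx', { tailSize: 1024, crx: true })
  .then(directory => console.log(directory.files.map(f => f.path)));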

Open.file([path], [options])

Returns a Promise to the central directory information, with methods to extract individual files. The start and end options are used to avoid reading the whole file, as sketched below.
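A minimal sketch, assuming start and end are byte offsets bounding the region of the file to read (the values are illustrative):

unzipper.Open.file('path/to/archive.zip', { start: 0, end: 1024 * 1024 })
  .then(directory => console.log(directory.files.length));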

Here is a simple example of opening up a zip file, printing out the directory information and then extracting the first file inside the zipfile to disk:

const fs = require('fs');
const unzipper = require('unzipper');

async function main() {
  const directory = await unzipper.Open.file('path/to/archive.zip');
  console.log('directory', directory);
  return new Promise( (resolve, reject) => {
    directory.files[0]
      .stream()
      .pipe(fs.createWriteStream('firstFile'))
      .on('error',reject)
      .on('finish',resolve)
  });
}

main();

If you want to extract all files from the zip file, the directory object supplies an extract method. Here is a quick example:

async function main() {
  const directory = await unzipper.Open.file('path/to/archive.zip');
  await directory.extract({ path: '/path/to/destination' })
}

Open.url([requestLibrary], [url | params], [options])

This function will return a Promise to the central directory information from a URL pointing to a zipfile. Range headers are used to avoid reading the whole file. Unzipper does not ship with a request library, so you will have to provide it as the first argument.

Live Example: (extracts a tiny xml file from the middle of a 500MB zipfile)

const request = require('request');
const unzipper = require('unzipper');

async function main() {
  const directory = await unzipper.Open.url(request,'http://www2.census.gov/geo/tiger/TIGER2015/ZCTA5/tl_2015_us_zcta510.zip');
  const file = directory.files.find(d => d.path === 'tl_2015_us_zcta510.shp.iso.xml');
  const content = await file.buffer();
  console.log(content.toString());
}

main();

This function takes a second parameter which can either be a string containing the url to request, or an options object to invoke the supplied request library with. This can be used when other request options are required, such as custom headers or authentication for a third-party service.

const request = require('google-oauth-jwt').requestWithJWT();

const googleStorageOptions = {
  url: `https://www.googleapis.com/storage/v1/b/m-bucket-name/o/my-object-name`,
  qs: { alt: 'media' },
  jwt: {
      email: google.storage.credentials.client_email,
      key: google.storage.credentials.private_key,
      scopes: ['https://www.googleapis.com/auth/devstorage.read_only']
  }
};

async function getFile(req, res, next) {
  const directory = await unzipper.Open.url(request, googleStorageOptions);
  const file = directory.files.find((file) => file.path === 'my-filename');
  return file.stream().pipe(res);
}

Open.s3([aws-sdk], [params], [options])

This function will return a Promise to the central directory information from a zipfile on S3. Range headers are used to avoid reading the whole file. Unzipper does not ship with the aws-sdk, so you have to provide an instantiated client as the first argument. The params object requires Bucket and Key to fetch the correct file.

Example:

const fs = require('fs');
const unzipper = require('unzipper');
const AWS = require('aws-sdk');
const s3Client = new AWS.S3(config);

async function main() {
  const directory = await unzipper.Open.s3(s3Client,{Bucket: 'unzipper', Key: 'archive.zip'});
  return new Promise( (resolve, reject) => {
    directory.files[0]
      .stream()
      .pipe(fs.createWriteStream('firstFile'))
      .on('error',reject)
      .on('finish',resolve)
  });
}

main();

Open.buffer(buffer, [options])

If you already have the zip file in-memory as a buffer, you can open the contents directly.

Example:

const fs = require('fs');

// never use readFileSync - only used here to simplify the example
const buffer = fs.readFileSync('path/to/archive.zip');

async function main() {
  const directory = await unzipper.Open.buffer(buffer);
  console.log('directory',directory);
  // ...
}

main();

Open.custom(source, [options])

This function can be used to provide a custom source implementation. The source parameter expects an object with stream and size methods. The size method should return a Promise that resolves to the total size of the file. The stream method should return a Readable stream for the supplied offset and length parameters.

Example:

// Custom source implementation for reading a zip file from Google Cloud Storage
const { Storage } = require('@google-cloud/storage');

async function main() {
  const storage = new Storage();
  const bucket = storage.bucket('my-bucket');
  const zipFile = bucket.file('my-zip-file.zip');
  
  const customSource = {
    stream: function(offset, length) {
      return zipFile.createReadStream({
        start: offset,
        end: length && offset + length
      })
    },
    size: async function() {
      const objMetadata = (await zipFile.getMetadata())[0];
      return objMetadata.size;
    }
  };

  const directory = await unzipper.Open.custom(customSource);
  console.log('directory', directory);
  // ...
}

main();

Open.[method].extract()

The directory object returned from Open.[method] provides an extract method which extracts all the files to a specified path, with an optional concurrency (default: 1).

Example (with concurrency of 5):

unzipper.Open.file('path/to/archive.zip')
  .then(d => d.extract({path: '/extraction/path', concurrency: 5}));

Please note: methods that use the central directory instead of parsing the entire file can be found under Open.

Chrome extension files (.crx) are zipfiles with an extra header at the start of the file. Unzipper will parse .crx files with the streaming methods (Parse and ParseOne).

Streaming an entire zip file (legacy)

This library began as an active fork and drop-in replacement for node-unzip to address the following issues:

  • finish/close events are not always triggered, particularly when the input stream is slower than the receivers
  • Files are buffered into memory before being passed on to entry

Originally the only way to use the library was to stream the entire zip file. This method is inefficient if you are only interested in a few files from the zip archive. It can also be error prone, since it relies on local file headers which could be wrong.

The structure of this fork is similar to the original, but uses Promises and the inherent guarantees provided by node streams to ensure a low memory footprint, and emits finish/close events at the end of processing. The new Parser will push any parsed entries downstream if you pipe from it, while still supporting the legacy entry event as well.

Breaking changes: The new Parser will not automatically drain entries if there are no listeners or pipes in place.

Unzipper provides simple APIs similar to node-tar for parsing and extracting zip files. There are no added compiled dependencies - inflation is handled by node.js's built-in zlib support.

Extract to a directory

fs.createReadStream('path/to/archive.zip')
  .pipe(unzipper.Extract({ path: 'output/path' }));

Extract emits the 'close' event once the zip's contents have been fully extracted to disk. Extract uses fstream.Writer and therefore needs an absolute path to the destination directory. This directory will be automatically created if it doesn't already exist.
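For example, a minimal sketch that waits for the 'close' event (the paths are illustrative):

const fs = require('fs');
const unzipper = require('unzipper');

fs.createReadStream('path/to/archive.zip')
  .pipe(unzipper.Extract({ path: '/absolute/output/path' }))
  .on('close', () => console.log('extraction finished'))
  .on('error', err => console.error('extraction failed', err));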

Parse zip file contents

Process each zip file entry or pipe entries to another stream.

Important: If you do not intend to consume an entry stream's raw data, call autodrain() to dispose of the entry's contents. Otherwise the stream will halt. .autodrain() returns an empty stream that provides error and finish events. Additionally you can call .autodrain().promise() to get the promisified version of success or failure of the autodrain.

// If you want to handle autodrain errors you can either:
entry.autodrain().promise().catch(handleError);
// or
entry.autodrain().on('error', handleError);
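Or, as a sketch inside an async handler (entry and handleError are assumed from the surrounding code):

async function skipEntry(entry) {
  try {
    // resolves once the entry has been fully drained
    await entry.autodrain().promise();
  } catch (e) {
    handleError(e);
  }
}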

Here is a quick example:

fs.createReadStream('path/to/archive.zip')
  .pipe(unzipper.Parse())
  .on('entry', function (entry) {
    const fileName = entry.path;
    const type = entry.type; // 'Directory' or 'File'
    const size = entry.vars.uncompressedSize; // There is also compressedSize;
    if (fileName === "this IS the file I'm looking for") {
      entry.pipe(fs.createWriteStream('output/path'));
    } else {
      entry.autodrain();
    }
  });

and the same example using async iterators:

const zip = fs.createReadStream('path/to/archive.zip').pipe(unzipper.Parse({forceStream: true}));
for await (const entry of zip) {
  const fileName = entry.path;
  const type = entry.type; // 'Directory' or 'File'
  const size = entry.vars.uncompressedSize; // There is also compressedSize;
  if (fileName === "this IS the file I'm looking for") {
    entry.pipe(fs.createWriteStream('output/path'));
  } else {
    entry.autodrain();
  }
}

Parse zip by piping entries downstream

If you pipe from unzipper the downstream components will receive each entry for further processing. This allows for clean pipelines transforming zipfiles into unzipped data.

Example using stream.Transform:

fs.createReadStream('path/to/archive.zip')
  .pipe(unzipper.Parse())
  .pipe(stream.Transform({
    objectMode: true,
    transform: function(entry,e,cb) {
      const fileName = entry.path;
      const type = entry.type; // 'Directory' or 'File'
      const size = entry.vars.uncompressedSize; // There is also compressedSize;
      if (fileName === "this IS the file I'm looking for") {
        entry.pipe(fs.createWriteStream('output/path'))
          .on('finish',cb);
      } else {
        entry.autodrain();
        cb();
      }
    }
  }));

Example using etl:

fs.createReadStream('path/to/archive.zip')
  .pipe(unzipper.Parse())
  .pipe(etl.map(entry => {
    if (entry.path == "this IS the file I'm looking for")
      return entry
        .pipe(etl.toFile('output/path'))
        .promise();
    else
      entry.autodrain();
  }))

Parse a single file and pipe contents

unzipper.ParseOne([regex]) is a convenience method that unzips only one file from the archive and pipes its contents down (not the entry itself). If no search criteria is specified, the first file in the archive will be unzipped. Otherwise, each filename will be compared to the criteria and the first one to match will be unzipped and piped down. If no file matches, the stream will end without any content.

Example:

fs.createReadStream('path/to/archive.zip')
  .pipe(unzipper.ParseOne())
  .pipe(fs.createWriteStream('firstFile.txt'));
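If a pattern is supplied, only the first matching entry is piped down; a minimal sketch (the pattern and paths are illustrative):

fs.createReadStream('path/to/archive.zip')
  .pipe(unzipper.ParseOne(/\.txt$/))
  .pipe(fs.createWriteStream('firstTextFile.txt'));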

Buffering the content of an entry into memory

While the recommended strategy for consuming unzipped contents is streaming, it is sometimes convenient to get the full buffered contents of each file. Each entry provides a .buffer function that consumes the entry by buffering the contents into memory and returning a promise to the complete buffer.

fs.createReadStream('path/to/archive.zip')
  .pipe(unzipper.Parse())
  .pipe(etl.map(async entry => {
    if (entry.path == "this IS the file I'm looking for") {
      const content = await entry.buffer();
      await fs.promises.writeFile('output/path', content);
    }
    else {
      entry.autodrain();
    }
  }))

Parse.promise() syntax sugar

The parser emits finish and error events like any other stream. The parser additionally provides a promise wrapper around those two events to allow easy folding into existing Promise-based structures.

Example:

fs.createReadStream('path/to/archive.zip')
  .pipe(unzipper.Parse())
  .on('entry', entry => entry.autodrain())
  .promise()
  .then( () => console.log('done'), e => console.log('error',e));

Parse zip created by DOS ZIP or Windows ZIP Folders

Archives created by legacy tools usually have filenames encoded with an IBM PC (Windows OEM) character set. You can decode filenames with your preferred character set:

const il = require('iconv-lite');
fs.createReadStream('path/to/archive.zip')
  .pipe(unzipper.Parse())
  .on('entry', function (entry) {
    // if a legacy zip tool follows the ZIP spec then this flag will be set
    const isUnicode = entry.props.flags.isUnicode;
    // decode "non-unicode" filename from OEM Cyrillic character set
    const fileName = isUnicode ? entry.path : il.decode(entry.props.pathBuffer, 'cp866');
    const type = entry.type; // 'Directory' or 'File'
    const size = entry.vars.uncompressedSize; // There is also compressedSize;
    if (fileName === "Текстовый файл.txt") {
      entry.pipe(fs.createWriteStream(fileName));
    } else {
      entry.autodrain();
    }
  });

Licenses

See LICENCE

node-unzipper's People

Contributors

alice-was-here, alubbe, blake-regalia, dergutehirte, dimfeld, durisvk, evanoxfeld, huikaihoo, hypesystem, joeferner, jsnajdr, konecnyna, lechuhuuha, mheggeseth, mimetnet, mrayermannmsft, neverendingqs, oaleynik, pdugas, pwoldberg, rhodgkins, sabrehagen, silverwind, snyk-bot, syedhannan, uwx, vitalyster, vltansky, vvo, zjonsson


node-unzipper's Issues

Error: invalid signature: 0x88b1f - can we add support?

I have a file with a .tsv.gz file ending, but when I try to unzip it with unzipper, it fails with this error:

Error: invalid signature: 0x88b1f
    at ~/my_repo/node_modules/unzipper/lib/parse.js:62:26
...

Does anyone have the knowledge to add support for the 0x88b1f signature?

I tried editing the parse.js file and treating this the same as a file with a signature of 0x04034b50, but it only read part of the file, not the entire file.

Thanks!

Events not being triggered on some zips

Close, error or finish events are not triggering on certain zips and I cannot seem to find out why. I had this issue on unzip too - and this only happens on Windows.

this.extract = function(id, onSuccess, onError){
    var zip = app.path.join(nw.App.dataPath, 'ca-user-data/' + app.session.domain + '/' + app.user.name + '/presentations/' + id + '.zip');
    var target = app.path.join(nw.App.dataPath, 'ca-user-data/' + app.session.domain + '/' + app.user.name + '/presentations/' + id);
    var zipper = app.fs.createReadStream(zip).pipe(app.unzip.Extract({ path: target }));
    
    zipper.on('close', function(e){
        onSuccess();
    });

    zipper.on('error', function(e){
        console.log('Error: ', e);
        onError(e);
    });

    zipper.on('drain', function(e){
        console.log('Drained: ', e);
    });

    zipper.on('entry', function(e){
        console.log('Entry: ', e);
    });

    zipper.on('finish', function(e){
        console.log('Finished: ', e);
    });
};

uncaught FILE_ENDED error

I think this is the same as #48 - I've opened a new issue as that was closed and changes have been made since then...!

After finding this out when moving over to the Open APIs (#74) I've been looking into it and found that it's caused by the following line:

self.emit('error', new Error('FILE_ENDED'));

Is this supposed to be emitting on p - the returned PassThrough stream instead?

I've been playing around with a fix here which emits the error on the returned stream instead.
I've also had to include a similar fix as #57 (nulling out self.cb); these changes fix the uncaught error, and it's propagated up to the Promise returned from unzipper.Open.file, but it breaks the parseOneEntry.js: error - file ended test case and I'm not sure why :-/

I hope this helps you debug it a bit! It might be the case that after the bug in the fix for #57 is implemented (#71) everything might work without this anyway...

"TypeError: cb is not a function" When Calling unzipper.Parse() From An SSH Stream

I'm trying to read multiple streams asynchronously from an SFTP server which are all zip folders. Running into an issue with larger zip files (>64KB) causing errors. With the default stream chunk size I'm getting a cb is not a function error, but if I increase the highWaterMark size I don't get the error, but 'end' never gets called on the stream. Also worth mentioning that if I don't create the stream inside of a Promise I don't seem to get this error, not sure why that is.

Here is what I've got:

sftp.connect(configObj).then(() => {
  var fileNames = ['file1'];
  var streamPromiseArr = [];
  for (fileName of fileNames) {

    // Transformers
    var selectFile = new Transform({
      objectMode: true,
      transform: function (entry, e, cb) {
        var fileName = entry.path;
        if (fileName.indexOf('DailyPunch') >= 0) {
          entry.pipe(concat(res => {
            this.push(res);
            cb();
          }))
        } else {
          entry.autodrain();
          cb();
        }
      }
    });

    // Collect the promises (promises resolve to a buffer)
    var streamPromise = new Promise((resolve, reject) => {
      sftp.get(fileName, true, 'utf8', { highWaterMark: 256 * 1024 }).then((stream) => {
        var text = '';
        stream
          .pipe(unzipper.Parse())
          .pipe(selectFile)
          .pipe(filterArray)
          .on('data', (d) => text += d)
          .on('finish', () => {
            resolve(text);
          });
      });
    });
    streamPromiseArr.push(streamPromise);
  }

  // Consolidate promised streams and write to a file
  Promise.all(streamPromiseArr).then((textArr) => {
    fs.writeFile('consolidate_file.csv', textArr.join(), ((err) => {
      if (err) console.log(err);
      console.log('Finished');
      sftp.end();
    }));
  });
}).catch(err => {
  console.log(err)
  sftp.end();
});

Add AES-256 decryption

The system does not understand the password supplied when opening the zip:

unzipper.Open.file('test123_3/lol.zip')
  .then(function(d) {
    return new Promise(function(resolve, reject) {
      console.log(d.files[0]);
      d.files[0].stream("qqq");
    });
  });

promise warnings on latest node

(node:18540) Warning: a promise was created in a handler at domain.js:121:23 but was not returned from it, see http://goo.gl/rRqMUw
    at Function.Promise.cast (/Users/contra/Projects/staeco/node_modules/unzipper/node_modules/bluebird/js/release/promise.js:195:13)

On latest node. Just using this in a basic way - for each file, pipe it to the fs. Seems like this is caused by this promise being created in autodrain, which I'm not using: https://github.com/ZJONSSON/node-unzipper/blob/master/lib/parse.js#L85

example request

Hi,

Can you please add an example of how to extract a specific folder? (A sketch of one approach follows the snippet below.)
Thanks in advance for your help.

download('urltofile').pipe(unzipper.Parse()).on('entry', function (entry)
{
	var fileName = entry.path;
	var type = entry.type;
	var size = entry.size;

	if (type === 'Directory' && /src\/assets\/scss/i.test(fileName))
	{
	    //extract this folder
	}
	else
	{
	    entry.autodrain();
	}
});
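A hedged sketch of one approach: directory entries carry no content, so match the file entries whose paths fall under the folder instead and write each one out (the fs/path requires and output paths are assumptions):

const fs = require('fs');
const path = require('path');

download('urltofile').pipe(unzipper.Parse()).on('entry', function (entry) {
  // keep only file entries inside the target folder
  if (entry.type === 'File' && /src\/assets\/scss\//i.test(entry.path)) {
    entry.pipe(fs.createWriteStream(path.join('output', path.basename(entry.path))));
  } else {
    entry.autodrain();
  }
});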

Open.file Promise does not catch all errors

I would expect that Open.file "rejects" all errors, promise style. But it does not:

example.js:

const unzipper = require('unzipper');

unzipper.Open.file('./example.js')
  .then(console.log)
  .catch(console.error)
  .then(() => console.log('this log should appear, but does not'));

Reading a js file should fail, but not crash the node process. The above example results in:

events.js:174
      throw er; // Unhandled 'error' event
      ^

Error: FILE_ENDED
    at PullStream.pull (/project/node_modules/unzipper/lib/PullStream.js:72:28)
    at PullStream.emit (events.js:189:13)
    at PullStream.<anonymous> (/project/node_modules/unzipper/lib/PullStream.js:20:10)
    at PullStream.emit (events.js:194:15)
    at finishMaybe (_stream_writable.js:646:14)
    at endWritable (_stream_writable.js:663:3)
    at PullStream.Writable.end (_stream_writable.js:594:5)
    at ReadStream.onend (_stream_readable.js:633:10)
    at Object.onceWrapper (events.js:277:13)
    at ReadStream.emit (events.js:194:15)
    at endReadableNT (_stream_readable.js:1107:12)
    at process.internalTickCallback (internal/process/next_tick.js:72:19)
Emitted 'error' event at:
    at errorOrDestroy (internal/streams/destroy.js:98:12)
    at PullStream.onerror (_stream_readable.js:695:7)
    at PullStream.emit (events.js:189:13)
    at PullStream.pull (/project/node_modules/unzipper/lib/PullStream.js:72:14)
    at PullStream.emit (events.js:189:13)
    [... lines matching original stack trace ...]
    at process.internalTickCallback (internal/process/next_tick.js:72:19)

How to stub unzipper?

Hello,

I use unzipper in a node.js module and everything works well:

...
readableStream.pipe(unzipper.Parse())
    .on('entry', (entry) => {
        // Filter and write files like in the doc
        ...
    })
    .on('finish', () => {
        callback();
    });
...

Now I would like to unit test my module with Mocha/Sinon/Chai and avoid to really call unzipper using a stub (or something else).
I tried many things but I can't figure out how to handle events (entry and finish), so the callback() is never called and the test returns a timeout error.

Does anyone have an example of unit test with a "fake" unzipper?

PS: I know it's not really an issue, so if you think it's not relevant, fell free to close it.

PullStream.js causes Uncaught, unspecified "error" event. (FILE_ENDED)

Hello,

I made a script that downloads the sirene_201612_L_M.zip from http://files.data.gouv.fr/sirene/ that contains a .csv file (file size: 8.5G).
The script throws the following error:

events.js:165
      throw err;
      ^

Error: Uncaught, unspecified "error" event. (FILE_ENDED)
    at PassThrough.emit (events.js:163:17)
    at Parse.pull (/home/guillaume/dev/projects/clients/siren-api/node_modules/unzipper/lib/PullStream.js:80:11)
    at emitOne (events.js:96:13)
    at Parse.emit (events.js:188:7)
    at Parse.<anonymous> (/home/guillaume/dev/projects/clients/siren-api/node_modules/unzipper/lib/PullStream.js:19:10)
    at emitNone (events.js:91:20)
    at Parse.emit (events.js:185:7)
    at finishMaybe (_stream_writable.js:514:14)
    at afterWrite (_stream_writable.js:388:3)
    at onwrite (_stream_writable.js:378:7)
    at Immediate.WritableState.onwrite (_stream_writable.js:89:5)
    at runCallback (timers.js:637:20)
    at tryOnImmediate (timers.js:610:5)
    at processImmediate [as _immediateCallback] (timers.js:582:5)

If I refer to the code, I see that the part throwing the error is here: https://github.com/ZJONSSON/node-unzipper/blob/master/lib/PullStream.js#L78.

But the file seems to be entirely downloaded so I don't understand what could be the problem.

Here is the part of my code:

const unzipper = require('unzipper')
const request  = require('superagent')
const fs       = require('fs')

module.exports = function (url, filename) {
  return new Promise((resolve, reject) => {
    request.get(url)
      .pipe(unzipper.Parse())
      .on('entry', entry => {
        entry.pipe(fs.createWriteStream(`data/${filename}.csv`))
      })
      .promise()
      .then(
        () => {
          console.log('done')
          resolve({
           url: url,
           path: `data/${filename}.csv`
         })
        },
        e => {
          reject(err)
        }
      )
  })
}

Can you help me to understand if it's my code, a bug, a real problem with the file or another reason please?
Thanks!

large zip extract in s3 with aws lambda in nodejs

How do I put an entry to S3? (A sketch of one approach follows the snippet below.)

s3.getObject({
  Bucket: srcBucket,
  Key: srcKey
}).createReadStream()
    .on('error', function (err) {
        console.log('S3 get object err:: ', err);
    })
    .pipe(unzipper.Parse())
    .on('entry', function (entry) {
        var fileName = entry.path;
        var type = entry.type; // 'Directory' or 'File'
        var size = entry.size;
        if (fileName === "this IS the file I'm looking for") {
            entry.pipe(how to S3 putObject);
        } else {
            entry.autodrain();
        }
    });
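A hedged sketch of one way to do this with the aws-sdk v2 managed uploader, whose Body parameter accepts a stream (the bucket and key names are hypothetical):

if (fileName === "this IS the file I'm looking for") {
    s3.upload({
        Bucket: 'destination-bucket',
        Key: 'unzipped/' + fileName,
        Body: entry // stream the entry's contents straight to S3
    }, function (err, data) {
        if (err) console.log('S3 upload error: ', err);
        else console.log('Uploaded to ', data.Location);
    });
} else {
    entry.autodrain();
}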

Streams stopped

We have been using your unzipper library for some time now with great results ( Thanks! )

We did run into an issue with a zip file that had a single corrupt file, it appears that it was missing the EOF, really hard to tell.

I was able to trace it to PullStream.js, in the stream(eof, includeEof) function.
We are somehow hitting the else in this block of code:

    if (match !== -1) {
      if (includeEof) match = match + eof.length;
      packet = self.buffer.slice(0,match);
      self.buffer = self.buffer.slice(match);
      done = true;
    } else {
      var len = self.buffer.length - eof.length;
      packet = self.buffer.slice(0,len);
      self.buffer = self.buffer.slice(len);
    }

and len is resulting in a negative value and the streams stop.

As a workaround, we have patched the function to emit an error when this condition occurs:
var len = self.buffer.length - eof.length;

if (len < 0) {
  self.removeListener('chunk', pull);
  console.log("*********************************************************");
  console.log("UNZIP::PULLSTREAM ERROR!!!! Buffer less than 0!!!!");
  console.log("*********************************************************");
  self.emit('error', new Error('BUFFER_LESS_THAN_0'));
  this.__ended = true;
  return;
}

I would be curious if you have a better idea of what might be happening here, and maybe how to better handle it. The error being emitted will abort the entire unzip, when really it is just a single file in the archive that appears to be corrupt.

I can provide the zip archive that caused this issue if that helps.

Thanks for the great library & any assistance.

Dean

7z support

unzipper is not able to unzip 7z files. It shows this error:

Unhandled rejection Error: invalid signature: 0xafbc7a37

This says that the signature is the hex code for 7z, which means that the format is not supported.

I use this (from example):

fs.createReadStream('./packedFolder.7z')
  .pipe(unzipper.Extract({ path: './' }));

Stream decompression incomplete

Hi !

I have a little problem with the decompression stream. I don't know why, but not all the entries of the zip file (attached below) show up. Here is my code:

function uploadZip(filepath){
    fs.createReadStream(filepath)
        .pipe(unzipper.Parse())
        .on('entry', function(entry){
            let filename = entry.path;
            console.log(filename);
            if (isImage(filename)){
                uploadFile(entry);
            }
            else {
                entry.autodrain();
            }
        });
};

Actually, I have only two logs, which are WTxBc.jpg and text.txt.
Normally, I should have a second jpg in the log.

Does someone have an idea how to solve this problem?

Desktop - Copie.zip

finish/close event emitted after error event and then crashes

Both the finish and close events get emitted after the error event (as FILE_ENDED) with the following ZIP file. Then once this has happened a TypeError is thrown from:

if (self.buffer.length === (eof.length || 0)) self.cb();

I'm not sure whether or not finish (or close) should be emitted if an error occurs - to my mind they shouldn't, but I can't work out why this ZIP is causing the crash.

I've managed to stop the crash (but not the multiple events) by checking __ended before calling the write's cb() - have no idea if this is the correct fix:
https://github.com/bookcreator/node-unzipper/blob/bad-zip/lib/PullStream.js#L59

The above "fix" actually breaks normal ZIP files.

Reproduction code with code from master branch:

const fs = require('fs')
const unzipper = require('unzipper')

fs.createReadStream('./bad.zip')
    .pipe(unzipper.Extract({ path: './bad' })
    .on('error', err => console.error('error', err))
    .on('finish', () => console.log('finish'))
    .on('close', () => console.log('close')))

Output:

error FILE_ENDED
close
finish
_stream_writable.js:464
  cb();
  ^

TypeError: cb is not a function
    at afterWrite (_stream_writable.js:464:3)
    at onwrite (_stream_writable.js:455:7)
    at ./node-unzipper/lib/PullStream.js:59:60
    at afterWrite (_stream_writable.js:464:3)
    at _combinedTickCallback (internal/process/next_tick.js:144:20)
    at process._tickCallback (internal/process/next_tick.js:180:9)

Is there a lib that will zip to S3 as a stream?

The ideal feature of this library for my use case is that I can unzip a file in S3 as a stream to process locally in chunks. Essential for some of the large files I am dealing with. In that respect this lib is perfect for me. So many, many thanks for making it available. Much appreciated.

However I have a reverse use case where I need to create a zip file on S3 and upload data to it as a stream. I cannot find any libraries that do this. I was wondering if anyone knows of one and can point me in the right direction. For the sake of clarity, I am generating data and need to write that into a zip file on the fly. I do not want to upload local files as a stream.

Failing that, when I have chance I might take a look under the hood of this library in the hope of working out how to do it...

MaxListenersExceededWarning: Possible EventEmitter memory leak detected.

Hi,

when I'm trying to unzip the following file: https://tools.hana.ondemand.com/additional/sapui5-rt-1.44.42.zip

I get this console message:

(node:13812) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 error listeners added. Use emitter.setMaxListeners() to increase limit

I'm just using pipe(unzipper.Extract({ path }));

Do you have any recommendations to limit memory usage during the extraction?

Thank you!

Stream pausing in certain edge cases

I'm using node-unzipper to unzip archives with many files. In some of my archives, I run into an issue where the stream just pauses and never responds. I've narrowed it down to certain edge cases in the PullStream.js file. In the block of code below, I have a situation where the buffer size is 3, but the eof buffer size is 4. That means that "len" is equal to -1. This causes the stream to pause, with no errors. This seems like a bug to me, but I don't fully understand what the eof buffer is or why my self.buffer size is 3. Any ideas? Thanks!

if (self.buffer && self.buffer.length) {
      if (typeof eof === 'number') {
        packet = self.buffer.slice(0,eof);
        self.buffer = self.buffer.slice(eof);
        eof -= packet.length;
        done = !eof;
      } else {
        var match = self.buffer.indexOf(eof);
        if (match !== -1) {
          if (includeEof) match = match + eof.length;
          packet = self.buffer.slice(0,match);
          self.buffer = self.buffer.slice(match);
          done = true;
        } else {
          var len = self.buffer.length - eof.length;
          // Length is -1 here. 
          packet = self.buffer.slice(0,len);
          self.buffer = self.buffer.slice(len);
        }
      }
      p.write(packet,function() {
        if (self.buffer.length === (eof.length || 0)) self.cb();
      });
    }

External Attributes

What would be the correct way to get the externalAttributes for a file?
I am using this library for testing some autogenerated zips and I need to ensure that the unix mode is set correctly for every file.
That's because I need to ensure that for example symlinks are zipped correctly.

Since that's saved in the external attributes it would be great to be able to access those.

Progress bar needed

Hi,

This is actually a great module.

However, when using it as a CLI tool it could be quite useful to have a progress bar in the console, or at least a percentage.

Do you know any modules which can work with your module to show the state of extraction?

Thank you!

Extract does exactly nothing

  const zipper = fs.createReadStream(filePath).pipe(unzipper.Extract({ path: './' }));

Does nothing - no error, no files extracted

zipper.on('entry', function(entry) { console.log(entry.path); });

Returns correct zip contents.

Using wrong filePath throws proper error

Any idea how to run this thing???

Errors - silently fails upon extracting

It doesn't throw errors and works unexpectedly; better to just install zip and use child_process. Come on, I just wasted 3 hours of my life trying to use it and it's a complete mess.

Files are messed up

I'm downloading a zip file with "request" and trying to extract it with this library.

Sadly some files are messed up after extraction.

This is the zip file I'm downloading and extracting: https://tools.hana.ondemand.com/additional/sap-webide-personal-edition-1.45.3-prod-macosx.cocoa.x86_64.zip

E.g. one messed-up file is eclipse/orion

Here is the code where I use this plugin (it's not stable or clean, as I'm currently building it up): https://github.com/dweber019/ui5-web-ide-installer/tree/develop

Any help very appreciated.

unhandled error FILE_ENDED from PullStream.js

My app has an SFTP service which receives a ZIP file. Upon completion of the upload I fire off a task which unzips the file. Last night the upload was aborted part way through. This resulted in the node process terminating due to an unhandled error.

I am going to look at the SFTP handler to see if I can catch the aborted upload at that point and avoid any attempt at doing the unzip. However, it seems that unzipper should not cause a server crash due to a truncated zip file.

Having had a quick look, it seems to me that PullStream.js is detecting the unexpected end of file and emitting an error which is not handled in the calling code. My error dump is as follows:

server stopping { Error: Unhandled "error" event. (FILE_ENDED)
    at PullStream.emit (events.js:185:19)
    at PullStream.onerror (_stream_readable.js:652:12)
    at emitOne (events.js:115:13)
    at PullStream.emit (events.js:210:7)
    at PullStream.pull (/Users/ianbale/Source Code/noderail-data-feed-manager/node_modules/unzipper/lib/PullStream.js:66:14)
    at emitOne (events.js:115:13)
    at PullStream.emit (events.js:210:7)
    at PullStream.<anonymous> (/Users/ianbale/Source Code/noderail-data-feed-manager/node_modules/unzipper/lib/PullStream.js:19:10)
    at emitNone (events.js:110:20)
    at PullStream.emit (events.js:207:7)
    at finishMaybe (_stream_writable.js:587:14)
    at endWritable (_stream_writable.js:595:3)
    at PullStream.Writable.end (_stream_writable.js:546:5)
    at ReadStream.onend (_stream_readable.js:584:10)
    at Object.onceWrapper (events.js:314:30)
    at emitNone (events.js:110:20)
    at ReadStream.emit (events.js:207:7)
  context: 'FILE_ENDED' }

server stopping TypeError: cb is not a function
    at afterWrite (_stream_writable.js:438:3)
    at onwrite (_stream_writable.js:429:7)
    at /Users/ianbale/Source Code/noderail-data-feed-manager/node_modules/unzipper/lib/PullStream.js:59:60
    at afterWrite (_stream_writable.js:438:3)
    at /Users/ianbale/Source Code/noderail-data-feed-manager/node_modules/async-listener/glue.js:188:31
    at _combinedTickCallback (internal/process/next_tick.js:144:20)
    at Immediate._tickCallback (internal/process/next_tick.js:180:9)
    at Immediate._onImmediate (/Users/ianbale/Source Code/noderail-data-feed-manager/node_modules/async-listener/glue.js:188:31)
    at runCallback (timers.js:781:20)
    at tryOnImmediate (timers.js:743:5)
    at processImmediate [as _immediateCallback] (timers.js:714:5)

DataManagerFeedProducer closed... kafka dataFeedClient closed... LogsProducer closed... kafka LogsClient closed... opsProducer closed... kafka opsClient closed... server stopping 1

Max password length?

Hi,

What is the max password length of a zip file that is supported by unzipper?

Currently I have a lengthy password and unzipper says BAD_PASSWORD.

Backpressure not managed correctly

A write callback should not fire until the chunk has been ultimately pulled out of the PullStream and written successfully to the destination (i.e. it should be inside the callback of entry.write(d, e, cb)). Otherwise we risk blowing up memory if the destination for the entry is slow.

Unzipping very slow for 285 kb file

First of all, thanks for adding the feature of password-protected zips! It works great, except it is really slow.
I know there's something going on behind the scenes, as this should not take this long: 35 seconds!
I can't figure out at the present moment why this is taking so long, but maybe there's something I'm doing wrong?

Here is my code:

const unzip = (filename, password, sha256) => {
  const start = new Date();
  console.log(`key ${filename}`);
  unzipper.Open.file(filename)
    .then((data) => {
      console.log('got data');
      return new Promise((resolve, reject) => {
        data.files[0].stream(password)
          .pipe(fs.createWriteStream(data.files[0].path))
          .on('error', reject)
          .on('finish', () => {
            const end = new Date() - start;
            console.info('Execution time: %dms', end);
            resolve(verifyChecksum(data.files[0].path, sha256));
          });
      });
    });
};

And Standard Out:

(node:12627) DeprecationWarning: Using Buffer without `new` will soon stop working. Use `new Buffer()`, or preferably `Buffer.from()`, `Buffer.allocUnsafe()` or `Buffer.alloc()` instead.
key firefox.zip
got data
Execution time: 35347ms
verifyChecksum
84597a81c67a20d342d57d979b4a7887328f505f268acb9c3ef38224116fa283
84597a81c67a20d342d57d979b4a7887328f505f268acb9c3ef38224116fa283
Checksum for firefox.exe passed

[QUESTION] Handling of directory entries

Hi,

Quick query around the parsing of directory entries...

When checking if an entry is a File or Directory the following check is used:

entry.type = (vars.compressedSize === 0 && /[\/\\]$/.test(fileName)) ? 'Directory' : 'File';

Should this use vars.uncompressedSize instead of vars.compressedSize? I only ask as I'm getting a directory being typed as a file in this archive!

(You can tell it's being typed as a file as the inflating: prefix is used for files)

  inflating: META-INF/
{ filename: 'META-INF/',
  vars: 
   { versionsNeededToExtract: 20,
     flags: 2048,
     compressionMethod: 8,
     lastModifiedTime: 42365,
     lastModifiedDate: 19241,
     crc32: 0,
     compressedSize: 2,
     uncompressedSize: 0,
     fileNameLength: 9,
     extraFieldLength: 0 },
  extra: 
   { signature: null,
     partsize: null,
     uncompressedSize: null,
     compressedSize: null,
     offset: null,
     disknum: null } }

I note in this vars that the uncompressed size is 0 (while the compressed size is 2), hence my query!

Also running zipinfo Jk5ER6CRSVi32R_ZHeuAyg.epub | grep META-INF/

?rwxr-xr-x  2.0 unx        0 b- defN 17-Sep-09 20:43 META-INF/
?rw-------  2.0 unx      244 b- defN 17-Sep-09 20:43 META-INF/container.xml

Cheers!

Unable to get Promises to work

I assume the typo here is that the on events bind to the pipe output not the Parse() output?

fs.createReadStream('path/to/archive.zip')
  .pipe(unzipper.Parse()
  .on('entry', entry => entry.autodrain())
  .promise()
  .then( () => console.log('done'), e => console.log('error',e));

i.e.

fs.createReadStream('path/to/archive.zip')
  .pipe(unzipper.Parse())
  .on('entry', entry => entry.autodrain())
  .promise()
  .then( () => console.log('done'), e => console.log('error',e));

I assume also that I need not include a Promise library to use promises? I'm not a Node.JS developer and am struggling to get the following to work; the promise's then/error callbacks aren't called:

  object.createReadStream()
    .pipe(unzipper.Parse())
    .on("entry", (entry) => {
      var filePath = entry.path;
      var type = entry.type;
      var size = entry.size;
      console.log(`Found ${type}: ${filePath}`);
	  entry.autodrain();
    })
    .promise()
    .then(() => {
      console.log("Zip File Processed");
    }, (err) => {
      console.log(`Zip Error: ${err}`);
    });

The code appears to work correctly and iterates over the zip file's contents but even though it appears to get through all the entries, neither the promise's then nor err callback is made.

Fails when extracting files simultaneously

I have a zip of around 5 files. I try to extract them all as follows:

const mapdata = await Promise.all(mapfiles.map(file => file.buffer()));

However, this often (unpredictably) yields the following error:

(node:21288) UnhandledPromiseRejectionWarning: Error: FILE_ENDED
    at PullStream.pull (/home/cor/Desktop/webtaiko/node_modules/unzipper/lib/PullStream.js:78:28)
    at PullStream.emit (events.js:182:13)
    at PullStream.<anonymous> (/home/cor/Desktop/webtaiko/node_modules/unzipper/lib/PullStream.js:20:10)
    at PullStream.emit (events.js:182:13)
    at finishMaybe (_stream_writable.js:641:14)
    at afterWrite (_stream_writable.js:481:3)
    at onwrite (_stream_writable.js:471:7)
    at /home/cor/Desktop/webtaiko/node_modules/unzipper/lib/PullStream.js:70:11
    at afterWrite (_stream_writable.js:480:3)
    at process._tickCallback (internal/process/next_tick.js:63:19)
(node:21288) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:21288) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

But if I rewrite the code as follows, it seems to work without error

  let mapdata = [];
  for (const file of mapfiles) {
    // Note: deliberately doing these synchronously
    mapdata.push(await file.buffer());
  }

I'd like to clarify if this is a bug, or if this library does not support async like this.

Error: EBADF: bad file descriptor, read when parsing over Open.file()

Dear @ZJONSSON

Thank you for Open.file(), which actually saves a lot of time. However, most of the time I get a very strange error. The problem is that it's very random, always arising after different files:

events.js:167
      throw er; // Unhandled 'error' event
      ^

Error: EBADF: bad file descriptor, read
Emitted 'error' event at:
    at lazyFs.read (internal/fs/streams.js:165:12)
    at FSReqWrap.wrapper [as oncomplete] (fs.js:463:17)

Here is the sample code how I use it:

/* eslint-disable no-console */
const unzipper = require("unzipper");
const path = require("path");
const fs = require("fs");

(async () => {
  let oZip = await unzipper.Open.file("./sapui5-rt-1.44.42.zip");

  // extract theme source files
  let aFilesToExtract = oZip.files.filter(oFile =>
    /(?<=\/sap_belize.+)\.less$/.test(oFile.path)
  );

  while (aFilesToExtract.length) {
    let oFile = aFilesToExtract.shift();
    console.log(`Extracting: ${oFile.path}`);

    //keeping the reference
    try {
      await new Promise((resolve, reject) => {
        let sDir = path.dirname(oFile.path);
        if (!fs.existsSync(sDir)) {
          fs.mkdirSync(sDir, { recursive: true });
        }

        let oFileNew = fs.createWriteStream(oFile.path);

        oFile
          .stream()
          .pipe(oFileNew)
          .on("finish", resolve)
          .on("error", reject);
      });
    } catch (error) {
      console.error(error);
      //do something
    }
  }
})();

this is the reference to the file: https://tools.hana.ondemand.com/additional/sapui5-rt-1.44.42.zip

Do you have any ideas what it can be?

Thank you!

entry.size undefined?

Even though the example shows use of entry.size, this does not exist on the entry object at all.

EBUSY: resource busy or locked

Hi, we are facing a few problems while migrating from the node-unzip API to the node-unzipper APIs. Earlier we were using the node-unzip API for extraction and parsing, but now we have decided to use unzipper instead of unzip.
The following are the error messages we are getting while installing or unzipping the zip file.

npm ERR! error rolling back Error: EBUSY: resource busy or locked, rmdir 'E:\1\node_modules\repo'
npm ERR! error rolling back at Error (native)
npm ERR! error rolling back { [Error: EBUSY: resource busy or locked, rmdir 'E:\1\node_modules\repo']
npm ERR! error rolling back errno: -4082,
npm ERR! error rolling back code: 'EBUSY',
npm ERR! error rolling back syscall: 'rmdir',
npm ERR! error rolling back path: 'E:\1\node_modules\repo' }

The following are the code samples we were using earlier and now; please suggest the best way to use the unzipper API for my purpose.

Earlier: with node-unzip

var ZIP_FILE= path.resolve(CURRENT_DIR, 'build.zip');
readStream = fs.createReadStream(ZIP_FILE);
writeStream = fstream.Writer(CURRENT_DIR);

        readStream
          .pipe(unzip.Parse())
          .pipe(writeStream).on("unpipe", function () {
            fs.unlinkSync(ZIP_FILE);

            var FILE1 = path.resolve(CURRENT_DIR, 
                          'build\\Release\\file1.node');
            var FILE2 = path.resolve(CURRENT_DIR,
                          'build\\Release\\file2.node');
            var FILE3 = path.resolve(CURRENT_DIR,
                          'build\\Release\\file3.node');
           
            fs.exists(FILE1 , function() {
              if(Number(process.version.match(/^v(\d+\.\d+)/)[1]) < 0.12) {
                  fs.renameSync(FILE2 , FILE1);
                  fs.unlinkSync(FILE3);             
              } else if(Number(process.version.match(/^v(\d+\.\d+)/)[1]) < 4.0) {
                  fs.renameSync(FILE3 , FILE1);
                  fs.unlinkSync(FILE2);             
              } else {
                  fs.unlinkSync(FILE2 );
                  fs.unlinkSync(FILE3 );             
              }
            });
        });

Now with node-unzipper

var ZIP_FILE= path.resolve(CURRENT_DIR, 'build.zip');
readStream = fs.createReadStream(ZIP_FILE);

        var extractBuildZip = readStream.pipe(unzipper.Extract({path: CURRENT_DIR}));
	extractBuildZip.on('close', function(){
		fs.unlinkSync(ZIP_FILE);
		var FILE1= path.resolve(CURRENT_DIR,
						 'build\\Release\\file1.node');
		var FILE2= path.resolve(CURRENT_DIR,
						     'build\\Release\\file2.node');
		var FILE3= path.resolve(CURRENT_DIR,
						     'build\\Release\\file3.node');
		
		fs.exists(FILE1, function() {
			if(Number(process.version.match(/^v(\d+\.\d+)/)[1]) < 0.12) {
				fs.renameSync(FILE2, FILE1);
				fs.unlinkSync(FILE3);	
			} else if(Number(process.version.match(/^v(\d+\.\d+)/)[1]) < 4.0) {
				fs.renameSync(FILE3, FILE1);
				fs.unlinkSync(FILE2);
			} else {
				fs.unlinkSync(FILE2);
				fs.unlinkSync(FILE3);
			}
		});
		extractBuildZip.on('err', function(){
			console.log(err);
		});
	});

but now it has started giving errors on the fs.renameSync or fs.unlinkSync calls. After restarting Windows it works fine, but then again if I do an install, it throws the same error.

[QUESTION] Last modified date parsing

Hi everyone,

When I check the vars property of an entry during unzipping, I get this:

{
  versionsNeededToExtract: 20,
  flags: 2048,
  compressionMethod: 8,
  lastModifiedTime: 19920,
  lastModifiedDate: 19764,
  crc32: 3025953602,
  compressedSize: 461,
  uncompressedSize: 947,
  fileNameLength: 100,
  extraFieldLength: 0
}

Do you know what format these lastModifiedTime and lastModifiedDate values are in, and how I can parse them with JavaScript?

Thanks in advance!
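For reference, these are packed MS-DOS date/time bitfields, as used in the ZIP local file header; a minimal sketch to decode them into a JavaScript Date:

function dosDateTimeToDate(date, time) {
  const day = date & 0x1f;                   // bits 0-4
  const month = (date >> 5) & 0x0f;          // bits 5-8 (1 = January)
  const year = ((date >> 9) & 0x7f) + 1980;  // bits 9-15, years since 1980
  const seconds = (time & 0x1f) * 2;         // bits 0-4, two-second resolution
  const minutes = (time >> 5) & 0x3f;        // bits 5-10
  const hours = (time >> 11) & 0x1f;         // bits 11-15
  return new Date(year, month - 1, day, hours, minutes, seconds);
}

// e.g. dosDateTimeToDate(19764, 19920) => 2018-09-20 09:46:32 (local time)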

Zlib Error Unzipping Large Files

I am having trouble unzipping a large zip file. It contains 1 text file; uncompressed it is about 35 GB and compressed it is about 2.38 GB. The error I get is:

Error: too many length or distance symbols
at Zlib._handle.onerror (zlib.js:370:17)

I can replicate the issue with the Open.url example, replacing your 500MB file with my large one. The issue is also replicated with the "ParseOne" function. Any idea as to what might be going on?

Can this unzip a KMZ?

The KMZ has a doc.kml file inside it, but it is only accessible programmatically once the KMZ has been unzipped.

I've tried changing the KMZ file's extension from .kmz to .zip and then unzipping the file from there. However, the KMZ file's .name property is read-only.

Error handler

How can I catch errors before unzipper crashes the whole app?

I am building a decryptor, so the only way to find out that a key was incorrect is to try to decompress the archive. But unzipper just crashes the app and that's all:
[parse.js, line 57]

self.emit('error', Error('invalid signature: 0x' + signature.toString(16)));

Auth for Open.url

The docs don't explain how to add custom options to request when using unzipper.Open.url(request,'my-url'). To save others digging through the code (like myself) it would be useful to specify that the second parameter can take an options object which is passed to request, not just a url.

Stream ends prematurely

Hi !

Sorry to trouble you, but I have a little problem with my stream. Actually, I have an application which can manage image uploads. The images can also be in a zip file. In both cases, I need to manage the big pictures (max 1000x1000). So I use the sharp module for that (which can use streams). Finally, I upload the files to S3. To summarize:

Image file => sharp => S3

Zip file => unzipper => sharp => S3

In the case of simple picture files, it works. But for the zip file, I have the error Write After End. If I understand correctly, this error happens if I try to write after a stream has been closed.
This is my buggy code:

function uploadZip(filepath, idParty){
    fs.createReadStream(filepath)
        .pipe(unzipper.Parse())
        .on('entry', function(entry){
            let filename = entry.path;
            if (isImage(filename)){
                console.log(filename);
                uploadFile(imageStream, filename, idParty).then(function(result){
                        entry.autodrain();
                    },function(err){
                        entry.autodrain();
                    });
            }
            else {
                entry.autodrain();
            }
        });
};

I don't understand what I'm doing wrong. With the promise, I have tried to force closing the stream only after the promise has finished, but it doesn't work.

I have tried with the buffer method:

function uploadZip(filepath, idParty){
    fs.createReadStream(filepath)
        .pipe(unzipper.Parse())
        .on('entry', function(entry){
            let filename = entry.path;
            if (isImage(filename)){
                console.log(filename);
                entry.buffer().then((buf) => uploadFile(buf, filename, idParty));
            }
            else {
                entry.autodrain();
            }
        });
};

This approach works. But you have said the recommended strategy is the stream approach, so I really want to try with streams. Can you help me, please?

Thank you very much :)

How to filter files or directories during unzip

Hello there !

Is there a way to filter files when unzipping?

I tried this :

fs.createReadStream('backup.zip')
  .pipe(unzipper.Parse())
  .on('entry', function (entry) {
    var fileName = entry.path;
	console.log(fileName)
    if (fileName.endsWith('.svg')) {
		return false
    
    } else {
	  return true
    }
  });

Thanks for your support
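A hedged sketch of one approach: returning a value from the entry handler has no effect; instead, pipe the entries you want and autodrain the rest (here assuming .svg files are to be skipped, with an illustrative output path):

fs.createReadStream('backup.zip')
  .pipe(unzipper.Parse())
  .on('entry', function (entry) {
    if (entry.path.endsWith('.svg')) {
      entry.autodrain(); // skip .svg entries
    } else {
      entry.pipe(fs.createWriteStream('output/' + entry.path.split('/').pop()));
    }
  });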

If the zip file is under 1 KB, the close event is emitted before extraction completes when using the Extract function

Hello:
I found an issue: if the zip file is under 1 KB, the close event will emit before the extraction is complete when using the Extract function. For example:
fs.createReadStream(packagePath).pipe(unzip.Extract({path: extractPath}).on('close', () => {}))

My workaround is to append a 1 KB filler file when creating the zip, so that the extraction then completes correctly. For example:
const fillBuffer = Buffer.alloc(1024);
archive.append(fillBuffer, {name: '_ignore'})
Please resolve this issue quickly, thank you!
