
loopback-component-storage's Introduction

LoopBack Storage Component

⚠️ LoopBack 3 is in Maintenance LTS mode, only critical bugs and critical security fixes will be provided. (See Module Long Term Support Policy below.)

We urge all LoopBack 3 users to migrate their applications to LoopBack 4 as soon as possible. Refer to our Migration Guide for more information on how to upgrade.

Overview

The LoopBack storage component provides Node.js and REST APIs to manage binary file contents using pluggable storage providers, such as local file systems, Amazon S3, or Rackspace Cloud Files. It uses pkgcloud to support cloud-based storage services, including:

  • Amazon
  • Azure
  • Google Cloud
  • OpenStack
  • Rackspace

Please see the Storage Service Documentation.

For more details on the architecture of the module, please see the introduction section of the blog post.

Examples

See https://github.com/strongloop/loopback-example-storage.

Module Long Term Support Policy

This module adopts the Module Long Term Support (LTS) policy, with the following End Of Life (EOL) dates:

Version Status Published EOL
3.x Maintenance LTS Dec 2016 Dec 2020

Learn more about our LTS plan in docs.

loopback-component-storage's People

Contributors

0candy, agnes512, amir-61, bajtos, cgole, crandmck, dhmlau, diegoazh, hgouveia, jury89, kallenboone, loay, nabdelgadir, qard, raymondfeng, richardpringle, rmg, sam-github, sanosom, seriousben, siddhipai, simonhoibm, smehrbrodt, superkhau, syntheticgoo, timosaikkonen, timowolf, tonysoft, virkt25, yorkie


loopback-component-storage's Issues

"TypeError: Cannot read property 'statusCode' of undefined"

The error "TypeError: Cannot read property 'statusCode' of undefined" appears only when uploading a file larger than 4 MB while passing a callback to uploadStream.

var s3 = new StorageService({
  provider: 'amazon',
  key: providers.amazon.key,
  keyId: providers.amazon.keyId
});

var fs = require('fs');
var stream = s3.uploadStream('uniq_container_name', 'file_bigger_than_4mo', function() {
    console.log('upload finish')
});
fs.createReadStream('/home/user/pictures/file_bigger_than_4mo.jpg').pipe(stream);

The callback is needed because stream.on('finish') is not triggered with amazon adapter.

Amazon S3 REST services not working.

Hi,
This issue is related to #16.
I am trying to upload to amazon S3 using REST API. My keys have full admin access.
I am able to get files but any kind of post is not working.

I tried running associated example. The GET method is working but any other operation is not working.

The upload response returns 200, but nothing is uploaded.
image
Not only does the `POST` response include the uploaded file, but a subsequent GET call returns a list of files that is missing the earlier uploads.

image

image

Kindly advise.

** Update **
I did some research and found that files without spaces in their names are in fact uploaded, but in the provided example a GET request issued immediately after the POST does not include them in the response. If the page is refreshed, the correct response is returned.

So basically this is where we are stuck, apart from dynamic container creation.
IMHO the loopback-storage REST services are not behaving as documented. I should be able to upload files using Postman, but I cannot. So either I am missing something, or the documentation is.

Kindly guide me accordingly. Thanks.

cannot read property 'storage' of undefined

When running my app I get the error message in the title. Any hints? Here is my package.json file:

"dependencies": {
  "pkgcloud": "~0.9.4",
  "async": "~0.2.10"
},
"devDependencies": {
  "express": "~3.4.0",
  "loopback": "1.x.x",
  "formidable": "~1.0.14",
  "mocha": "~1.18.2",
  "supertest": "~0.10.0",
  "mkdirp": "~0.3.5"
},

How to create a custom remote method for uploading a file?

I have created the container model as described in the examples. I now have the endpoints which allow me to upload files to a container.

I have a user model. I want to create a remote method api/users/uploadDisplayPic, using the container model, which will allow users to upload JPEGs as their display pics. How do I write such a remote method?

Any pointers would be helpful.
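
A minimal hedged sketch of such a remote method, assuming the storage model is registered under the name `container` and exposes the connector's `upload(req, res, options, cb)` handler; the method name `uploadDisplayPic` and the `display-pics` container are illustrative:

```javascript
// Hedged sketch: wire a remote method on User that delegates to the
// storage container's upload handler. Model name 'container', method
// name, and container name are assumptions, not the library's API.
function registerUploadDisplayPic(User) {
  User.uploadDisplayPic = function(ctx, cb) {
    // Delegate to the model created from the storage datasource.
    var Container = User.app.models.container;
    Container.upload(ctx.req, ctx.res, {container: 'display-pics'},
      function(err, fileObj) {
        if (err) return cb(err);
        cb(null, fileObj);
      });
  };

  User.remoteMethod('uploadDisplayPic', {
    accepts: {arg: 'ctx', type: 'object', http: {source: 'context'}},
    returns: {arg: 'fileObject', type: 'object', root: true},
    http: {verb: 'post', path: '/uploadDisplayPic'}
  });
}
```

With this in a model script, POST /api/users/uploadDisplayPic would forward the multipart body to the storage provider.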

afterRemote hook not called on `download`

I receive no console output with the following lines:

Container.afterRemote('download',
    function(ctx, affectedModelInstance, next) {
        console.log('Downloaded');
        next();
    }
);

I'm trying to encrypt files before sending and decrypt afterwards.

Not Working with Angular 1.3.x

The Example-2.0 does not work with Angular version 1.3.0 and above. Looks like (at least in Angular v1.3.6) the defaultHttpResponseTransform function has been updated to accept a function "headers" as a 2nd parameter:

function defaultHttpResponseTransform(data, headers) {
  if (isString(data)) {
    // strip json vulnerability protection prefix
    data = data.replace(JSON_PROTECTION_PREFIX, '');
    var contentType = headers('Content-Type');
    if ((contentType && contentType.indexOf(APPLICATION_JSON) === 0 && data.trim()) ||
        (JSON_START.test(data) && JSON_END.test(data))) {
      data = fromJson(data);
    }
  }
  return data;
}

However, it's called inside _transformResponse function in the angular-file-upload.js file without the 2nd parameter:

_transformResponse: function (response) {
  $http.defaults.transformResponse.forEach(function (transformFn) {
    response = transformFn(response);
  });
  return response;
}

Am I doing something wrong here or the example should be adjusted in some way?

Amazon S3 doesn't work

Hi, I'm playing around with example 2.0 of this nice component and trying AWS S3. I put in the right keys, but when I uploaded files from the browser, I got this error:

/Users/fostery/dev-playground/loopback/loopback-component-storage-master/example-2.0/node_modules/loopback/node_modules/continuation-local-storage/context.js:78
throw exception;
^
TypeError: Cannot call method 'end' of undefined
at ChunkedStream. (/Users/fostery/dev-playground/loopback/loopback-component-storage-master/example-2.0/node_modules/loopback-component-storage/node_modules/pkgcloud/lib/pkgcloud/amazon/storage/client/files.js:200:15)
at ChunkedStream.emit (events.js:92:17)
at ChunkedStream.end (/Users/fostery/dev-playground/loopback/loopback-component-storage-master/example-2.0/node_modules/loopback-component-storage/node_modules/pkgcloud/lib/pkgcloud/amazon/storage/utils.js:71:8)
at Stream. (/Users/fostery/dev-playground/loopback/loopback-component-storage-master/example-2.0/node_modules/loopback-component-storage/lib/storage-handler.js:93:14)
at Stream.emit (events.js:117:20)
at MultipartParser.parser.onPartEnd (/Users/fostery/dev-playground/loopback/loopback-component-storage-master/example-2.0/node_modules/loopback-component-storage/node_modules/formidable/lib/incoming_form.js:382:14)
at callback (/Users/fostery/dev-playground/loopback/loopback-component-storage-master/example-2.0/node_modules/loopback-component-storage/node_modules/formidable/lib/multipart_parser.js:102:31)
at MultipartParser.write (/Users/fostery/dev-playground/loopback/loopback-component-storage-master/example-2.0/node_modules/loopback-component-storage/node_modules/formidable/lib/multipart_parser.js:267:15)
at IncomingForm.write (/Users/fostery/dev-playground/loopback/loopback-component-storage-master/example-2.0/node_modules/loopback-component-storage/node_modules/formidable/lib/incoming_form.js:157:34)
at IncomingMessage. (/Users/fostery/dev-playground/loopback/loopback-component-storage-master/example-2.0/node_modules/loopback-component-storage/node_modules/formidable/lib/incoming_form.js:123:12)

Any ideas? Do I miss anything? Thanks!

POST file to a nonexistent container throws an uncaught exception and crashes my Node.js server

Posting a file to a nonexistent container will throw an uncaught exception and crash your server.

ie : POST localhost:3000/test/api/storage/90/upload

crashes the server with this exception :
events.js:72
throw er; // Unhandled 'error' event
^
Error: ENOENT, open 'C:......\storage\logos\sites\90\myfile.jpg'

The API should create the container OR at least respond with a 404 so that we can handle it.

Thanks

Getting notified when metadata is removed so you can remove the file from the provider

I've got some things set up to store metadata ("uploadedFile") for an uploaded file, with it storing the container and filename in the db (this is associated with a user).

However, I can't figure out a clean way to remove the file from the provider (filesystem or cloud service) after you delete the record for uploadedFile in the db.

I got something working using the before/afterRemote hooks, but it seems to require setting some of the metadata on the request in beforeDelete so that you can get it back after the delete occurs and afterDelete is called, which feels overly complicated.

I tried using afterDestroy but it doesn't look like much is implemented for that yet - or at least I couldn't get it to work.

What is the best way to do this kind of thing with loopback?
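
One hedged approach, assuming a metadata model like `UploadedFile` with `container` and `name` properties (both names illustrative): use operation hooks rather than the remote hooks, stashing the doomed rows in `ctx.hookState` so the "after delete" observer can clean up the stored files:

```javascript
// Hedged sketch: capture container/name in a "before delete" operation
// hook, then remove the stored file once the row is gone. Model and
// property names are assumptions.
function attachCleanup(UploadedFile, storageContainer) {
  UploadedFile.observe('before delete', function(ctx, next) {
    // Fetch the rows about to be deleted while we can still read them.
    UploadedFile.find({where: ctx.where}, function(err, rows) {
      if (err) return next(err);
      ctx.hookState.doomedFiles = rows.map(function(r) {
        return {container: r.container, name: r.name};
      });
      next();
    });
  });

  UploadedFile.observe('after delete', function(ctx, next) {
    var files = ctx.hookState.doomedFiles || [];
    files.forEach(function(f) {
      storageContainer.removeFile(f.container, f.name, function(err) {
        if (err) console.error('failed to remove %s/%s', f.container, f.name, err);
      });
    });
    next();
  });
}
```

This avoids smuggling state through the HTTP request object, at the cost of an extra find before each delete.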

Upload local file

For example, I have some local file or URL.

I want to put it in the container. Is there any way to do it from a script?

something like

app.models.container.save('../filepath.jpg', 'container1' , function (err, data) {});

?

And similar question. How can I move files between containers?

Thanks

How can I use authentication with storage service?

I'd like to use authentication with the LoopBack storage service so that users can only upload to or delete files in their own folders. I cannot find any documentation on that, so please give me some hints.
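
There is no built-in per-user folder restriction that I know of; one hedged sketch (the method list and the convention that each user's container is named after their user id are assumptions) is a `beforeRemote` guard on the container model:

```javascript
// Hedged sketch: deny access unless the requested container matches the
// authenticated user's id, so each user can only touch "their" folder.
// The one-container-per-user convention is an assumption.
function restrictToOwnContainer(Container) {
  ['upload', 'download', 'removeFile', 'getFiles'].forEach(function(method) {
    Container.beforeRemote(method, function(ctx, unused, next) {
      var token = ctx.req.accessToken;       // set by LoopBack auth
      var requested = ctx.req.params.container;
      if (!token || String(token.userId) !== requested) {
        var err = new Error('Not allowed');
        err.statusCode = 403;
        return next(err);
      }
      next();
    });
  });
}
```

Combined with `app.enableAuth()` and standard ACLs on the container model, this rejects any request whose container name differs from the caller's user id.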

Where to retrieve filename in a hook

Hi,

I placed this code in my server.js

// -- Add your pre-processing middleware here --
// SETTING UP AMAZON
var amazon = loopback.createDataSource({
  connector: require('loopback-component-storage'),
  provider: 'amazon',
  key: '_',
  keyId: '_'
});

var container = amazon.createModel('container');

container.afterRemote('upload', function(ctx, inst, next) {
  console.log(ctx);
  next();
});

app.model(container);

I am capturing an event, but I can't find what is the filename of my upload.
in which object/path can I capture this information?

Thanks!
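
For what it's worth, the uploaded file descriptors should be reachable via `ctx.result` in the `afterRemote('upload')` hook. A hedged helper, assuming the nested `{result: {files: {...}}}` shape shown by the REST API (it also tolerates the un-nested form):

```javascript
// Hedged helper: dig file names out of an upload result. The exact
// nesting is an assumption based on the REST response shape.
function uploadedFileNames(result) {
  var payload = (result && result.result) || result || {};
  var files = payload.files || {};
  var names = [];
  Object.keys(files).forEach(function(field) {
    if (Array.isArray(files[field])) {
      files[field].forEach(function(f) { names.push(f.name); });
    }
  });
  return names;
}
```

Then `container.afterRemote('upload', function(ctx, inst, next) { console.log(uploadedFileNames(ctx.result)); next(); });` would log the names.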

GET unknown file/container returns 500 instead of 404

GET /containers/does-not-exist

{
  "error": {
    "name": "Error",
    "status": 500,
    "message": "ENOENT, stat '/Users/bajtos/src/loopback/android/test-server/storage/does-not-exist'",
    "errno": 34,
    "code": "ENOENT",
    "path": "/Users/bajtos/src/loopback/android/test-server/storage/does-not-exist",
    "stack": "Error: ENOENT, stat '/Users/bajtos/src/loopback/android/test-server/storage/does-not-exist'"
  }
}

Simplify response of "upload"

The current response is way too complex:

{
  "result": {
    "files": {
      "{form-parameter-name}": [
        {
          "container": "album1",
          "name": "test.jpg",
          "type": "image/jpeg"
        }
      ],
      "etc... more files"
    },
    "fields": {}
  }
}

This is the response I would expect:

{
  "{form-parameter-name}": {
    "container": "album1",
    "name": "test.jpg",
    "type": "image/jpeg"
  },
  "etc... more files"
}

slc loopback:datasource misconfigures connector in datasources.json

When using slc loopback:datasource these docs say:

At the prompt "Enter the connector name without the loopback-connector- prefix," enter storage.

This creates the following in datasources.json:

"localFS": {
    "name": "localFS",
    "connector": "storage"
}

However, when testing the API, one gets:

loopback "Object #<Storage> has no method 'all'"

This is fixed, based on all of the examples in this repository, by changing datasources.json to:

"localFS": {
    "name": "localFS",
    "connector": "loopback-component-storage"
}

Processing hook or loopback-component-image

It would be good to have a processing hook for file manipulations, for example image resizing. Another idea is a separate component, loopback-component-image, which could be used by the model created by the developer. This component would use loopback-component-storage for saving files to different storage services.

wrapper & several storages

Hi,

I am quite new to loopback so please don't be afraid :)

I want to upload files into different storages and to extract metadata in order to insert a record into a database.

I saw post #11, but it is not clear to me from the answer how to link the database with the storage data source. Besides, it does not seem to handle multiple storages.

I was thinking of doing a kind of wrapper:

  • Create a service projects/:projectId/upload
  • When the service is called with a file: extract the metadata and create the record. Also dynamically create the storage and call the upload method with the file content.

How does it sound? Any help appreciated :)
Best,
Thibaut

Files should be written to disk before calling `afterRemote` hook

I'm using the afterRemote hook to create image thumbnails after an image has been uploaded. When I want to access the uploaded file in that hook, the image is not yet completely written to disk, so I can't create thumbnails without errors.

A workaround is to copy the file and operate on the copy:
fs.createReadStream(originalfile).pipe(fs.createWriteStream(copyfile));
This works, though I'm not sure why.

However, loopback should make sure that the file is written to disk before calling the afterRemote hook.

Question: How do I get the root folder value when the datasource is programmatically set?

In my server.js that starts the express / loopback server, I am creating a loopback storage component datasource and model like this:

var ds = loopback.createDataSource({
    connector: require('loopback-component-storage'),
    provider: 'filesystem',
    root: '/var/nodefiles',
    name: 'f250server'
});

var container = ds.createModel('container');

app.model(container);

In another separate script, I want to get the "root" folder value that was set, in order to utilize exec() with a file that was uploaded.
I cannot seem to find it. When the root folder value is set in Datasources.json, it can be found like this:

var server = require('../../server/server'); // To access models and database
var async = require('async'); // Allows multiple asynchronous calls in a row or parallel
var exec = require('child_process').exec;


console.log(server.models.container.app.dataSources.f250server.settings.root); // Displays datasource f250server's root folder value that was set in Datasources.json
process.exit(0);

When it is set programmatically, it is not there, but I suspect it is elsewhere...
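
A hedged workaround, using the names from the snippet above: register the programmatically created datasource on `app.dataSources` yourself, so the separate script can read `settings.root` back the same way it would for a Datasources.json entry (whether `createDataSource` copies the config onto `ds.settings` in your LoopBack version should be verified):

```javascript
// Hedged sketch: attach the code-created datasource under the same name
// boot would use, so other scripts can find settings.root. The connector
// is passed in (it would be require('loopback-component-storage')).
function registerStorage(app, loopback, storageConnector, root) {
  var ds = loopback.createDataSource({
    connector: storageConnector,
    provider: 'filesystem',
    root: root,
    name: 'f250server'
  });
  app.dataSources = app.dataSources || {};
  app.dataSources.f250server = ds; // now discoverable by name
  return ds;
}

// In the other script, the same lookup as for a JSON-configured source:
// var root = server.dataSources.f250server.settings.root;
```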

filesystem provider should trigger "success" event

The S3 storage component of pkgcloud wraps the upload stream into a proxy and forwards the "uploaded" event as "success" as well as the "data" and "error" events.
The filesystem provider that comes with loopback-component-storage returns the stream from fs.createWriteStream, which emits the usual filesystem events. That means it only emits "finish", which fires too early for S3 storage.

This means that at the moment it is not possible to write a truly provider independent upload code.

Access Denied

Hi, I am trying to add loopback-storage-service to my application. When I try to list all the containers, it gives me the following error:

error

I am using the amazon provider and supplied the correct key and keyId. Please help me resolve this problem; thanks in advance.

File rename - settings by configuration file

Can anybody point me to examples of creating a file rename service via configuration?

My scenario is that I would like to rename files on the local filesystem for my tests and on Amazon for production.
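
One hedged sketch: JSON config cannot hold a function, but the connector consults a `getFilename(fileInfo, req, res)` setting when naming uploads, so the rename logic can be attached from a boot script and switched per environment. Verify that your version of loopback-component-storage supports this setting, and treat the attachment point below as an assumption:

```javascript
// Hedged sketch: build an environment-aware rename function for the
// connector's getFilename setting. The prefixing scheme is illustrative.
function makeRenamer(env) {
  return function getFilename(fileInfo, req, res) {
    var safe = fileInfo.name.replace(/\s+/g, '_');
    // keep production names clean; prefix everything else for easy cleanup
    return env === 'production' ? safe : env + '_' + safe;
  };
}

// server/boot/storage-rename.js (attachment point is an assumption):
// module.exports = function(app) {
//   app.dataSources.storage.settings.getFilename =
//     makeRenamer(process.env.NODE_ENV || 'development');
// };
```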

Failed to apply acl header for amazon s3 upload to make it public- read using rest api

Failed to apply an ACL header for an Amazon S3 upload to make it public-read using a REST API POST call.

In an earlier version of the module (loopback-component-storage 1.0.5), I made the following change to make uploads public-read:

var writer = provider.upload({container: container, remote: part.filename, headers: {'x-amz-acl': 'public-read'}});

/lib/storage-handler.js

--- a/lib/storage-handler.js
+++ b/lib/storage-handler.js
@@ -57,7 +57,7 @@ exports.upload = function (provider, req, res, container, cb) {
   if ('content-type' in part.headers) {
     headers['content-type'] = part.headers['content-type'];
   }
-  var writer = provider.upload({container: container, remote: part.filename});
+  var writer = provider.upload({container: container, remote: part.filename, headers: {'x-amz-acl': 'public-read'}});

In the new version of the module, how can I set the ACL to public-read?

Help with Amazon REST API

Greetings,

I'm having serious trouble getting the Amazon storage service connector working with an auto-generated REST API. Following the example at http://docs.strongloop.com/display/DOC/Storage+service, I see no way of making this work, whereas the example code in this repo only uses the storage service through the Model API. Also, I am confused by the lib/ directory in the base of this repo; the docs don't seem to mention these requirements for getting the storage service running.

Any thoughts? Amazing work with loopback btw ~

Got stuck when uploading a file

I added the storage service feature to my app like this: https://gist.github.com/chocstarfish/10224682,
and used the example code at https://github.com/strongloop/loopback-storage-service/tree/master/example to test it.

The problem is when I upload a file, the request is always pending and after a while it throws an error like this:

http://localhost:3000/api/containers/container1/upload net::ERR_EMPTY_RESPONSE

image

and the back-end also logs an error:

Error: Request aborted

I think I am using the latest npm packages, so what is the problem?

POST existing container returns 500

An attempt to create a container with a name that already exists fails with 500. The user should get a different error, most likely a validation error "name must be unique".

Error in POST /containers: Error: EEXIST, mkdir '/Users/bajtos/src/loopback/android/test-server/storage/a-container-1392317969686'

Custom metadata

Loving LoopBack and the storage service. Any appetite for an accompanying custom metadata storage option? I'm finding that in order to group files together or apply any type of filter on the files for application use, I have to store a separate reference to the file paths in a data source. It would be amazingly awesome if I could include tag(s) with the upload.

Potential benefits

  1. Ability to query uploaded files by tag value. (e.g. where tags in ["2014", "org", "gallery"])
  2. Ability to query uploads by implied meta (e.g. where type == 'jpeg', where size > 10000, etc).
  3. I imagine that for the filesystem provider, getFiles is just an 'ls' on the directory. A db listing could be much more performant than file I/O on the app server host.

finish event on stream upload with the amazon adapter is not triggered

var s3 = new StorageService({
  provider: 'amazon',
  key: providers.amazon.key,
  keyId: providers.amazon.keyId
});

var fs = require('fs');
var stream = s3.uploadStream('uniq_container_name', 'file.jpg');
fs.createReadStream('/home/user/pictures/file.jpg').pipe(stream);

stream.on('finish', function() {
    console.log('upload finish');
});
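
A hedged workaround based on the observation in the "filesystem provider should trigger 'success' event" issue above: listen for both completion events, so the same code works with the Amazon and filesystem providers (function names are illustrative):

```javascript
// Hedged workaround: pkgcloud's Amazon client reportedly emits 'success'
// rather than 'finish' when the upload completes; listen for both and
// make sure the callback only fires once.
function onUploadDone(stream, cb) {
  var called = false;
  function once(err) {
    if (called) return;
    called = true;
    cb(err || null);
  }
  stream.on('success', function() { once(); }); // amazon/pkgcloud
  stream.on('finish', function() { once(); });  // filesystem provider
  stream.on('error', once);
}

// usage:
// var stream = s3.uploadStream('uniq_container_name', 'file.jpg');
// onUploadDone(stream, function(err) { console.log('upload finish', err); });
// fs.createReadStream('/home/user/pictures/file.jpg').pipe(stream);
```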

Does not work when authentication is enabled

When authentication is enabled via app.enableAuth, remotable methods are not called in the same tick as the HTTP request handler; they are called later.

storage-handler.upload calls formidable to parse the request and get the uploaded files. Because this call happens on a later tick, formidable misses some of the first data events and starts parsing from the middle of the request body.

A standalone script reproducing the issue: bug.js

When app.enableAuth() is removed, the upload works.

Error reported by formidable:

parser error, 32 of 36 bytes parsed

This is the body received by formidable:

<Buffer
 2d 2d 5f 57 63 69 72 6b 70 77 58 38 6d 6d 54 71 55 34 
 4d 70 38 7a 53 73 54 73 73 36 59 76 66 65 2d 2d 0d 0a>

Stringwise:

--_WcirkpwX8mmTqU4Mp8zSsTss6Yvfe--

I have discovered this issue while working on the Android SDK. The android-async-http client sends:

  • three packets (header, body, close-delimiter) when sending a stream. Since the body arrives later than the header, formidable has a chance to get all data.
  • two packets (header+body, close-delimiter) when sending a file. Formidable receives only the close-delimiter.

/to: @raymondfeng

Controlling filename used for upload based on user info

I was able to tweak this so that I could put things in a place specific to an authenticated user (via Passport social login with ensureLoggedIn) by modifying the storage-handler.js file. Not sure if it's the best way, though, so I'm adding this note for comparison when you get around to adding something like this.

Primarily, at the top of exports.upload, check whether req.user.getNewFilenameForUser exists (and req.user, of course), and if so call it to get the filename for the upload. The strategy for creating new filenames is arbitrary; I'm using UUIDs and the user id. This function can be added to the user via the suggestions in the LoopBack 2.0 docs by creating a user.js file (http://docs.strongloop.com/display/LB/Migrating+existing+apps+to+version+2.0, the section on "Models"). The resulting filename is then used further below in the provider.upload(...) call. When the file has finished uploading, the method user.fileWasUploaded(info) is called (if it exists) so that the User model can save info about the upload to the db.

There's other grunt work to do on managing uploads, but this is a first step. Curious to see what you end up doing.

FileSystemProvider: Invalid name

I want to upload a whole directory of files, so I changed the input to add webkitdirectory:
<input ng-file-select type="file" webkitdirectory multiple>
Afterwards, I could not upload the folder's files; I get the error: FileSystemProvider: Invalid name

Sub container?!

I would like to split the application into a couple of modules, so containers should have subcontainers.
Is it possible to do?

.../container1/userId/....
../container2/bookId/...

I don't feel that pushing everything into one folder is a good solution. For example, what if I have 100,000 containers?

Thanks

Cannot call method 'forEach' of undefined

Hi !
When I call api/storages/:containers/files, where ":containers" is a folder that doesn't exist, the server crashes with this error:

node_modules/loopback-component-storage/lib/providers/filesystem/index.js:204
entries.forEach(function (f) {
^
TypeError: Cannot call method 'forEach' of undefined

Enjoy ;-)

Error 500 on POST, and GET/POST methods listed twice

...,
"storage": {
  "name": "storage",
  "connector": "loopback-component-storage",
  "provider": "filesystem",
  "root": "storage"
}
...

I set up the datasource using slc and JSON, following the steps at http://docs.strongloop.com/display/public/LB/Storage+service. Then, through LoopBack Arc, I defined a model named "bucket". The problem is that when I look through the API Explorer, the GET and POST methods are listed twice.

selection_047

And when I try to create a new bucket I get an error 500.
