
s3rver's Issues

Doesn't seem to show files when using `ls` command from aws-cli

I made a folder with some subfolders and files, but none of my files seem to show up when I use the aws-cli. I am on macOS, running s3rver programmatically from within gulp.

I have a dir structure like:

integration
  /testbucket1
    - example.txt
  /testbucket2
    - example2.txt

But when I run this:

$ aws s3 ls s3:// --endpoint-url http://localhost:4569 --recursive
2017-02-21 15:29:54 testbucket1
2017-02-22 09:46:10 testbucket2

The files are not being picked up by recursive. Also if I ls into a specific bucket I get no files:

$ aws s3 ls s3://testbucket1 --endpoint-url http://localhost:4569 --recursive

Output from s3rver from above commands:

[09:52:26] Starting 'startServer'...
info: Fetched 2 buckets
info: GET / 200 478 - 10.822 ms
info: Fetched bucket "testbucket1" with options [object Object]
info: Found 0 objects for bucket "testbucket1"
info: GET /testbucket1 200 222 - 2.326 ms

No results. What am I doing wrong? Why isn't it finding files as objects?

What I want to do is use this to serve up some local files for integration testing and point the endpoint at it locally. Any help would be appreciated.
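
For reference, the programmatic setup being described looks roughly like the sketch below (illustrative only; the directory, port, and task name are placeholders, and the constructor options follow the ones shown in the port-0 issue further down this page):

// Rough sketch of the kind of gulp task in question (not the reporter's actual code).
const gulp = require("gulp");
const S3rver = require("s3rver");

gulp.task("startServer", done => {
  new S3rver({
    port: 4569,
    hostname: "localhost",
    silent: false,
    directory: "./integration" // contains testbucket1/ and testbucket2/
  }).run(done); // run(callback) starts listening and passes any startup error to the callback
});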

BucketName is empty

<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://doc.s3.amazonaws.com/2006-03-01">
  <IsTruncated>false</IsTruncated>
  <Marker/>
  <Name/>
  <Prefix/>
  <MaxKeys>1000</MaxKeys>

When using the (empty) bucket name from the ListBucketResult, the Java SDK throws:

java.lang.IllegalArgumentException: BucketName cannot be empty
	at com.amazonaws.util.ValidationUtils.assertStringNotEmpty(ValidationUtils.java:89)
	at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1374)
	at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1271)

Regression from https://github.com/jamhall/s3rver/pull/100/files?

Support actions via presigned urls

You can create presigned URLs with the s3.getSignedUrl method.

A URL looks like this: http://localhost:3001/file-uploads-temporary/de456c2c-0f93-4df7-ad82-0c2943e447f0.jpeg?AWSAccessKeyId=123&Content-Type=image%2Fjpeg&Expires=1519727198&Signature=L0RMhVZCyfrp37sDZbVfSLCuOSU%3D&x-amz-acl=private

Currently the metadata (e.g. the Content-Type=image%2Fjpeg query parameter) is not added to the .dummys3_metadata file.

I would suggest that we check for the Content-Type param and, if present, simply use it as the value for the object's content-type.

The current workaround is to add the header to your request, e.g.:

"use strict";
const fetch = require("node-fetch");
const S3 = require("aws-sdk");

const s3 = new S3(config);

const s3Params = {
  Bucket: "some-bucket-name",
  Key: "my-file.jpeg",
  ContentType: "image/jpeg",
  ACL: "private"
};

s3
  .getSignedUrl("putObject", s3Params)
  .promise()
  .then(uploadUrl => {
    const readStream = fs.createReadStream("my-local-image.jpeg");
    return fetch(uploadUrl, {
      method: `PUT`,
      body: readStream,
      // On AWS you do not have to specify this. The Header is taken from the presigned url
      headers: {
        "Content-Type": "image/jpeg"
      }
    });
  });
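
On the s3rver side, the proposed change could look something like the following (purely a sketch, not the actual s3rver code; it assumes an Express-style handler where the presigned query string is available on req.query):

// Hypothetical helper for resolving the content-type to store in .dummys3_metadata.
// The precedence shown is the suggestion from this issue, not current behavior.
function resolveContentType(req) {
  return (
    req.headers["content-type"] || // explicit header, as in the workaround above
    req.query["Content-Type"]      // signed query parameter produced by getSignedUrl
  );
}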

What do you think? @specialkk @leontastic?

If you agree, I'll put together a PR for this.

XML root tag on bucket query is wrong - it should be ListBucketResult instead of ListAllMyBucketsResult

The result of this bug is that you can't use AmazonS3Client.listObjects: when parsing the response, it expects ListBucketResult:
https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/model/transform/XmlResponsesSaxParser.java#L498

Also, you can see that the fake-s3 implementation responds with "ListBucketResult" as expected:
https://github.com/jubos/fake-s3/blob/master/lib/fakes3/xml_adapter.rb#L153

Versioning

Although this feature doesn't currently exist with fakes3, is it possible to implement bucket versioning?

Thanks

Cleaning up the directory after s3rver.close()

I am using s3rver in my mocha tests; it's great so far!

I have one recommendation, though:
When calling s3Instance.close(), the directory passed to the constructor should be cleaned up and emptied. It's more convenient that way. ;)
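
Until that happens, a workaround in the test suite could look like this (a sketch, assuming mocha plus the rimraf package, and that close() accepts a Node-style callback; the directory path is a placeholder):

// Hypothetical mocha teardown: stop the fake S3 and wipe its data directory.
const rimraf = require("rimraf");

after(done => {
  s3Instance.close(() => {
    rimraf("/tmp/s3rver-test-data", done); // remove everything s3rver wrote during the run
  });
});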

Can I disable CORS?

As of v2.1.0, the behavior of the cors argument has changed.
In previous versions, I could disable this feature by setting cors to false.
But in the current version, when I set cors to null (or false), an error occurs with the following message:

TypeError: Cannot read property 'CORSRule' of undefined
    at /Users/argon/workspace/managed/s3rver/lib/cors.js:48:42
    at Layer.handle [as handle_request] (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/layer.js:95:5)
    at trim_prefix (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:317:13)
    at /Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:284:7
    at Function.process_params (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:335:12)
    at next (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:275:10)
    at /Users/argon/workspace/managed/s3rver/lib/app.js:45:5
    at Layer.handle [as handle_request] (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/layer.js:95:5)
    at trim_prefix (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:317:13)
    at /Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:284:7
    at Function.process_params (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:335:12)
    at next (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:275:10)
    at logger (/Users/argon/workspace/managed/s3rver/node_modules/morgan/index.js:144:5)
    at Layer.handle [as handle_request] (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/layer.js:95:5)
    at trim_prefix (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:317:13)
    at /Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:284:7
    at Function.process_params (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:335:12)
    at next (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:275:10)
    at expressInit (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/middleware/init.js:40:5)
    at Layer.handle [as handle_request] (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/layer.js:95:5)
    at trim_prefix (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:317:13)
    at /Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:284:7
    at Function.process_params (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:335:12)
    at next (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:275:10)
    at query (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/middleware/query.js:45:5)
    at Layer.handle [as handle_request] (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/layer.js:95:5)
    at trim_prefix (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:317:13)
    at /Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:284:7
    at Function.process_params (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:335:12)
    at next (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:275:10)
    at Function.handle (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:174:3)
    at Function.handle (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/application.js:174:10)
    at Server.app (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/express.js:39:9)
    at emitTwo (events.js:125:13)
    at Server.emit (events.js:213:7)
    at parserOnIncoming (_http_server.js:602:12)
    at HTTPParser.parserOnHeadersComplete (_http_common.js:116:23)

It seems that CORSConfiguration in cors() in cors.js is not initialized, because the config argument passed to cors evaluates to false.

What should I do?
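
For reference, the configuration being attempted looks something like this sketch (option names follow the constructor example shown elsewhere on this page plus the cors option described above; port and directory are placeholders):

// Illustrative only: the cors: false setting that used to disable CORS handling
// now makes lib/cors.js read CORSRule off an undefined configuration.
const S3rver = require("s3rver");

new S3rver({
  port: 4569,
  hostname: "localhost",
  silent: false,
  directory: "/tmp/s3rver",
  cors: false
}).run(err => {
  if (err) console.error(err);
});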

listObjectsV2 does not consider "Prefix" parameter

When calling this.s3Client.listObjectsV2({Bucket: 'baseBucket', Prefix: 'pending'}) the following is returned:

GET /baseBucket?list-type=2&prefix=pending 200 1763 - 7.539 ms
{ IsTruncated: false,
  Contents: 
   [ { Key: 'harFiles/2',
       LastModified: 2017-03-07T14:17:01.073Z,
       ETag: '"9d6b3ad272abe35c3c6b95a948b000db"',
       Size: 14,
       StorageClass: 'Standard',
       Owner: [Object] },
     { Key: 'pending/1',
       LastModified: 2017-03-07T14:17:01.353Z,
       ETag: '"10ed3a4e7f510696c325d6249c82d69e"',
       Size: 14,
       StorageClass: 'Standard',
       Owner: [Object] },
     { Key: 'recipes/1',
       LastModified: 2017-03-07T14:17:01.073Z,
       ETag: '"d4793fe3394939cf279c23e9045f7afc"',
       Size: 36,
       StorageClass: 'Standard',
       Owner: [Object] },
     { Key: 'recipes/2',
       LastModified: 2017-03-07T14:17:01.333Z,
       ETag: '"d897f47178a9e3a89bf4e8abe0497eed"',
       Size: 39,
       StorageClass: 'Standard',
       Owner: [Object] },
     { Key: 'recipes/5',
       LastModified: 2017-03-07T14:17:01.303Z,
       ETag: '"f3e713eee1619c0eeb29b42a6a4aab8d"',
       Size: 29,
       StorageClass: 'Standard',
       Owner: [Object] } ],
  Prefix: '',
  MaxKeys: 1000,
  CommonPrefixes: [] }

Although I indicated a Prefix of pending, keys that start with harFiles and recipes are also included.

Problem with index and error documents when Dockerizing

I am trying to wrap s3rver in a Docker container so that it can easily be run standalone. A number of other people have done this, as evidenced by all the different GitHub and Docker Hub repos available. However, nobody seems to support --indexDocument and --errorDocument. When I include those params in my builds, s3rver still starts, but all GET calls simply return:

<Error>
<Code>NoSuchBucket</Code>
<Message>The specified bucket does not exist</Message>
<Resource>192.168.99.100</Resource>
<RequestId>1</RequestId>
</Error>

If I remove those params from my ENTRYPOINT and leave --hostname, --port and --directory everything works as expected.

s3rver is a great tool, especially for prototyping S3-hosted SPA apps where the index and error docs provide the infrastructure for routing. I would be happy to submit a pull request for a working Dockerfile if I can figure out how to get the -i and -e params working.

List buckets

Hi, I really like that you have taken fake-s3 and ported it to Node.js; since my main project is already in Node, it saves me from setting up Ruby etc.

Is there any chance of implementing the list buckets operation for this?

Use a mounted SMB drive with s3rver as directory

Hello,

We would like to serve a mounted SMB drive via s3rver, but we get the following error.

Our SMB drive mount:

\\192.168.2.17/smb-www on /var/www type cifs (rw,nosuid,nodev,noexec,relatime,vers=1.0,cache=strict,username=administrator,domain=xxx,uid=0,forceuid,gid=33,forcegid,addr=192.168.2.17,file_mode=0775,dir_mode=0775,iocharset=iso8859-1,nounix,nobrl,noperm,rsize=61440,wsize=16580,actimeo=1,_netdev,user)

Start of s3rver:

s3rver -h 192.168.2.16 -p 8000 -d /var/www/

Error after access:

now listening on host 192.168.2.16 and port 8000
info: [S3rver] GET / 500 1192 - 9.980 ms
Error: ENOENT: no such file or directory, stat '/var/www/SMB-PUBLIC (192.168.2.17) (P) - Verknüpfung.lnk'
    at Object.fs.statSync (fs.js:955:11)
    at Object.statSync (/usr/lib/node_modules/s3rver/node_modules/graceful-fs/polyfills.js:297:22)
    at /usr/lib/node_modules/s3rver/lib/file-store.js:40:21
    at Array.filter (<anonymous>)
    at Object.getBuckets (/usr/lib/node_modules/s3rver/lib/file-store.js:39:35)
    at getBuckets (/usr/lib/node_modules/s3rver/lib/controllers.js:154:31)
    at Layer.handle [as handle_request] (/usr/lib/node_modules/s3rver/node_modules/express/lib/router/layer.js:95:5)
    at next (/usr/lib/node_modules/s3rver/node_modules/express/lib/router/route.js:137:13)
    at Route.dispatch (/usr/lib/node_modules/s3rver/node_modules/express/lib/router/route.js:112:3)
    at Layer.handle [as handle_request] (/usr/lib/node_modules/s3rver/node_modules/express/lib/router/layer.js:95:5)
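
A possible defensive change would be to skip entries that cannot be stat'ed (or that are not directories) instead of letting the exception bubble up. The following is a sketch only, not the actual lib/file-store.js code:

const fs = require("fs");
const path = require("path");

// Hypothetical variant of getBuckets(): treat unreadable directory entries
// (e.g. dangling Windows .lnk shortcuts on an SMB mount) as non-buckets.
function getBuckets(rootDirectory) {
  return fs.readdirSync(rootDirectory).filter(name => {
    try {
      return fs.statSync(path.join(rootDirectory, name)).isDirectory();
    } catch (err) {
      return false;
    }
  });
}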

Write protection

I know it's probably not the goal of the tool,

but I wanted to know whether there is a way to add a small protection to prevent everyone from writing to it (reading shouldn't be a problem).

Thanks for your tool

Issue with `listObjects` in v2.2.1

We've got some integration tests wrapped around our S3 utility, which depend on s3rver, and one of them started failing with the 2.2.1 release. (Pinning the version back to 2.2.0 caused the test to start passing again.)

The test adds three objects to s3, with keys "some/dir/x", "some/dir/y", and "some/dir/z". However, when I send this request:

bucket.listObjects({
    Bucket: bucketName,
    Prefix: "some/dir",
    Delimiter: '/',
    Marker: null
});

The result is an empty array, rather than the expected three results. I added a headObject check to ensure that the data does exist before calling listObjects, so I think the issue must be with the listObjects call itself.

Any idea why this might be suddenly breaking? Thanks!

Listing buckets returns incorrectly formatted XML

Hi!

I use https://github.com/tpyo/amazon-s3-php-class as the REST client for our server to interact with our S3. Listing the buckets returns nothing, and I tracked down the root cause.

I use its listBuckets() function, which sends a GET request to the endpoint and returns the bucket names.

When I fetch from our remote S3, something like this is returned:

stdClass Object ( 
    [error] => 
    [body] => SimpleXMLElement Object ( 
        [Owner] => SimpleXMLElement Object ( 
            [ID] => 
            [DisplayName] =>  
        ) 
        [Buckets] => SimpleXMLElement Object ( 
            [Bucket] => Array ( 
                [0] => SimpleXMLElement Object ( 
                    [Name] =>  
                    [CreationDate] =>  
                ) 
                [1] => SimpleXMLElement Object ( 
                    [Name] =>  
                    [CreationDate] =>  
                ) 
                [2] => SimpleXMLElement Object ( 
                    [Name] =>  
                    [CreationDate] =>  
                ) 
                [3] => SimpleXMLElement Object ( 
                    [Name] =>  
                    [CreationDate] =>  
                ) 
            ) 
        ) 
    ) 
    [headers] => Array ( 
        [date] => 1453650086 
        [type] => application/xml 
    ) 
    [code] => 200 
)

But when I pipe this to s3rver:

stdClass Object ( 
    [error] => 
    [body] => 123 S3rver development 2016-01-24T14:45:57.372Z hahaha 2016-01-24T14:46:07.610Z 
    [headers] => Array ( 
        [date] => 1453649895 
        [type] => application/xml; charset=utf-8 
        [size] => 469 
        [hash] => W/"1d5-hN+ARE1TnyYYv8AGyEv8Wg" 
    ) 
    [code] => 200 
)

It seems that a GET request against real S3 returns a body parsed into an array of SimpleXMLElement objects, but with s3rver it doesn't. Thoughts?

Object Copy does not work

Sorry for the click-bait issue name. Object Copy does work, but read on and see what I mean.

I've been having a discussion over at aws/aws-sdk-js#901 regarding their documentation on s3.copyObject(...). In the same document where x-amz-copy-source is specified, there are conflicting descriptions on how it should be set.

AWS Documentation

The 'Syntax' section of the documentation says:

x-amz-copy-source: /source_bucket/sourceObject

However, in the 'Request Headers' section, the documentation says:

Name:
   x-amz-copy-source
Description:
   The name of the source bucket and key name of the source object, separated by a slash (/).

   Type: String

   Default: None
Required:
   Yes

So, which one should be followed?

  1. '/' + bucketName + '/' + keyName
  2. bucketName + '/' + keyName

What s3rver does

s3rver expects x-amz-copy-source: /source_bucket/sourceObject. See lib/controllers.js#L264-L268.

Without the initial '/', srcBucket = srcObjectParams[1] ends up being the source object's key. Eventually this will fail with:

error: No bucket found for "image.png"
info: PUT /BUCKET_NAME/image.copy.png 404 207 - 60.642 ms

What should we do?

I think that in the meantime we should accept both forms; otherwise integration with other libraries fails. s3fs is an example: see s3fs/lib/s3fs.js#L345 (it does not use an initial '/').
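
A minimal sketch of what "accept both forms" could look like (illustrative, not the actual controllers.js code; it also URL-decodes the header, which relates to the colon-in-path copy issue reported elsewhere on this page):

// Normalize x-amz-copy-source so "/bucket/key" and "bucket/key" resolve identically.
function parseCopySource(header) {
  const normalized = decodeURIComponent(header).replace(/^\//, ""); // drop a leading slash if present
  const [srcBucket, ...keyParts] = normalized.split("/");
  return { srcBucket, srcKey: keyParts.join("/") };
}

// parseCopySource("/source_bucket/sourceObject") and
// parseCopySource("source_bucket/sourceObject") both give
// { srcBucket: "source_bucket", srcKey: "sourceObject" }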

However, I invite you all to participate on the discussion over at aws/aws-sdk-js#901.

Delete bucket fails after deleting all objects in the bucket

If I create a bucket "test" and add the following object keys:

index.html
styles/main.css

Then if I delete all the objects and try to delete the bucket I get the following error:

Unable to delete bucket test: {
"message": "The bucket your tried to delete is not empty",
"code": "BucketNotEmpty",
"region": null,
"time": "2016-04-18T19:27:26.649Z",
"requestId": null,
"extendedRequestId": null,
"statusCode": 409,
"retryable": false,
"retryDelay": 0.46673882752656937
}

However, the bucket is empty if I view it through the browser.

The problem appears to be that the "styles" directory is still present in the data dir even though all keys have been deleted.

If I manually delete the styles directory, then I am able to delete the bucket. So there appears to be an issue cleaning up empty directories once all keys have been deleted.
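
A possible fix would be to prune now-empty parent directories whenever a key is deleted. The following is a sketch only (not the actual s3rver code; path handling is simplified):

const fs = require("fs");
const path = require("path");

// Hypothetical helper: after deleting bucketRoot/keyPath, remove any parent
// directories that became empty, stopping at the bucket root itself.
function pruneEmptyDirs(bucketRoot, keyPath) {
  let dir = path.dirname(path.join(bucketRoot, keyPath));
  while (dir !== bucketRoot && dir.startsWith(bucketRoot)) {
    if (fs.readdirSync(dir).length > 0) break; // something else still lives here
    fs.rmdirSync(dir);
    dir = path.dirname(dir);
  }
}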

Multipart upload

It would be great to accept multipart uploads. Using s3.upload from the AWS SDK attempts a multipart upload if the file is larger than the part-size cutoff (something like 5 megabytes).
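
For reference, this is roughly the call that triggers the multipart behavior in aws-sdk v2 (bucket, key, file name, and credentials are placeholders; the endpoint assumes a local s3rver on port 4569):

const fs = require("fs");
const AWS = require("aws-sdk");

const s3 = new AWS.S3({
  accessKeyId: "dummy",            // placeholder credentials so the SDK can sign requests
  secretAccessKey: "dummy",
  endpoint: "http://localhost:4569",
  s3ForcePathStyle: true
});

// s3.upload() switches to the multipart API once the body exceeds partSize.
s3.upload(
  { Bucket: "test-bucket", Key: "big-file.bin", Body: fs.createReadStream("big-file.bin") },
  { partSize: 5 * 1024 * 1024, queueSize: 1 },
  (err, data) => console.log(err || data.Location)
);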

The service process runs, but the port is not accessible

s3rver runs on a Raspberry Pi 3 with Raspbian (Debian Jessie 8.0), and the service port is accessible from localhost:

$ s3rver -p 8080 -d ~/s3test/
now listening on host localhost and port 8080

Port 8080 is open.

$ sudo netstat -tulpn | grep LISTEN
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      779/node        
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      719/sshd        
tcp6       0      0 :::22                   :::*                    LISTEN      719/sshd   
$ nmap localhost -p 22,8080
...
22/tcp   open  ssh
8080/tcp open  http-proxy
$ telnet localhost 8080
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

But when I execute nmap from a different host in the subnet, the s3rver port is closed:

$ nmap 10.0.0.253 -p 22,8080
...
PORT     STATE  SERVICE
22/tcp   open   ssh
8080/tcp closed http-proxy
$ telnet 10.0.0.253 8080
Trying 10.0.0.253...
telnet: Unable to connect to remote host: Connection refused

I have no firewall running.

Some system details:

$ s3rver --version
1.0.3
$ node --version
v0.10.29
$ npm --version
1.4.21
$ uname -a
Linux raspberrypi 4.4.50-v7+ #970 SMP Mon Feb 20 19:18:29 GMT 2017 armv7l GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID:	Raspbian
Description:	Raspbian GNU/Linux 8.0 (jessie)
Release:	8.0
Codename:	jessie

Support running on any available port (port 0)

I want to instantiate many s3rver instances within my AVA tests, where they will run concurrently, with some in their own process.

The best I have been able to do so far is increment a counter for the port number so the different instances do not conflict with each other. But this is brittle because using a large contiguous range of port numbers increases the chances of conflict with an existing service on a developer's machine.

Ideally, I should be able to just run s3rver on port 0 to dynamically allocate an available port.

const server = new S3rver({
    directory : await mkdirtemp(),
    hostname  : 'localhost',
    port      : 0,
    silent    : true
});
server.run((err, hostname, port) => {
    const endpoint = `http://${hostname}:${port}`;
    // ... create S3 client that connects to endpoint ...
});

I figured it would work, as most servers support that, so I was surprised to see this error:

NetworkingError (Error) {
    address: '127.0.0.1',
    code: 'NetworkingError',
    errno: 'ECONNREFUSED',
    hostname: 'localhost',
    message: 'connect ECONNREFUSED 127.0.0.1:80',
    port: 80,
    region: 'us-east-1',
    retryable: true,
    syscall: 'connect',
    time: Date 2018-03-05 07:54:29 704ms UTC {},
}

After investigation, it seems my AWS client tried to connect on port 80 because s3rver just returns the port as-is instead of the allocated port, and port 0 is invalid when making a request. The expected behavior is that it would use the return value of server.address().
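
A sketch of the suggested behavior (illustrative only, not the actual s3rver implementation; it just shows how the bound port could be reported back via server.address()):

const express = require("express");

// Hypothetical run(): when options.port is 0, report the port the OS actually assigned.
function run(app, options, callback) {
  const httpServer = app.listen(options.port, options.hostname, () => {
    const { port } = httpServer.address(); // the real bound port, even if options.port was 0
    callback(null, options.hostname, port);
  });
  return httpServer;
}

// usage sketch: run(express(), { hostname: "localhost", port: 0 }, (err, host, port) => { ... });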

MaxListenersExceededWarning when running the tests

  S3rver Tests
    ✓ should fetch fetch six buckets
    ✓ should create a bucket with valid domain-style name
    ✓ should fail to create a bucket because of invalid name
    ✓ should fail to create a bucket because of invalid domain-style name
    ✓ should fail to create a bucket because name is too long
(node:29986) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 uncaughtException listeners added. Use emitter.setMaxListeners() to increase limit

Introduced by #96. Possible memory leak?

LastModified isn't in ISO 8601

This is what I get from a GET call on a bucket:

...
<LastModified>Tue, 13 Mar 2018 17:25:04 GMT</LastModified>
...

According to this stack trace from the latest aws-java-sdk-s3 client:

Caused by: java.lang.IllegalArgumentException: Invalid format: "Tue, 13 Mar 2018 15:54:25 GMT"
	at org.joda.time.format.DateTimeParserBucket.doParseMillis(DateTimeParserBucket.java:187)
	at org.joda.time.format.DateTimeFormatter.parseMillis(DateTimeFormatter.java:826)
	at com.amazonaws.util.DateUtils.doParseISO8601Date(DateUtils.java:98)
	at com.amazonaws.util.DateUtils.parseISO8601Date(DateUtils.java:77)
	at com.amazonaws.services.s3.internal.ServiceUtils.parseIso8601Date(ServiceUtils.java:76)
	at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListBucketHandler.doEndElement(XmlResponsesSaxParser.java:703)
	at com.amazonaws.services.s3.model.transform.AbstractHandler.endElement(AbstractHandler.java:52)
	at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:609)
	at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1782)
	at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2967)
	at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602)
	at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112)
	at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505)
	at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:841)
	at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:770)
	at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
	at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
	at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseXmlInputStream(XmlResponsesSaxParser.java:147)

and the attached screenshot of the same failure,

I guessed that the timestamp should be ISO 8601-formatted.
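
For comparison, the value the Java SDK parser expects is the ISO 8601 form, which in JavaScript is just:

// Converting the RFC 1123 date currently emitted to the ISO 8601 form S3 uses:
new Date("Tue, 13 Mar 2018 17:25:04 GMT").toISOString();
// => "2018-03-13T17:25:04.000Z"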

Do you agree?

objects.CommonPrefixes

This is part of the object structure retrieved by listObjects(), similar to objects.Contents, and it lists all keys that match between the prefix and the delimiter. Note that this value only exists if a delimiter is used.

The definition for the CommonPrefixes property is:
Gets the CommonPrefixes property. A response can contain CommonPrefixes only if you specify a delimiter. When you do, CommonPrefixes contains all (if there are any) keys between Prefix and the next occurrence of the string specified by delimiter. In effect, CommonPrefixes lists keys that act like subdirectories in the directory specified by Prefix. For example, if prefix is notes/ and delimiter is a slash (/), in notes/summer/july, the common prefix is notes/summer/.

Source: http://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/T_Amazon_S3_Model_ListObjectsResponse.htm
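
A small example of the behavior being requested, using aws-sdk v2 (bucket name, keys, and credentials are illustrative):

const AWS = require("aws-sdk");

const s3 = new AWS.S3({
  accessKeyId: "dummy",            // placeholder credentials so the SDK can sign requests
  secretAccessKey: "dummy",
  endpoint: "http://localhost:4569",
  s3ForcePathStyle: true
});

// With objects notes/summer/july and notes/summer/august in the bucket,
// a delimiter of "/" should roll them up into a single common prefix.
s3.listObjects(
  { Bucket: "my-bucket", Prefix: "notes/", Delimiter: "/" },
  (err, data) => {
    if (err) return console.error(err);
    console.log(data.CommonPrefixes); // expected: [ { Prefix: "notes/summer/" } ]
  }
);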

This issue relies on issue #9 to support delimiters.

Delimiter stripped from paths

It looks like there is a small difference between s3rver's and AWS's handling of the delimiter parameter. In the AWS response, each Prefix in CommonPrefixes includes the trailing delimiter, but s3rver strips it off. I believe this line should be:
match = match.substr(0, delimiterIndex + 1);

Hash doesn't match error - solution

For the past day I have had a bizarre issue where the hashes never match on PUT requests. I was not sure if it was a problem with the .NET AWS SDK I was using or with the lib itself; anyway, I have found the solution.

I was defaulting my AWS credentials to:

var credentials = new BasicAWSCredentials("foo", "bar");

It turns out that if you do this and don't actually use those credentials on your calls, the call itself will work but you get a hash error. If you instead just put empty strings for your credentials, BasicAWSCredentials("", ""), it all works and everyone is happy :)

Use Koa instead of Express

I've contributed enough to this project to know it decently well and I think it could benefit pretty massively from async/await and Koa's support for processing the response body after middleware is run.

I'm just opening this issue seeking an opinion on potentially including a Babel build pipeline if I were to write for Koa 2. However it looks like Travis is only doing tests for Node >= 6, so Koa 1 would also work fine if we want to avoid Babel. This isn't intended to be just changes for the sake of using newer features; it really should make things a lot more readable and approachable for adding new features such as versioning.

XMLParserError: Non-whitespace before first tag

Creating a bucket using the aws-sdk-js client produces the following error:

XMLParserError: Non-whitespace before first tag.
Line: 0
Column: 1
Char: C

s3rver logs:

info: PUT / 404 13 - 4.628 ms

fake-s3 works fine, though, so I thought it might be a problem with s3rver.
FYI, I'm running the code on Node v5.3.0.
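
For context, a minimal aws-sdk-js createBucket call of the sort described looks roughly like the sketch below (endpoint, bucket name, and credentials are placeholders, not the reporter's exact code). Note that the log above shows PUT / with no bucket in the path, which is what the SDK's default virtual-hosted-style addressing produces; s3ForcePathStyle keeps the bucket name in the path.

const AWS = require("aws-sdk");

const s3 = new AWS.S3({
  accessKeyId: "dummy",            // placeholder credentials so the SDK can sign requests
  secretAccessKey: "dummy",
  endpoint: "http://localhost:4569",
  s3ForcePathStyle: true           // without this, the bucket goes into the Host header instead of the path
});

s3.createBucket({ Bucket: "test-bucket" }, (err, data) => {
  // A non-XML error body from the server surfaces in the SDK as "Non-whitespace before first tag".
  if (err) console.error(err);
  else console.log("created", data.Location);
});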

Copying from bucket to bucket fails when the path contains a colon

There appears to be a bug in the fake S3 server implementation whereby the application is not decoding the HTTP message headers (only the URL) when handling the S3 copy command.
The following script works against the real AWS S3 but fails to copy from bucket to bucket in your fake implementation with this error:

Copying file from s3://bob-test-bucket-1/test-2017-03-09T10:54:24.txt to s3://bob-test-bucket-2/test-2017-03-09T10:54:24.txt
copy failed: s3://bob-test-bucket-1/test-2017-03-09T10:54:24.txt to s3://bob-test-bucket-2/test-2017-03-09T10:54:24.txt An error occurred (NoSuchKey) when calling the CopyObject operation: The specified key does not exist

And here’s the script to test it:

#!/bin/bash

export OVERRIDE_URL="--endpoint-url http://localhost:4569"

echo "THIS IS A TEST" > test-$$.txt

echo "Listing buckets"
aws s3 $OVERRIDE_URL ls
echo

echo "Creating bucket bob-test-bucket-1"
aws s3 $OVERRIDE_URL mb s3://bob-test-bucket-1
echo

echo "Creating bucket bob-test-bucket-2"
aws s3 $OVERRIDE_URL mb s3://bob-test-bucket-2
echo

export SUFFIX=$(date -u "+%Y-%m-%dT%H:%M:%S")

echo "Uploading test data file to s3://bob-test-bucket-1/test-$SUFFIX.txt"
aws s3 $OVERRIDE_URL cp test-$$.txt s3://bob-test-bucket-1/test-$SUFFIX.txt
echo

echo "Copying file from s3://bob-test-bucket-1/test-$SUFFIX.txt to s3://bob-test-bucket-2/test-$SUFFIX.txt"
aws s3 $OVERRIDE_URL cp s3://bob-test-bucket-1/test-$SUFFIX.txt s3://bob-test-bucket-2/test-$SUFFIX.txt
echo

rm test-$$.txt

echo "Done!"

When using the AWS Java SDK to put an object, I get ClientException

Exception in thread "main" com.amazonaws.AmazonClientException: Unable to verify integrity of data upload. Client calculated content hash (contentMD5: Jm1/xmb+/xtOXxs3kouTwg== in base 64) didn't match hash (etag: 67c2ab887738c99b76097e94e2c46293 in hex) calculated by Amazon S3. You may need to delete the data stored in Amazon S3. (metadata.contentMD5: null, md5DigestStream: com.amazonaws.services.s3.internal.MD5DigestCalculatingInputStream@b672aa8, bucketName: hello-world, key: hello-world/file_636120414036520000.txt)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1611)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:3149)
at xxx.common.aws.s3.S3Context.putObjectAsString(S3Context.java:26)
at xxx.hello.MainService.Put(MainService.java:23)
at xxx.hello.HelloWorldApplication.main(HelloWorldApplication.java:17)

Any clue?

listObjects delimiter not implemented?

My project's unit tests fail when trying to list objects using a delimiter value (namely to find objects contained within a folder, but not objects within sub-folders).

As far as I can tell, this feature is only partially implemented. I am able to supply a delimiter option, but it is not actually used.

Project no longer maintained?

I see "Currently looking for maintainers" in the project description. There are 10 pull requests on this repo right now, and it seems like @jamhall is MIA with no activity on Github and no responses to recent PRs and issues.

I want to add support for multipart uploads in PR #78, which I think will cover a gaping hole in the functionality of this mock S3 server – I don't have any other choice but to run my own fork for my integration tests.

@jamhall If you see this, would you be open to transferring ownership of the NPM module and this repository to someone else to maintain? Or at least give full permissions on NPM and Github to a contributor so we can keep this project alive. A lot of good work was done here and it would go to waste if others aren't able to improve it to handle their own use cases. I'd love to volunteer, I'm sure there are others who would as well.

Use prettier

There have been many reformats in the recent pull requests.
If we used a code formatting tool like Prettier, we could prevent that.

What do you think? I could make a pull request for this.

Update npm

Sorry, would it be possible to publish the latest changes to npm?

Error when object has no content-type

If you request a resource that has no content-type stored in its .dummys3_metadata, the server throws an unhandled error.

AWS falls back to the binary/octet-stream content-type when none is specified (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html).

If you upload a file to s3 via the web interface it guesses the correct file type.

This issue occurs if you do a PUT operation using node-fetch or curl without specifying the Content-Type header.
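
A sketch of the fallback being suggested (illustrative only; the metadata object's shape and key name are assumptions, not s3rver's actual internals):

// Return the stored content-type, or S3's documented default when none was saved.
function contentTypeFor(metadata) {
  return (metadata && metadata["content-type"]) || "binary/octet-stream";
}

// e.g. res.setHeader("Content-Type", contentTypeFor(storedMetadata));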

Set Object ACL on existing file replaces it with an empty file

When using setObjectAcl(awsBucket, awsKey, CannedAccessControlList.AuthenticatedRead) on an existing uploaded file on s3rver, it shows the following logs:

info: Stored object "myAwsKey" in bucket "myAwsBucket" successfully
info: PUT /myAwsBucket/myAwsKey?acl 200 - - 6.578 ms

Then downloading from http://localhost:4568/myAwsBucket/myAwsKey serves an empty file, as if the ACL change had resulted in the upload of an empty object at that key.
