jamhall / s3rver
A fake S3 server written in NodeJs
License: MIT License
I made a folder with some subfolders and files, but none of my files show up when I use the aws-cli. I am on macOS, running s3rver programmatically from within gulp.
I have a dir structure like:
integration
/testbucket1
- example.txt
/testbucket2
- example2.txt
But when I run this:
$ aws s3 ls s3:// --endpoint http://localhost:4569 --recursive
2017-02-21 15:29:54 testbucket1
2017-02-22 09:46:10 testbucket2
The files are not being picked up by --recursive. Also, if I ls into a specific bucket, I get no files:
$ aws s3 ls s3://testbucket1 --endpoint http://localhost:4569 --recursive
Output from s3rver from above commands:
[09:52:26] Starting 'startServer'...
info: Fetched 2 buckets
info: GET / 200 478 - 10.822 ms
info: Fetched bucket "testbucket1" with options [object Object]
info: Found 0 objects for bucket "testbucket1"
info: GET /testbucket1 200 222 - 2.326 ms
No results. What am I doing wrong? Why isn't it finding the files as objects?
What I want to do is serve up some local files for integration-testing purposes and simply configure the endpoint to point locally. Any help would be appreciated.
<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://doc.s3.amazonaws.com/2006-03-01">
<IsTruncated>false</IsTruncated>
<Marker/>
<Name/>
<Prefix/>
<MaxKeys>1000</MaxKeys>
When using the bucket name from the ListBucketResult:
java.lang.IllegalArgumentException: BucketName cannot be empty
at com.amazonaws.util.ValidationUtils.assertStringNotEmpty(ValidationUtils.java:89)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1374)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1271)
regression? https://github.com/jamhall/s3rver/pull/100/files
Are there any plans to support CORS headers?
You can create presigned URLs with the s3.getSignedUrl method.
A URL looks like this: http://localhost:3001/file-uploads-temporary/de456c2c-0f93-4df7-ad82-0c2943e447f0.jpeg?AWSAccessKeyId=123&Content-Type=image%2Fjpeg&Expires=1519727198&Signature=L0RMhVZCyfrp37sDZbVfSLCuOSU%3D&x-amz-acl=private
Currently the metadata (i.e. the Content-Type, Content-Type=image%2Fjpeg) is not added to the .dummys3_metadata file.
I would suggest that we check for the Content-Type param and, if present, simply use it as the object's content type.
The current workaround is to add the header to your request e.g.
"use strict";

const fs = require("fs");
const fetch = require("node-fetch");
const AWS = require("aws-sdk");

const s3 = new AWS.S3(config); // config: your endpoint/credentials

const s3Params = {
  Bucket: "some-bucket-name",
  Key: "my-file.jpeg",
  ContentType: "image/jpeg",
  ACL: "private"
};

// getSignedUrl returns the URL synchronously when called without a callback
const uploadUrl = s3.getSignedUrl("putObject", s3Params);

const readStream = fs.createReadStream("my-local-image.jpeg");
fetch(uploadUrl, {
  method: "PUT",
  body: readStream,
  // On AWS you do not have to specify this; the header is taken from the presigned URL
  headers: {
    "Content-Type": "image/jpeg"
  }
});
What do you think? @specialkk @leontastic?
If you agree I would come up with a PR for this.
The result of this bug is that you can't use AmazonS3Client.listObjects: when parsing the response, it expects a ListBucketResult element:
https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/model/transform/XmlResponsesSaxParser.java#L498
You can also see that the fake-s3 implementation responds with "ListBucketResult" as expected:
https://github.com/jubos/fake-s3/blob/master/lib/fakes3/xml_adapter.rb#L153
Although this feature doesn't currently exist with fakes3, is it possible to implement bucket versioning?
Thanks
I am using s3rver in my mocha tests, it's great so far!
I have one recommendation, though:
When calling s3Instance.close(), the directory passed to the constructor should be cleaned up and emptied. It's more convenient that way ;)
Can we please use expect over should, and stop extending the object prototypes?
From v2.1.0, the behavior of the cors argument has changed.
In previous versions, setting cors to false disabled this feature.
In the current version, setting cors to null (or false) raises an error with the following message:
TypeError: Cannot read property 'CORSRule' of undefined
at /Users/argon/workspace/managed/s3rver/lib/cors.js:48:42
at Layer.handle [as handle_request] (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/layer.js:95:5)
at trim_prefix (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:317:13)
at /Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:284:7
at Function.process_params (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:335:12)
at next (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:275:10)
at /Users/argon/workspace/managed/s3rver/lib/app.js:45:5
at Layer.handle [as handle_request] (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/layer.js:95:5)
at trim_prefix (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:317:13)
at /Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:284:7
at Function.process_params (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:335:12)
at next (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:275:10)
at logger (/Users/argon/workspace/managed/s3rver/node_modules/morgan/index.js:144:5)
at Layer.handle [as handle_request] (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/layer.js:95:5)
at trim_prefix (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:317:13)
at /Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:284:7
at Function.process_params (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:335:12)
at next (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:275:10)
at expressInit (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/middleware/init.js:40:5)
at Layer.handle [as handle_request] (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/layer.js:95:5)
at trim_prefix (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:317:13)
at /Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:284:7
at Function.process_params (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:335:12)
at next (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:275:10)
at query (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/middleware/query.js:45:5)
at Layer.handle [as handle_request] (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/layer.js:95:5)
at trim_prefix (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:317:13)
at /Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:284:7
at Function.process_params (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:335:12)
at next (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:275:10)
at Function.handle (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/router/index.js:174:3)
at Function.handle (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/application.js:174:10)
at Server.app (/Users/argon/workspace/managed/s3rver/node_modules/express/lib/express.js:39:9)
at emitTwo (events.js:125:13)
at Server.emit (events.js:213:7)
at parserOnIncoming (_http_server.js:602:12)
at HTTPParser.parserOnHeadersComplete (_http_common.js:116:23)
It seems that CORSConfiguration in cors() in cors.js is not initialized, because the config argument of cors evaluates to false.
What should I do?
When calling this.s3Client.listObjectsV2({ Bucket: 'baseBucket', Prefix: 'pending' }), the following is returned:
GET /baseBucket?list-type=2&prefix=pending 200 1763 - 7.539 ms
{ IsTruncated: false,
Contents:
[ { Key: 'harFiles/2',
LastModified: 2017-03-07T14:17:01.073Z,
ETag: '"9d6b3ad272abe35c3c6b95a948b000db"',
Size: 14,
StorageClass: 'Standard',
Owner: [Object] },
{ Key: 'pending/1',
LastModified: 2017-03-07T14:17:01.353Z,
ETag: '"10ed3a4e7f510696c325d6249c82d69e"',
Size: 14,
StorageClass: 'Standard',
Owner: [Object] },
{ Key: 'recipes/1',
LastModified: 2017-03-07T14:17:01.073Z,
ETag: '"d4793fe3394939cf279c23e9045f7afc"',
Size: 36,
StorageClass: 'Standard',
Owner: [Object] },
{ Key: 'recipes/2',
LastModified: 2017-03-07T14:17:01.333Z,
ETag: '"d897f47178a9e3a89bf4e8abe0497eed"',
Size: 39,
StorageClass: 'Standard',
Owner: [Object] },
{ Key: 'recipes/5',
LastModified: 2017-03-07T14:17:01.303Z,
ETag: '"f3e713eee1619c0eeb29b42a6a4aab8d"',
Size: 29,
StorageClass: 'Standard',
Owner: [Object] } ],
Prefix: '',
MaxKeys: 1000,
CommonPrefixes: [] }
Although I specified the Prefix pending, keys that start with recipes (among others) are included.
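For reference, a minimal sketch of the filtering listObjectsV2 should apply (illustrative only, not s3rver's implementation): only keys that start with the requested Prefix belong in Contents.

```javascript
// Keep only keys that begin with the requested prefix.
function filterByPrefix(keys, prefix) {
  return keys.filter(key => key.startsWith(prefix || ""));
}

const keys = ["harFiles/2", "pending/1", "recipes/1", "recipes/2", "recipes/5"];
console.log(filterByPrefix(keys, "pending")); // → [ 'pending/1' ]
```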
This allows easier dependency upgrades. (https://renovateapp.com/)
At the moment many dependencies are outdated. (https://david-dm.org/jamhall/s3rver)
If you are fine with it I can create a Pull Request for it once #79 is merged.
I am trying to wrap s3rver in a Docker container so that it can be easily run standalone. A number of other people have done this as evidenced by all the different Github and DockerHub repos available. However, nobody seems to be supporting --indexDocument and --errorDocument. When I include those params in my builds s3rver still starts but all GET calls simply return:
<Error>
<Code>NoSuchBucket</Code>
<Message>The specified bucket does not exist</Message>
<Resource>192.168.99.100</Resource>
<RequestId>1</RequestId>
</Error>
If I remove those params from my ENTRYPOINT and leave --hostname, --port and --directory everything works as expected.
s3rver is a great tool, especially for prototyping S3-hosted SPA apps where the index and error docs provide the infrastructure for routing. I would be happy to submit a pull request for a working Dockerfile if I can figure out how to get the -i and -e params working.
Hi, I really like that you have taken fakes3 and ported it to Node.js, since my main project is already in Node it keeps me from setting up Ruby etc.
Is there any chance for implementing list bucket operations for this?
If we just want to update metadata, we can normally call copyObject with src = dest and just update the metadata,
but this fails here:
https://github.com/jamhall/s3rver/blob/master/lib/file-store.js#L277 (from node-fs-extra, I think)
Maybe it's OK to check whether src = dest and, in that case, just update the metadata without copying?
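A hedged sketch of that special case, using a toy in-memory store (the store shape and function are hypothetical, not s3rver's file-store API): when source and destination are identical, skip the file copy and only merge in the new metadata.

```javascript
// Hypothetical copy that special-cases src === dest as a metadata-only update.
function copyObject(store, src, dest, newMetadata) {
  if (src === dest) {
    // Same key: merge metadata, leave the object data untouched.
    store.metadata[dest] = { ...store.metadata[dest], ...newMetadata };
    return;
  }
  store.objects[dest] = store.objects[src];
  store.metadata[dest] = { ...newMetadata };
}

const store = {
  objects: { "bucket/key": "data" },
  metadata: { "bucket/key": { "content-type": "text/plain" } }
};
copyObject(store, "bucket/key", "bucket/key", { "cache-control": "no-cache" });
console.log(store.metadata["bucket/key"]);
// → { 'content-type': 'text/plain', 'cache-control': 'no-cache' }
```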
Hi,
can you check whether it should be
results.objects.length
instead of
results.length
https://github.com/jamhall/s3rver/blob/master/lib/controllers.js#L165
Hello,
we would like to use a mounted SMB drive via S3, but we get the following error.
Our SMB drive mount:
\\192.168.2.17/smb-www on /var/www type cifs (rw,nosuid,nodev,noexec,relatime,vers=1.0,cache=strict,username=administrator,domain=xxx,uid=0,forceuid,gid=33,forcegid,addr=192.168.2.17,file_mode=0775,dir_mode=0775,iocharset=iso8859-1,nounix,nobrl,noperm,rsize=61440,wsize=16580,actimeo=1,_netdev,user)
Start of s3rver:
s3rver -h 192.168.2.16 -p 8000 -d /var/www/
Error after access:
now listening on host 192.168.2.16 and port 8000
info: [S3rver] GET / 500 1192 - 9.980 ms
Error: ENOENT: no such file or directory, stat '/var/www/SMB-PUBLIC (192.168.2.17) (P) - Verknüpfung.lnk'
at Object.fs.statSync (fs.js:955:11)
at Object.statSync (/usr/lib/node_modules/s3rver/node_modules/graceful-fs/polyfills.js:297:22)
at /usr/lib/node_modules/s3rver/lib/file-store.js:40:21
at Array.filter (<anonymous>)
at Object.getBuckets (/usr/lib/node_modules/s3rver/lib/file-store.js:39:35)
at getBuckets (/usr/lib/node_modules/s3rver/lib/controllers.js:154:31)
at Layer.handle [as handle_request] (/usr/lib/node_modules/s3rver/node_modules/express/lib/router/layer.js:95:5)
at next (/usr/lib/node_modules/s3rver/node_modules/express/lib/router/route.js:137:13)
at Route.dispatch (/usr/lib/node_modules/s3rver/node_modules/express/lib/router/route.js:112:3)
at Layer.handle [as handle_request] (/usr/lib/node_modules/s3rver/node_modules/express/lib/router/layer.js:95:5)
I know it's probably not the aim of the tool,
but I wanted to know whether there is a way to set up a tiny security layer to keep everyone from writing to it (reading shouldn't be a problem).
Thanks for your tool
Normally they must be manually enabled/set after creating each bucket in the S3 console. This can cause problems if you want to have one bucket doing static web hosting and another for normal storage/retrieval.
If we also get around to implementing versioning it's also important that it can be enabled per-bucket.
We've got some integration tests wrapped around our S3 utility, which depend on s3rver, and one of them started failing with the 2.2.1 release. (Pinning the version back to 2.2.0 made the test pass again.)
The test adds three objects to S3, with keys "some/dir/x", "some/dir/y", and "some/dir/z". However, when I send this request:
bucket.listObjects({
Bucket: bucketName,
Prefix: "some/dir",
Delimiter: '/',
Marker: null
});
The result is an empty array, rather than the expected three results. I added a headObject check to ensure that the data does exist before calling listObjects, so I think the issue must be with the listObjects call itself.
Any idea why this might be suddenly breaking? Thanks!
Hi!
I use https://github.com/tpyo/amazon-s3-php-class as the REST client for our server to interact with our S3. Listing the buckets returns nothing and I got to the root cause of it.
I use its listBuckets() function, which sends a GET request to the endpoint and returns the bucket names.
When I fetch from our remote S3, something like this is given:
stdClass Object (
[error] =>
[body] => SimpleXMLElement Object (
[Owner] => SimpleXMLElement Object (
[ID] =>
[DisplayName] =>
)
[Buckets] => SimpleXMLElement Object (
[Bucket] => Array (
[0] => SimpleXMLElement Object (
[Name] =>
[CreationDate] =>
)
[1] => SimpleXMLElement Object (
[Name] =>
[CreationDate] =>
)
[2] => SimpleXMLElement Object (
[Name] =>
[CreationDate] =>
)
[3] => SimpleXMLElement Object (
[Name] =>
[CreationDate] =>
)
)
)
)
[headers] => Array (
[date] => 1453650086
[type] => application/xml
)
[code] => 200
)
But when I pipe this to s3rver:
stdClass Object (
[error] =>
[body] => 123 S3rver development 2016-01-24T14:45:57.372Z hahaha 2016-01-24T14:46:07.610Z
[headers] => Array (
[date] => 1453649895
[type] => application/xml; charset=utf-8
[size] => 469
[hash] => W/"1d5-hN+ARE1TnyYYv8AGyEv8Wg"
)
[code] => 200
)
It seems that sending a GET request to real S3 returns an array of SimpleXMLElement objects, but s3rver doesn't. Thoughts?
Sorry for the click-bait issue name. Object Copy does work, but read on and see what I mean.
I've been having a discussion over at aws/aws-sdk-js#901 regarding their documentation on s3.copyObject(...). In the same document where x-amz-copy-source is specified, there are conflicting descriptions of how it should be set.
The 'Syntax' section of the documentation says:
x-amz-copy-source: /source_bucket/sourceObject
However, in the 'Request Headers' section, the documentation says:
Name:
x-amz-copy-source
Description:
The name of the source bucket and key name of the source object, separated by a slash (/).
Type: String
Default: None
Required:
Yes
So, which one should be followed?
s3rver expects x-amz-copy-source: /source_bucket/sourceObject. See lib/controllers.js#L264-L268.
Without the initial '/', srcBucket = srcObjectParams[1] ends up being the source object's key. Eventually this will fail with:
error: No bucket found for "image.png"
info: PUT /BUCKET_NAME/image.copy.png 404 207 - 60.642 ms
I think that in the meantime we should accept both forms; otherwise integration with other libraries fails. s3fs is an example: see s3fs/lib/s3fs.js#L345 (it does not use an initial '/').
However, I invite you all to participate on the discussion over at aws/aws-sdk-js#901.
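A normalization that accepts both documented forms could look like this sketch (function and variable names are illustrative, not the code in lib/controllers.js): tolerate a missing leading slash, then split on the first remaining slash.

```javascript
// Accept both "/bucket/key" and "bucket/key" forms of x-amz-copy-source.
function parseCopySource(header) {
  const normalized = header.replace(/^\//, ""); // drop a leading slash if present
  const slash = normalized.indexOf("/");
  return {
    srcBucket: normalized.slice(0, slash),
    srcKey: normalized.slice(slash + 1) // key may itself contain slashes
  };
}

console.log(parseCopySource("/source_bucket/sourceObject"));
// → { srcBucket: 'source_bucket', srcKey: 'sourceObject' }
console.log(parseCopySource("source_bucket/dir/sourceObject"));
// → { srcBucket: 'source_bucket', srcKey: 'dir/sourceObject' }
```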
If I create a bucket "test" and add the following object keys:
index.html
styles/main.css
Then if I delete all the objects and try to delete the bucket I get the following error:
Unable to delete bucket test: {
"message": "The bucket your tried to delete is not empty",
"code": "BucketNotEmpty",
"region": null,
"time": "2016-04-18T19:27:26.649Z",
"requestId": null,
"extendedRequestId": null,
"statusCode": 409,
"retryable": false,
"retryDelay": 0.46673882752656937
}
However, the bucket is empty if I view it through the browser.
The problem appears to be that the "styles" directory is still present in the data dir even though all keys have been deleted.
If I manually delete the styles directory, then I am able to delete the bucket. So there appears to be an issue cleaning up empty directories once all keys have been deleted.
I mean, can this action be added to s3rver? It is very similar to putObject:
app.post('/:bucket/:key(*)', controllers.bucketExists, controllers.postObject)
It would be great to accept multipart uploads. Using s3.upload from the AWS SDK (link) attempts a multipart upload if the file is larger than some cutoff (something like 5 megabytes).
s3rver runs on a Raspberry Pi 3 with Raspbian (Debian Jessie 8.0) and the service port is accessible from localhost
$ s3rver -p 8080 -d ~/s3test/
now listening on host localhost and port 8080
Port 8080 is open.
$ sudo netstat -tulpn | grep LISTEN
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 779/node
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 719/sshd
tcp6 0 0 :::22 :::* LISTEN 719/sshd
$ nmap localhost -p 22,8080
...
22/tcp open ssh
8080/tcp open http-proxy
$ telnet localhost 8080
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
But when I execute nmap from a different host in the subnet, the s3rver port is closed:
$ nmap 10.0.0.253 -p 22,8080
...
PORT STATE SERVICE
22/tcp open ssh
8080/tcp closed http-proxy
$ telnet 10.0.0.253 8080
Trying 10.0.0.253...
telnet: Unable to connect to remote host: Connection refused
I have no firewall running.
Some system details:
$ s3rver --version
1.0.3
$ node --version
v0.10.29
$ npm --version
1.4.21
$ uname -a
Linux raspberrypi 4.4.50-v7+ #970 SMP Mon Feb 20 19:18:29 GMT 2017 armv7l GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Raspbian
Description: Raspbian GNU/Linux 8.0 (jessie)
Release: 8.0
Codename: jessie
I want to instantiate many s3rver instances within my AVA tests, where they will run concurrently, with some in their own process.
The best I have been able to do so far is increment a counter for the port number so the different instances do not conflict with each other. But this is brittle because using a large contiguous range of port numbers increases the chances of conflict with an existing service on a developer's machine.
Ideally, I should be able to just run s3rver on port 0 to dynamically allocate an available port.
const server = new S3rver({
directory : await mkdirtemp(),
hostname : 'localhost',
port : 0,
silent : true
});
server.run((err, hostname, port) => {
const endpoint = `http://${hostname}:${port}`;
// ... create S3 client that connects to endpoint ...
});
I figured it would work, as most servers support that, so I was surprised to see this error:
NetworkingError (Error) {
address: '127.0.0.1',
code: 'NetworkingError',
errno: 'ECONNREFUSED',
hostname: 'localhost',
message: 'connect ECONNREFUSED 127.0.0.1:80',
port: 80,
region: 'us-east-1',
retryable: true,
syscall: 'connect',
time: Date 2018-03-05 07:54:29 704ms UTC {},
}
After investigation, it seems my AWS client tried to connect on port 80 because s3rver just returns the configured port as-is instead of the allocated one, and port 0 is invalid when making a request. The expected behavior is to use the return value of server.address().
For 'NoSuchBucket' error code, currently 'The resource you requested does not exist' is returned. But the correct message is 'The specified bucket does not exist'.
Ref: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
Hi all,
Using this with the Minio client and getting the error
TypeError: region should be of type "string"
Any ideas on how to get this server to pass back a fake region?
S3rver Tests
✓ should fetch fetch six buckets
✓ should create a bucket with valid domain-style name
✓ should fail to create a bucket because of invalid name
✓ should fail to create a bucket because of invalid domain-style name
✓ should fail to create a bucket because name is too long
(node:29986) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 uncaughtException listeners added. Use emitter.setMaxListeners() to increase limit
Introduced by #96. Possible memory leak?
This is what I get from a GET call on a bucket
...
<LastModified>Tue, 13 Mar 2018 17:25:04 GMT</LastModified>
...
According to this stack trace, using the latest aws-java-sdk-s3 client:
Caused by: java.lang.IllegalArgumentException: Invalid format: "Tue, 13 Mar 2018 15:54:25 GMT"
at org.joda.time.format.DateTimeParserBucket.doParseMillis(DateTimeParserBucket.java:187)
at org.joda.time.format.DateTimeFormatter.parseMillis(DateTimeFormatter.java:826)
at com.amazonaws.util.DateUtils.doParseISO8601Date(DateUtils.java:98)
at com.amazonaws.util.DateUtils.parseISO8601Date(DateUtils.java:77)
at com.amazonaws.services.s3.internal.ServiceUtils.parseIso8601Date(ServiceUtils.java:76)
at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListBucketHandler.doEndElement(XmlResponsesSaxParser.java:703)
at com.amazonaws.services.s3.model.transform.AbstractHandler.endElement(AbstractHandler.java:52)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:609)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1782)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2967)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602)
at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:841)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:770)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseXmlInputStream(XmlResponsesSaxParser.java:147)
I guessed that it should be an ISO 8601 string. Do you agree?
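For reference, the difference is visible in one line of plain JavaScript: Date#toUTCString() produces the RFC 1123 form shown in the stack trace, while Date#toISOString() produces the ISO 8601 form the Java SDK parses.

```javascript
// The timestamp from the issue's ListBucketResult, in both representations.
const when = new Date(Date.UTC(2018, 2, 13, 17, 25, 4));
console.log(when.toUTCString()); // → Tue, 13 Mar 2018 17:25:04 GMT
console.log(when.toISOString()); // → 2018-03-13T17:25:04.000Z
```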
This is part of the object structure retrieved by listObjects(), similar to objects.Contents, which lists all keys that fall between the prefix and the delimiter. Note: this value only exists if a delimiter is used.
The definition of the CommonPrefixes property is:
Gets the CommonPrefixes property. A response can contain CommonPrefixes only if you specify a delimiter. When you do, CommonPrefixes contains all (if there are any) keys between Prefix and the next occurrence of the string specified by delimiter. In effect, CommonPrefixes lists keys that act like subdirectories in the directory specified by Prefix. For example, if prefix is notes/ and delimiter is a slash (/), in notes/summer/july, the common prefix is notes/summer/.
Source: http://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/T_Amazon_S3_Model_ListObjectsResponse.htm
This issue relies on issue #9 to support delimiters.
It looks like there is a small difference in s3rver's handling of the delimiter parameter. In the AWS response, each Prefix in CommonPrefixes includes the trailing delimiter, but s3rver strips it off. I believe this line should be:
match = match.substr(0, delimiterIndex + 1);
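A sketch of CommonPrefixes computation that keeps the trailing delimiter, matching the AWS behavior quoted above (the function name is illustrative, not s3rver's code):

```javascript
// Collect the distinct prefixes between Prefix and the next delimiter,
// keeping the delimiter itself, as AWS does.
function commonPrefixes(keys, prefix, delimiter) {
  const out = new Set();
  for (const key of keys) {
    if (!key.startsWith(prefix)) continue;
    const idx = key.indexOf(delimiter, prefix.length);
    if (idx !== -1) out.add(key.substr(0, idx + 1)); // +1 keeps the delimiter
  }
  return [...out];
}

console.log(commonPrefixes(["notes/summer/july", "notes/summer/aug"], "notes/", "/"));
// → [ 'notes/summer/' ]
```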
We currently don't return NextMarker in truncated List Object V1 responses: https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html
Nor do we return NextContinuationTokens in truncated List Object V2 responses: https://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html
Would be great to support these so users can write integration tests for cases where >1000 objects exist in a bucket.
So for the past day I have had a bizarre issue where the hashes never match on PUT requests. I was not sure whether it was a problem with the .NET AWS SDK I was using or with the lib itself; anyway, I have found the solution.
I was defaulting my AWS credentials to:
var credentials = new BasicAWSCredentials("foo", "bar");
It turns out that if you do this and don't use those credentials on your calls, the call itself works but you get a hash error. If you instead just put empty strings for your credentials, BasicAWSCredentials("", ""), it all works and everyone is happy :)
I've contributed enough to this project to know it decently well and I think it could benefit pretty massively from async/await and Koa's support for processing the response body after middleware is run.
I'm just opening this issue seeking an opinion on potentially including a Babel build pipeline if I were to write for Koa 2. However it looks like Travis is only doing tests for Node >= 6, so Koa 1 would also work fine if we want to avoid Babel. This isn't intended to be just changes for the sake of using newer features; it really should make things a lot more readable and approachable for adding new features such as versioning.
Creating a bucket using the aws-sdk-js client produces the following error:
XMLParserError: Non-whitespace before first tag.
Line: 0
Column: 1
Char: C
s3rver logs:
info: PUT / 404 13 - 4.628 ms
fake-s3 works fine though, so I thought it might be a problem with s3rver.
FYI, running code in node v5.3.0.
While using getObject (or maybe listObjects: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjects-property), the fake S3 does not skip elements up to the marker. The Marker option, according to the S3 doc:
Specifies the key to start with when listing objects in a bucket. Amazon S3 returns object keys in UTF-8 binary order, starting with key after the marker in order.
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html
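A minimal sketch of what honoring Marker means (illustrative, not s3rver's code): keys are sorted and listing starts strictly after the marker.

```javascript
// Return keys in sorted order, starting strictly after the marker.
// (Default string sort is close enough to UTF-8 binary order for ASCII keys.)
function listAfterMarker(keys, marker) {
  return [...keys].sort().filter(key => !marker || key > marker);
}

console.log(listAfterMarker(["a", "b", "c", "d"], "b")); // → [ 'c', 'd' ]
```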
Heyho, is it planned to add more options (e.g. header options like 'cache-control', as for example defined here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#copyObject-property) to the aws-functions as well as the according controllers?
I am manually setting the 'Cache-control' option on a couple of objects and would like to be able to test for the correct response.
Thanks for your time and good work!
There appears to be a bug in the fake S3 server implementation whereby the application is not decoding the HTTP message headers (only the URL) when handling the S3 copy command.
The following script works against the real AWS S3 but fails to copy from bucket to bucket in your fake implementation with this error:
Copying file from s3://bob-test-bucket-1/test-2017-03-09T10:54:24.txt to s3://bob-test-bucket-2/test-2017-03-09T10:54:24.txt
copy failed: s3://bob-test-bucket-1/test-2017-03-09T10:54:24.txt to s3://bob-test-bucket-2/test-2017-03-09T10:54:24.txt An error occurred (NoSuchKey) when calling the CopyObject operation: The specified key does not exist
And here’s the script to test it:
#!/bin/bash
export OVERRIDE_URL="--endpoint-url http://localhost:4569"
echo "THIS IS A TEST" > test-$$.txt
echo "Listing buckets"
aws s3 $OVERRIDE_URL ls
echo
echo "Creating bucket bob-test-bucket-1"
aws s3 $OVERRIDE_URL mb s3://bob-test-bucket-1
echo
echo "Creating bucket bob-test-bucket-2"
aws s3 $OVERRIDE_URL mb s3://bob-test-bucket-2
echo
export SUFFIX=$(date -u "+%Y-%m-%dT%H:%M:%S")
echo "Uploading test data file to s3://bob-test-bucket-1/test-$SUFFIX.txt"
aws s3 $OVERRIDE_URL cp test-$$.txt s3://bob-test-bucket-1/test-$SUFFIX.txt
echo
echo "Copying file from s3://bob-test-bucket-1/test-$SUFFIX.txt to s3://bob-test-bucket-2/test-$SUFFIX.txt"
aws s3 $OVERRIDE_URL cp s3://bob-test-bucket-1/test-$SUFFIX.txt s3://bob-test-bucket-2/test-$SUFFIX.txt
echo
rm test-$$.txt
echo "Done!"
Exception in thread "main" com.amazonaws.AmazonClientException: Unable to verify integrity of data upload. Client calculated content hash (contentMD5: Jm1/xmb+
/xtOXxs3kouTwg== in base 64) didn't match hash (etag: 67c2ab887738c99b76097e94e2c46293 in hex) calculated by Amazon S3. You may need to delete the data stored
in Amazon S3. (metadata.contentMD5: null, md5DigestStream: com.amazonaws.services.s3.internal.MD5DigestCalculatingInputStream@b672aa8, bucketName: hello-world, key: hello-world/file_636120414036520000.txt)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1611)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:3149)
at xxx.common.aws.s3.S3Context.putObjectAsString(S3Context.java:26)
at xxx.hello.MainService.Put(MainService.java:23)
at xxx.hello.HelloWorldApplication.main(HelloWorldApplication.java:17)
Any clue?
Or is it the same deal as this? jubos/fake-s3#30
When I load the root directory I get a list of folder names as buckets, but visiting any of those buckets gives me:
http://localhost:5353/media/
<ListBucketResult xmlns="http://doc.s3.amazonaws.com/2006-03-01">
<Prefix/>
<Marker/>
<MaxKeys>1000</MaxKeys>
<IsTruncated>false</IsTruncated>
</ListBucketResult>
I found only the default credentials (123/abc), but there is no command-line parameter to change the credentials used.
$ s3rver --version
1.0.3
My project's unit tests fail when trying to list objects using a delimiter value (namely, to find objects contained within a folder, but not including objects within sub-folders).
As far as I can tell, this feature is only partially implemented: I am able to supply a delimiter option, but it is not actually used.
I see "Currently looking for maintainers" in the project description. There are 10 pull requests on this repo right now, and it seems like @jamhall is MIA with no activity on Github and no responses to recent PRs and issues.
I want to add support for multipart uploads in PR #78, which I think will cover a gaping hole in the functionality of this mock S3 server – I don't have any other choice but to run my own fork for my integration tests.
@jamhall If you see this, would you be open to transferring ownership of the NPM module and this repository to someone else to maintain? Or at least give full permissions on NPM and Github to a contributor so we can keep this project alive. A lot of good work was done here and it would go to waste if others aren't able to improve it to handle their own use cases. I'd love to volunteer, I'm sure there are others who would as well.
There are many reformats in the recent pull requests.
If we used a code-formatting tool like Prettier, we could prevent that.
What do you think? I could make a pull request for this.
Sorry, would it be possible to update NPM with the latest changes?
Thanks for the module, guys!
lib/index.js requires this file. Some preprocessing modules delete the test/ folder, as it's not expected to be needed in production (e.g. https://github.com/tj/node-prune).
If you request a resource that has no content-type stored in its .dummys3_metadata file, the server throws an unhandled error.
AWS falls back to the binary/octet-stream content type when none is specified (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html).
If you upload a file to S3 via the web interface, it guesses the correct file type.
This issue occurs if you do a PUT operation using node-fetch or curl without specifying the Content-Type header.
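A sketch of the suggested fallback (the metadata shape and helper are illustrative): serve binary/octet-stream when no content-type was stored, instead of throwing.

```javascript
// Return the stored content type, or the S3 default when none exists.
function contentTypeFor(metadata) {
  return (metadata && metadata["content-type"]) || "binary/octet-stream";
}

console.log(contentTypeFor({})); // → binary/octet-stream
console.log(contentTypeFor({ "content-type": "image/jpeg" })); // → image/jpeg
```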
When using setObjectAcl(awsBucket, awsKey, CannedAccessControlList.AuthenticatedRead) on an existing uploaded file on s3rver, it shows the following logs:
info: Stored object "myAwsKey" in bucket "myAwsBucket" successfully
info: PUT /myAwsBucket/myAwsKey?acl 200 - - 6.578 ms
Then downloading from http://localhost:4568/myAwsBucket/myAwsKey serves an empty file, just as if the ACL change had resulted in the upload of an empty file at that key.