awsdocs / amazon-s3-developer-guide
This guide has been archived. Please see https://github.com/awsdocs/amazon-s3-userguide for an open source version of the Amazon S3 docs.
License: Other
Hi all,
Thank you to the AWS team for providing this. I really appreciate you working on it. I wasn't sure where to post this issue, but S3 is my favorite service, so I'm posting it here.
I am wondering if you could also provide the build script used to output the docs in PDF/HTML format, and, if possible, Kindle format as well. I am having trouble downloading the Kindle format of the docs from the Amazon Kindle store due to country restrictions.
Regards,
While searching for ways to debug why requests to an S3 bucket were failing for a particular IAM user, I found a very neat tool, the IAM policy simulator at https://policysim.aws.amazon.com/, documented here: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html?icmpid=docs_iam_console
Unfortunately, when I tried to find "S3" as an option to test permissions (as done in the video, even!), the list of services stops after Route 53.
After scrolling to the very bottom of the service options, that is where the list ends.
The documentation on "Specifying Resources in a Policy" states:
You can use wildcards as part of the resource ARN. You can use wildcard characters (* and ?) within any ARN segment (the parts separated by colons). An asterisk (*) represents any combination of zero or more characters, and a question mark (?) represents any single character. You can use multiple * or ? characters in each segment, but a wildcard cannot span segments.
I checked in the policy simulator and the part on wildcards not spanning segments seems to be wrong.
Policy:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-bucket/foo/*/bar"
  }]
}
With the above policy, the user is granted access to arn:aws:s3:::my-bucket/foo/1/2/bar. So the wildcard DOES span ARN segments.
I believe this could lead to serious security issues in organizations relying on the documented behavior. An example of such a misconfiguration would be a resource of arn:aws:s3:::my-bucket-*/public/*. In this case, the policy also matches arn:aws:s3:::my-bucket-prod/private/public/*, which is unexpected.
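To make the pitfall concrete, here is a minimal sketch of such a misconfigured policy; the bucket names are hypothetical, and the intent is to grant read access only to public/ prefixes:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-bucket-*/public/*"
  }]
}

Because the asterisks match across "/" separators in the key, this also allows reads of objects such as my-bucket-prod/private/public/secret.txt, not just objects under a top-level public/ prefix.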
virtual style requests -> virtual hosted-style requests
Just to make the wording more consistent ◡̈
Hi all,
I am working in Unity 2018.4.7. In my project I have to upload images from Unity, so I used this AWS code:
https://docs.aws.amazon.com/AmazonS3/latest/dev/HLuploadFileDotNet.html
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class UploadFileMPUHighLevelAPITest
    {
        private const string bucketName = "*** provide bucket name ***";
        private const string keyName = "*** provide a name for the uploaded object ***";
        private const string filePath = "*** provide the full path name of the file to upload ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            UploadFileAsync().Wait();
        }

        private static async Task UploadFileAsync()
        {
            try
            {
                var fileTransferUtility = new TransferUtility(s3Client);

                // Option 1. Upload a file. The file name is used as the object key name.
                await fileTransferUtility.UploadAsync(filePath, bucketName);
                Console.WriteLine("Upload 1 completed");

                // Option 2. Specify object key name explicitly.
                await fileTransferUtility.UploadAsync(filePath, bucketName, keyName);
                Console.WriteLine("Upload 2 completed");

                // Option 3. Upload data from a type of System.IO.Stream.
                using (var fileToUpload = new FileStream(filePath, FileMode.Open, FileAccess.Read))
                {
                    await fileTransferUtility.UploadAsync(fileToUpload, bucketName, keyName);
                }
                Console.WriteLine("Upload 3 completed");

                // Option 4. Specify advanced settings.
                var fileTransferUtilityRequest = new TransferUtilityUploadRequest
                {
                    BucketName = bucketName,
                    FilePath = filePath,
                    StorageClass = S3StorageClass.StandardInfrequentAccess,
                    PartSize = 6291456, // 6 MB.
                    Key = keyName,
                    CannedACL = S3CannedACL.PublicRead
                };
                fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
                fileTransferUtilityRequest.Metadata.Add("param2", "Value2");
                await fileTransferUtility.UploadAsync(fileTransferUtilityRequest);
                Console.WriteLine("Upload 4 completed");
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }
    }
}
My project's Scripting Runtime Version is .NET 4.x. When I run the above code in Unity, it says:
error CS0234: The type or namespace name 'Transfer' does not exist in the namespace 'Amazon.S3' (are you missing an assembly reference?)
I have installed AWSSDK.S3 3.3.113.2 and AWSSDK.SimpleEmail 3.3.101.193, but it still shows the above error.
I downgraded to the 3.5 framework, but then it says: error CS1644: Feature `asynchronous functions' cannot be used because it is not part of the C# 4.0 language specification
I have downloaded the AWS SDK as described at https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-install-assemblies.html
but that does not work either.
How can I solve this error?
In the section "Example Virtual Hosted–Style Request" of the page https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/doc_source/RESTAPI.md, the Host header is shown as "examplebucket.s3-us-west-2.amazonaws.com" in the sample request.
Should it not be "examplebucket.s3.amazonaws.com", since a virtual hosted-style request's Host doesn't need to include region information, per the documentation?
Regardless of which version of Bouncy Castle I pull in, I can't get past this runtime error. I'm building with Java 1.8 sources on JDK 14, using the maven-assembly-plugin to assemble a single JAR.
Thanks in advance for any help.
Exception in thread "main" java.lang.UnsupportedOperationException: A more recent version of Bouncy castle is required for authenticated encryption.
at com.amazonaws.services.s3.model.CryptoConfigurationV2.checkBountyCastle(CryptoConfigurationV2.java:379)
at com.amazonaws.services.s3.model.CryptoConfigurationV2.checkCryptoMode(CryptoConfigurationV2.java:366)
at com.amazonaws.services.s3.model.CryptoConfigurationV2.<init>(CryptoConfigurationV2.java:68)
at com.amazonaws.services.s3.model.CryptoConfigurationV2.<init>(CryptoConfigurationV2.java:47)
at securities.CryptoUtil.Encrypt(CryptoUtil.java:41)
at securities.App.main(App.java:19)
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3EncryptionClientV2Builder;
import com.amazonaws.services.s3.AmazonS3EncryptionV2;
import com.amazonaws.services.s3.model.*;

// Generate an RSA key pair to use as client-side encryption materials.
KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("RSA");
keyPairGenerator.initialize(2048);
KeyPair keyPair = keyPairGenerator.generateKeyPair();
String s3ObjectKey = "test.txt";
String s3ObjectContent = "This should be encrypted";

// bucketName is assumed to be defined elsewhere.
AmazonS3EncryptionV2 s3EncryptionClientV2 = AmazonS3EncryptionClientV2Builder.standard()
        .withRegion(Regions.DEFAULT_REGION)
        .withClientConfiguration(new ClientConfiguration())
        .withCryptoConfiguration(new CryptoConfigurationV2().withCryptoMode(CryptoMode.AuthenticatedEncryption))
        .withEncryptionMaterialsProvider(new StaticEncryptionMaterialsProvider(new EncryptionMaterials(keyPair)))
        .build();
s3EncryptionClientV2.putObject(bucketName, s3ObjectKey, s3ObjectContent);
// Read the object back before shutting the client down (the original called shutdown() first).
String result = s3EncryptionClientV2.getObjectAsString(bucketName, s3ObjectKey);
s3EncryptionClientV2.shutdown();
return result;
<dependencies>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-core</artifactId>
        <version>1.11.877</version>
        <type>jar</type>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-kms</artifactId>
        <version>1.11.877</version>
        <type>jar</type>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-s3</artifactId>
        <version>1.11.877</version>
        <type>jar</type>
    </dependency>
    <dependency>
        <groupId>org.bouncycastle</groupId>
        <artifactId>bcprov-ext-jdk15on</artifactId>
        <version>1.66</version>
    </dependency>
</dependencies>
Hi,
Regarding object lifecycle rules, is it possible to set the lifetime of an object in hours?
For example, would an Expiration action that removes objects from the bucket every hour take effect?
My rule: name "Expiration Rule", prefix "tax/", status Enabled, expiration after 1 day.
I read the page https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/doc_source/intro-lifecycle-rules.md, which mentions the two kinds of rules below. Does it only support expiring objects at midnight? It would be great to get your response, thanks.
1. Lifecycle rules: Based on an object's age
2. Lifecycle rules: Based on a specific date
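For what it's worth, the lifecycle configuration API only accepts whole days (a positive integer) or an absolute date for Expiration, so a rule like the one above cannot be expressed in hours. A minimal sketch of the rule above as a lifecycle configuration (the rule name and prefix mirror the rule described above):

{
  "Rules": [
    {
      "ID": "Expiration Rule",
      "Filter": { "Prefix": "tax/" },
      "Status": "Enabled",
      "Expiration": { "Days": 1 }
    }
  ]
}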
I'm uploading a video to S3 from Java, and I need permissions as shown in the image below.
I have tried a few options, like:

request.setMetadata(metadata); // request is an instance of PutObjectRequest

// Option 1:
request.setCannedAcl(CannedAccessControlList.PublicRead);

// Option 2:
AccessControlList acl = new AccessControlList();
acl.grantPermission(GroupGrantee.AllUsers, Permission.FullControl);
request.setAccessControlList(acl);

However, I'm not getting the expected result shown in the image. Let me know if there is any solution for this.
On the HostingWebsiteOnS3Setup page there is the section "Step 2: Adding a Bucket Policy That Makes Your Bucket Content Publicly Available," which explains how to add a bucket policy that grants public access.
The instructions do not work for a bucket created using the defaults. One must first uncheck "Block new public bucket policies" under "Public access settings"; only then can the bucket policy be added.
This point needs to be clarified.
In the given example with Account A and Account B in the same region, how can one ensure that the data-copying traffic stays within the AWS network? Can you clarify this aspect as part of this documentation?
Please update the documentation to explicitly call out that CopyObject is not a supported operation when using access points. This is not listed as a limitation at https://docs.aws.amazon.com/AmazonS3/latest/dev/access-points-restrictions-limitations.html, and it is also not explicitly stated in https://docs.aws.amazon.com/AmazonS3/latest/dev/using-access-points.html#access-points-service-api-support
On the page https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html, under the section "Restricting Access to a Specific HTTP Referer," the second policy has a caution notice stating, "Consider adding a third Sid that grants the root user s3:* actions." This is also located at https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/doc_source/example-bucket-policies.md
This guidance is incorrect. First, I assume they mean Statement, not Sid, as you can't add any additional Sids. More importantly, no statement could be added that would grant access to the root user, because the deny in the second statement would take precedence. You would instead need to replace "Principal": "*" with "NotPrincipal": { "AWS": "arn:aws:iam::123456789012:root" }. Further, you cannot exempt the root user specifically from the deny without also exempting all IAM principals in the account.
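For illustration, a minimal sketch of what the second statement might look like with that replacement; the account ID, bucket name, and referer string here are placeholders:

{
  "Sid": "DenyUnexpectedReferer",
  "Effect": "Deny",
  "NotPrincipal": { "AWS": "arn:aws:iam::123456789012:root" },
  "Action": "s3:*",
  "Resource": "arn:aws:s3:::examplebucket/*",
  "Condition": {
    "StringNotLike": { "aws:Referer": "http://www.example.com/*" }
  }
}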
The example at https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html#example-bucket-policies-use-case-4 will lock all users (including the root user) out of managing bucket policies, etc., if applied as-is. The docs should at least give a warning to this effect, or alternatively provide a better example that includes s3:* permissions for the root user.
There is a typo in the first line of the "Troubleshooting Amazon S3 by Symptom" section.
The line reads:
"The following topics lists symptoms to help ..."
instead of "The following topics list symptoms to help ..."
The following statements are incorrect:
"However, the request must not originate from the range of IP addresses specified in the condition."
"The condition in this statement identifies the 54.240.143.* range of disallowed Internet Protocol version 4 (IPv4) IP addresses."
The bucket policy actually evaluates to denying all actions for all users except those that originate from the range of IP addresses specified in the condition.
The condition in this statement identifies the 54.240.143.* range of ALLOWED Internet Protocol version 4 (IPv4) addresses.
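To see why, here is a minimal sketch of the kind of policy under discussion (the bucket name is a placeholder): the Deny applies only when the request's source IP is NOT in the range, so the listed range is the allowed one:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::examplebucket/*",
    "Condition": {
      "NotIpAddress": { "aws:SourceIp": "54.240.143.0/24" }
    }
  }]
}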
The doc says:
How Do I Enable Object-Level Logging for an S3 Bucket with CloudWatch Data Events?
However, it is linked to the CloudTrail data events page. The link text should be changed to:
How Do I Enable Object-Level Logging for an S3 Bucket with CloudTrail Data Events?
On this page I can see an s3:PutObjectRetention action, as well as an s3:object-lock-remaining-retention-days condition key.
However, the Actions, Resources, and Condition Keys (ARC) page for S3 doesn't mention either of these, nor any of the associated IAM actions or condition keys.
Neither does the S3 actions mapping page or the condition keys page.
Given the importance of Object Lock for compliance, it would be good to have fairly detailed permissions spelled out for it.
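For context, a minimal sketch of how the two might be combined in a policy statement (the bucket name and the 30-day cap are hypothetical), limiting how far retention can be extended:

{
  "Effect": "Allow",
  "Action": "s3:PutObjectRetention",
  "Resource": "arn:aws:s3:::examplebucket/*",
  "Condition": {
    "NumericLessThanEquals": { "s3:object-lock-remaining-retention-days": "30" }
  }
}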
Under the heading "Querying Amazon S3 access logs for requests using Amazon Athena," in the first note box, the explanation should say "target bucket name and target prefix," since the source bucket's logs are stored in the target bucket, and it is the target bucket that we query in Athena.
This error occurs on the page https://docs.aws.amazon.com/AmazonS3/latest/dev/using-s3-access-logs-to-identify-requests.html#querying-s3-access-logs-for-requests
"AWS SDK for C++;" (including semicolon typo) in production: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
absent from the corresponding source:
In the new S3 console, the CORS configuration must be JSON; the old XML configuration can no longer be used.
But we can't find this documented anywhere except in English, and we can't find it in this repository.
What should we do to fix the document?
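For reference, a minimal sketch of the JSON form that the new console expects (the origins, methods, and headers here are placeholders):

[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT"],
    "AllowedOrigins": ["https://www.example.com"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3000
  }
]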
Hi, there's a typo at:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
"There are costs associated with the lifecycle transistion requests. For pricing information, see Amazon S3 Pricing."
Notice "transistion" should be "transition".
But this text isn't on GitHub, although the rest of the page is.
In this doc: https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/doc_source/crr-walkthrough-4.md
The schema for replication.json is currently invalid as-is: DeleteMarkerReplication will be rejected as invalid. The document should either be updated to the new schema (and encourage users to upgrade to it) or call out that it is using the older schema.
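For illustration, a minimal sketch of a replication configuration in the newer schema, where a rule with a Filter must also carry Priority and DeleteMarkerReplication (the role and bucket ARNs are placeholders):

{
  "Role": "arn:aws:iam::123456789012:role/replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Priority": 1,
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Filter": { "Prefix": "Tax" },
      "Destination": { "Bucket": "arn:aws:s3:::destination-bucket" }
    }
  ]
}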
https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/doc_source/access-control-block-public-access.md#the-meaning-of-public lists what makes a policy considered non-public. It seems that aws:PrincipalOrgID is missing from the list of condition keys that make a bucket policy considered non-public.
try
{
    // 1. Put object - specify only the key name for the new object.
    var putRequest1 = new PutObjectRequest
    {
        BucketName = bucketName,
        Key = keyName1,
        ContentBody = "sample text"
    };
    PutObjectResponse response1 = await client.PutObjectAsync(putRequest1);

    // 2. Put the object - set ContentType and add metadata.
    var putRequest2 = new PutObjectRequest
    {
        BucketName = bucketName,
        Key = keyName2,
        FilePath = filePath,
        ContentType = "text/plain"
    };
    putRequest2.Metadata.Add("x-amz-meta-title", "someTitle");
}
The second put request, putRequest2, is never actually sent with the client; there is no call such as await client.PutObjectAsync(putRequest2) after the metadata is added.
I think AWS should provide more details about the INTELLIGENT_TIERING algorithm (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html#sc-dynamic-data-access).
What kind of operations count as object access? (see https://forums.aws.amazon.com/thread.jspa?threadID=295363)
For example, do metadata read/write operations count?
https://docs.aws.amazon.com/AmazonS3/latest/dev/selecting-content-from-objects.html says:
For a list of error codes and descriptions, see the Special Errors section of the SELECT Object Content page in the Amazon Simple Storage Service API Reference.
That link says:
For a list of special errors for this operation, see Selecting Content from Objects.
So the two documents link to each other for the errors section, but neither actually has the content.
The SLA for RTC is 99.9 according to the SLA page:
https://aws.amazon.com/s3/sla-rtc/
not 99.99. Can you please double-check and verify which SLA figure is correct?
Thanks
The documentation at https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html is inconsistent with S3 GET behavior for a deleted object (one with a delete marker) in a versioning-enabled bucket: the request returns 403 Forbidden rather than 404.
The following screenshot shows that access to the current version (which is marked as deleted) returns 403 (and not 404), while the previous version is accessible as expected.
There's a diagram of the supported lifecycle transitions on this page: https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-transition-general-considerations.html
I've never found this diagram especially easy to follow; apparently objects can move up, down, and sideways?
(Additionally, the white text on light background colors has too low a contrast ratio for AA or AAA accessibility guidelines; DEEP_ARCHIVE excluded, of course.)
I recently redrew the diagram for my own purposes:
Objects can only move downwards, following the order of the tiers.
Would you be interested in using this diagram? (Or the general idea of it, I'm not picky.)