
Amazon S3 Security Settings and Controls

© 2019 Amazon Web Services, Inc. and its affiliates. All rights reserved. This sample code is made available under the MIT-0 license. See the LICENSE file.

Errors or corrections? Contact [email protected].

Workshop Summary

In this workshop you will use IAM, S3 bucket policies, S3 Block Public Access, and AWS Config to demonstrate multiple strategies for securing an S3 bucket.

Deploy AWS resources using CloudFormation

  1. Click one of the launch links in the table below to deploy the resources using CloudFormation. Ctrl-click or right-click and open in a new tab so you don't lose this GitHub page.
Region Code Region Name Launch
us-west-1 US West (N. California) Launch in us-west-1
us-west-2 US West (Oregon) Launch in us-west-2
us-east-1 US East (N. Virginia) Launch in us-east-1
us-east-2 US East (Ohio) Launch in us-east-2
ca-central-1 Canada (Central) Launch in ca-central-1
eu-central-1 EU (Frankfurt) Launch in eu-central-1
eu-west-1 EU (Ireland) Launch in eu-west-1
eu-west-2 EU (London) Launch in eu-west-2
eu-west-3 EU (Paris) Launch in eu-west-3
eu-north-1 EU (Stockholm) Launch in eu-north-1
ap-east-1 Asia Pacific (Hong Kong) Launch in ap-east-1
ap-northeast-1 Asia Pacific (Tokyo) Launch in ap-northeast-1
ap-northeast-2 Asia Pacific (Seoul) Launch in ap-northeast-2
ap-northeast-3 Asia Pacific (Osaka-Local) Launch in ap-northeast-3
ap-southeast-1 Asia Pacific (Singapore) Launch in ap-southeast-1
ap-southeast-2 Asia Pacific (Sydney) Launch in ap-southeast-2
ap-south-1 Asia Pacific (Mumbai) Launch in ap-south-1
me-south-1 Middle East (Bahrain) Launch in me-south-1
sa-east-1 South America (São Paulo) Launch in sa-east-1
  2. Click Next on the Create Stack page.
  3. Click Next.
  4. Click Next again (skipping the Options and Advanced options sections).
  5. On the Review page, scroll to the bottom, check the boxes to acknowledge that CloudFormation will create IAM resources, then click Create stack.
  6. Click Events. Events will not auto-refresh; you will need to manually refresh the page using the refresh button on the right side of the page.
  7. Watch for S3SecurityWorkshop to reach a status of CREATE_COMPLETE.
  8. Click Outputs.
  9. Copy the name of Bucket01 into a document on your computer.

Note: Instances launched as part of this CloudFormation template may be in the initializing state for a few minutes.

Connect to the EC2 Instance using EC2 Instance Connect

  1. From the AWS console, click Services and select EC2.
  2. Select Instances from the menu on the left.
  3. Wait until the state of the S3_Workshop_Instance01 instance shows as running and all Status Checks have completed (i.e. not in Initializing state).
  4. Right-click on the S3_Workshop_Instance01 instance and select Connect from the menu.
  5. From the dialog box, select the EC2 Instance Connect option, as shown below:

  6. For the User name field, enter "ec2-user", then click Connect.

A new dialog box or tab on your browser should appear, providing you with a command line interface (CLI). Keep this open - you will use the command line on the instance throughout this workshop.

Note: The SSH session will disconnect after a period of inactivity. If your session becomes unresponsive, close the window and repeat the steps above to reconnect.

Setup AWS CLI

  1. In the CLI for the instance, run the following command to set up the AWS CLI:

    $ aws configure

    Leave the Access Key and Secret Key blank, set the region to the one where you deployed your CloudFormation template, and leave the output format as the default.

  2. Create a credentials file to be used by the AWS CLI. This will allow you to switch easily between two different users.

    $ cd ~/.aws
    $ vi credentials

    Copy and paste the following credentials file template into your vi session.

[user1]
aws_access_key_id =
aws_secret_access_key =
[user2]
aws_access_key_id =
aws_secret_access_key =
  3. From the AWS console, click Services and select IAM.
  4. Click Users in the left pane.
  5. Click s3_security_lab_user1.
  6. Click the Security credentials tab.
  7. Click Create access key.
  8. Copy the Access key ID and Secret access key into the credentials file under [user1].
  9. Click Close.
  10. Click Users in the left pane.
  11. Click s3_security_lab_user2.
  12. Click Create access key.
  13. Copy the Access key ID and Secret access key into the credentials file under [user2].
  14. Compare your credentials file to the template above and ensure the formatting matches.
  15. Save the file.
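The credentials file is standard INI syntax, so a short script can sanity-check it before you move on. A minimal sketch using only the Python standard library (the check_credentials helper is illustrative, not part of the workshop):

```python
import configparser

# The template from the workshop, before keys are pasted in.
CREDENTIALS_TEMPLATE = """\
[user1]
aws_access_key_id =
aws_secret_access_key =
[user2]
aws_access_key_id =
aws_secret_access_key =
"""

def check_credentials(text):
    """Return a list of problems found in an AWS credentials file body."""
    config = configparser.ConfigParser()
    config.read_string(text)
    problems = []
    for profile in ("user1", "user2"):
        if profile not in config:
            problems.append("missing [%s] section" % profile)
            continue
        for key in ("aws_access_key_id", "aws_secret_access_key"):
            if not config[profile].get(key, ""):
                problems.append("[%s] %s is empty" % (profile, key))
    return problems

# Before the keys are pasted in, every field is flagged:
print(check_credentials(CREDENTIALS_TEMPLATE))
```

Once both access keys are filled in, the function returns an empty list.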

Exercise #1- Require HTTPS

In this exercise we will create an S3 bucket policy that requires connections to use HTTPS.

  1. From the AWS console, click Services and select S3.
  2. Click the bucket name. (Copied from CloudFormation Outputs previously.)
  3. Click on the Permissions tab.
  4. Click Bucket Policy.
  5. Copy the bucket policy below and paste into the Bucket Policy Editor.
{
    "Statement": [
        {
            "Action": "s3:*",
            "Effect": "Deny",
            "Principal": "*",
            "Resource": "arn:aws:s3:::BUCKET_NAME/*",
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        }
    ]
}
  6. Replace BUCKET_NAME with the bucket name.

  7. Click Save.

  8. In your SSH session, run the following command. It should return a 403 error because the endpoint URL is HTTP.

    $ aws s3api head-object --key app1/file1 --endpoint-url http://s3.amazonaws.com --profile user1 --bucket ${bucket}

  9. In your SSH session, run the following command. It should succeed because it uses HTTPS.

    $ aws s3api --endpoint-url https://s3.amazonaws.com --profile user1 head-object --key app1/file1 --bucket ${bucket}
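The behavior of the two commands above follows from the policy's Bool condition. A minimal sketch of that logic (a simplified model for illustration, not the real IAM evaluator; aws:SecureTransport is "true" for HTTPS requests and "false" for HTTP):

```python
def https_policy_denies(uses_https):
    """Simplified model of the Deny statement's Bool condition."""
    # aws:SecureTransport reflects whether the request arrived over TLS.
    secure_transport = "true" if uses_https else "false"
    # The statement denies when aws:SecureTransport equals "false".
    return secure_transport == "false"

print(https_policy_denies(False))  # HTTP request  -> True, denied (403)
print(https_policy_denies(True))   # HTTPS request -> False, allowed
```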

Exercise #2- Require SSE-S3 Encryption

In this exercise we will create an S3 bucket policy that requires encryption at rest. We will also look at Default Encryption.

  1. From the AWS console, click Services and select S3.
  2. Click the bucket name. (Copied from CloudFormation Outputs previously.)
  3. Click on the Permissions tab.
  4. Click Bucket Policy.
  5. Click Delete, click Delete to confirm.
  6. Copy the bucket policy below and paste into the Bucket Policy Editor.
{
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::BUCKET_NAME/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            }
        }
    ]
}
  7. Replace BUCKET_NAME with the bucket name.
  8. Click Save.
  9. Go to your SSH session and create a small text file using the following commands.
    $ cd ~
    $ echo "123456789abcdefg" > textfile
  10. Attempt to PUT an object without encryption. The request should fail.
    $ aws s3api put-object --key text01 --body textfile --profile user1 --bucket ${bucket}
  11. PUT an object using SSE-S3. The request should succeed.
    $ aws s3api put-object --key text01 --body textfile --server-side-encryption AES256 --profile user1 --bucket ${bucket}
  12. From the AWS console, click Services and select S3.
  13. Click the bucket name. (Copied from CloudFormation Outputs previously.)
  14. Click on the Properties tab.
  15. Confirm that Default Encryption is enabled with AES-256 (SSE-S3).

Note
Bucket policies are enforced based on how the client sends the request. In this case the bucket policy denied the first attempt to PUT an object. Since Default Encryption is enabled, the first attempt would have ended up encrypted anyway; however, Default Encryption doesn't override encryption headers. For example, if Default Encryption is set to AWS-KMS and a request is sent with AES-256 (SSE-S3), the object is written with AES-256 (SSE-S3). Default Encryption behaves like a default, not an override. If you require that all objects have a certain type of encryption, the only way to enforce that requirement is with a bucket policy.
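The same kind of sketch illustrates the StringNotEquals condition (again a simplified model, not the real evaluator). The key point is that negated operators such as StringNotEquals also match when the header is absent entirely, which is why an unencrypted PUT is denied:

```python
def sse_policy_denies(sse_header):
    """Simplified model of the StringNotEquals condition.

    sse_header is the x-amz-server-side-encryption value, or None when
    the client sent no encryption header at all.
    """
    # StringNotEquals matches when the key is missing or differs from "AES256".
    return sse_header != "AES256"

print(sse_policy_denies(None))      # no header      -> True, denied
print(sse_policy_denies("aws:kms")) # SSE-KMS header -> True, denied
print(sse_policy_denies("AES256"))  # SSE-S3 header  -> False, allowed
```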

Exercise #3- Block Public ACLs using Bucket Policy

In this exercise we will create an S3 bucket policy that prevents users from assigning public ACLs to objects.

  1. From the AWS console, click Services and select S3.
  2. Click the bucket name. (Copied from CloudFormation Outputs previously.)
  3. Click on the Permissions tab.
  4. Click Bucket Policy.
  5. Click Delete, click Delete to confirm.
  6. Copy the bucket policy below and paste into the Bucket Policy Editor.
{
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::BUCKET_NAME/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "private"
                }
            }
        }
    ]
}
  7. Replace BUCKET_NAME with the bucket name.
  8. Click Save.
  9. Go to your SSH session and run the following command. The request should succeed since the default object ACL is private.
    $ aws s3api put-object --key text01 --body textfile --profile user1 --bucket ${bucket}
  10. Run the following command. The request will also succeed, even though this isn't the behavior we want.
    $ aws s3api put-object --key text01 --body textfile --acl public-read --profile user1 --bucket ${bucket}

Note
The current bucket policy allows ACLs that are private but doesn't deny anything. When trying to restrict actions against a bucket, it is important to write policies that deny the unwanted actions, not merely allow the desired ones. The current bucket policy also unintentionally allows public access to the bucket because the principal is a wildcard.

  11. Remove the existing bucket policy, then copy the bucket policy below and paste it into the Bucket Policy Editor.
{
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": [
                    "s3:PutObject",
                    "s3:PutObjectAcl"
                    ],
            "Resource": "arn:aws:s3:::BUCKET_NAME/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": [
                           "public-read",
                           "public-read-write",
                           "authenticated-read"
                    ]
                }
            }
        }
    ]
}
  12. Replace BUCKET_NAME with the bucket name.
  13. Click Save.
  14. Go to your SSH session and run the following command. The request should succeed since the default object ACL is private.
    $ aws s3api put-object --key text01 --body textfile --profile user1 --bucket ${bucket}
  15. Run the following command. The request should fail because the bucket policy now denies the public-read ACL.
    $ aws s3api put-object --key text01 --body textfile --acl public-read --profile user1 --bucket ${bucket}
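The corrected policy can be sketched the same way (a simplified model). Unlike StringNotEquals above, a plain StringEquals condition does not match when the key is absent, so a PUT that omits --acl (and therefore gets the private default) passes:

```python
BLOCKED_ACLS = ("public-read", "public-read-write", "authenticated-read")

def acl_policy_denies(requested_acl):
    """requested_acl is the x-amz-acl value, or None when --acl is omitted."""
    # StringEquals with a list matches when the value equals any entry;
    # a missing key (no --acl flag) matches nothing, so the PUT is allowed.
    return requested_acl in BLOCKED_ACLS

print(acl_policy_denies(None))          # default private ACL -> False, allowed
print(acl_policy_denies("public-read")) # -> True, denied
```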

Exercise #4- Configure S3 Block Public Access

In this exercise we will configure S3 Block Public Access, an easy way to prevent public access to your bucket.

  1. From the AWS console, click Services and select S3.

  2. Click the bucket name. (Copied from CloudFormation Outputs previously.)

  3. Click on the Permissions tab.

  4. Click Bucket Policy.

  5. Click Delete, click Delete to confirm.

  6. Click Block public access

  7. Click Edit

  8. Select Block public access to buckets and objects granted through new access control lists (ACLs)

  9. Click Save

  10. Type confirm.
  11. Click Confirm.
  12. Go to your SSH session and run the following command. The request should succeed since the default object ACL is private.
    $ aws s3api put-object --key text01 --body textfile --profile user1 --bucket ${bucket}
  13. Run the following command. The request should fail because Block Public Access now rejects the public-read ACL.
    $ aws s3api put-object --key text01 --body textfile --acl public-read --profile user1 --bucket ${bucket}
  14. From the AWS console, click Services and select S3.
  15. Click the bucket name. (Copied from CloudFormation Outputs previously.)
  16. Click on the Permissions tab.
  17. Click Block public access.
  18. Click Edit.
  19. Uncheck Block public access to buckets and objects granted through new access control lists (ACLs).
  20. Click Save.
  21. Type confirm.
  22. Click Confirm.

Exercise #5- Restrict Access to a S3 VPC Endpoint

In this exercise we will configure an S3 VPC endpoint and a bucket policy that limits access to requests that pass through the endpoint. This is an easy way to restrict access to clients in your VPC.

  1. In the AWS Console go to VPC.
  2. Click Endpoints.
  3. Click Create Endpoint.
  4. Select the S3 service name.
  5. Select the S3SecurityWorkshopVPC from the drop down menu.
  6. Do not select any route tables for now.
  7. Leave Policy set to Full Access
  8. Click Create endpoint.
  9. Click Close
  10. Record the Endpoint ID.
  11. From the AWS console, click Services and select S3.
  12. Click the bucket name. (Copied from CloudFormation Outputs previously.)
  13. Click on the Permissions tab.
  14. Click Bucket Policy.
  15. Copy the bucket policy below and paste into the Bucket Policy Editor.
{
    "Statement": [
        {
            "Action": "s3:*",
            "Effect": "Deny",
            "Resource": "arn:aws:s3:::BUCKET_NAME/*",
            "Condition": {
                "StringNotEquals": {
                    "aws:SourceVpce": "VPC_ENDPOINT_ID"
                }
            },
            "Principal": "*"
        }
    ]
}
  16. Replace BUCKET_NAME with the bucket name and VPC_ENDPOINT_ID with the Endpoint ID.
  17. Click Save.
  18. Go to your SSH session and run the following command. The request will fail since the S3 VPC endpoint isn't yet associated with a route table.
    $ aws s3api head-object --key app1/file1 --profile user1 --bucket ${bucket}
  19. In the AWS Console go to VPC.
  20. Click Endpoints.
  21. The VPC endpoint should be selected. Select Actions, then click Manage Route Tables.
  22. Select the route table that is associated with S3SecurityWorkshopSubnet.
  23. Click Modify Route Tables.
  24. Go to your SSH session and run the following command. The request should now succeed.
    $ aws s3api head-object --key app1/file1 --profile user1 --bucket ${bucket}
  25. From the AWS console, click Services and select S3.
  26. Click the bucket name. (Copied from CloudFormation Outputs previously.)
  27. Click on the Permissions tab.
  28. Click Bucket Policy.
  29. Click Delete, click Delete to confirm.

Exercise #6- Use AWS Config to Detect a Public Bucket

  1. From the AWS console, click Services and select Config.
  2. If you haven't used AWS Config before, you will be brought to the Get started page. If you have already set up AWS Config, skip the setup steps and go straight to step 7.
  3. Go to the bottom of the page; under AWS Config role, select Create AWS Config service-linked role. If you have already used AWS Config in another region, select Use an existing AWS Config service-linked role instead.
  4. Click Next.
  5. Click Skip.
  6. Click Confirm.

Note
If you receive an error regarding S3, AWS Config was used previously in another region. Click Previous, Previous; under Amazon S3 Bucket, select Choose a bucket from your account (the bucket name will start with config-bucket). Click Next, click Skip, click Confirm.

  7. Click Rules.
  8. Click Add Rule.
  9. Filter the rules by typing S3 into the search box.
  10. Click s3_bucket_public_write_prohibited.
  11. Click Next, then Confirm.
  12. Click Rules in the left pane.
  13. The rule needs time to evaluate. Refresh the page until you see Compliant.
14. From the AWS console, click Services and select S3.
15. Click the bucket name. (Copied from CloudFormation Outputs previously.)
16. Click on the Permissions tab.
17. Click on Access Control List.
18. Under Public access, select Everyone.
19. Check Write objects in the pop up window.
20. Click Save.

21. From the AWS console, click Services and select Config.
22. Click Rules.
23. Click s3_bucket_public_write_prohibited.
24. Click Re-evaluate.
25. You will need to refresh the screen. Your bucket should be Noncompliant. If everything is still compliant, wait a few minutes and Re-evaluate a second time.
26. From the AWS console, click Services and select S3.
27. Click the bucket name. (Copied from CloudFormation Outputs previously.)
28. Click on the Permissions tab.
29. Click on Access Control List.
30. Under Public access, select Everyone.
31. Uncheck Write objects in the pop up window.
32. Click Save.

Exercise #7- Restrict Access to an IP Address

Create an S3 bucket policy that restricts access to your S3 bucket to only the IP address of the EC2 instance.
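One possible solution is sketched below. This is an illustrative sketch, not the only answer: BUCKET_NAME and INSTANCE_PUBLIC_IP are placeholders you must fill in, and since the instance reaches S3 over the internet in this setup, the relevant address is the instance's public IP. Be aware that an aws:SourceIp deny like this also blocks your own access from any other address.

```json
{
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKET_NAME/*",
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": "INSTANCE_PUBLIC_IP/32"
                }
            }
        }
    ]
}
```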

Exercise #8- Restrict Access to an IP Address and User Restrictions

Add to your S3 bucket policy from Exercise #7:

  • s3_security_lab_user1 should only be able to read objects.
  • s3_security_lab_user2 should be able to read and write objects.
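One possible approach, sketched under assumptions (BUCKET_NAME, INSTANCE_PUBLIC_IP, and ACCOUNT_ID are placeholders; aws:PrincipalArn is a global condition key identifying the caller): keep the IP deny from Exercise #7 and add a second statement that denies writes to anyone other than s3_security_lab_user2, leaving both users with whatever read access their IAM policies grant.

```json
{
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKET_NAME/*",
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": "INSTANCE_PUBLIC_IP/32"
                }
            }
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::BUCKET_NAME/*",
            "Condition": {
                "ArnNotEquals": {
                    "aws:PrincipalArn": "arn:aws:iam::ACCOUNT_ID:user/s3_security_lab_user2"
                }
            }
        }
    ]
}
```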

Clean Up Resources

To ensure you don't continue to be billed for services in your account from this workshop, follow the steps below to remove all resources created during the workshop.

  1. In your SSH session run the following command.
    $ aws s3 rm s3://${bucket} --recursive --profile user1
  2. From the AWS console, click Services and select Config.
  3. Click Rules.
  4. Click s3_bucket_public_write_prohibited.
  5. Click Edit.
  6. Click Delete Rule. (You must scroll down.)
  7. Click Delete.
  8. In the AWS Console go to VPC.
  9. Click Endpoints.
  10. Select the Endpoint created earlier, select Actions, click Delete Endpoint.
  11. Click Yes, Delete.
  12. From the AWS console, click Services and select CloudFormation.
  13. Select S3SecurityWorkshop.
  14. Click Delete.
  15. Click Delete stack.
  16. It will take a few minutes to delete everything. Refresh the page to see an updated status. S3SecurityWorkshop will be removed from the list if everything has been deleted correctly.
