
Cloudsplaining is an AWS IAM Security Assessment tool that identifies violations of least privilege and generates a risk-prioritized report.

Home Page: https://cloudsplaining.readthedocs.io/

License: BSD 3-Clause "New" or "Revised" License



NOTE: This repo/project has been archived by Salesforce. We encourage collaborators to fork this code to a new home.

Cloudsplaining

Cloudsplaining is an AWS IAM Security Assessment tool that identifies violations of least privilege and generates a risk-prioritized HTML report.


Documentation

For full documentation, please visit the project on ReadTheDocs.

Overview

Cloudsplaining identifies violations of least privilege in AWS IAM policies and generates a pretty HTML report with a triage worksheet. It can scan all the policies in your AWS account or it can scan a single policy file.

It helps identify IAM actions that do not leverage resource constraints. It also helps prioritize remediation by flagging IAM policies that present the following risks, without restriction, to the AWS account in question:

  • Data Exfiltration (s3:GetObject, ssm:GetParameter, secretsmanager:GetSecretValue)
  • Infrastructure Modification
  • Resource Exposure (the ability to modify resource-based policies)
  • Privilege Escalation (based on Rhino Security Labs research)

Cloudsplaining also identifies IAM Roles that can be assumed by AWS compute services (such as EC2, ECS, EKS, or Lambda), as they can present greater risk than user-defined roles, especially if the compute service runs on an instance that is directly or indirectly exposed to the internet. Flagging these roles is particularly useful to penetration testers (or attackers) under certain scenarios. For example, if an attacker obtains privileges to execute ssm:SendCommand and there are privileged EC2 instances with the SSM agent installed, they effectively have the privileges of those EC2 instances. Remote code execution via the AWS Systems Manager Agent was already a known escalation/exploitation path, but Cloudsplaining makes the process of identifying these cases easier. See the sample report for some examples.

You can also specify a custom exclusions file to filter out results that are False Positives for various reasons. For example, User Policies are permissive by design, whereas System roles are generally more restrictive. You might also have exclusions that are specific to your organization's multi-account strategy or AWS application architecture.

Motivation

Policy Sentry revealed that it is finally possible to write IAM policies according to least privilege in a scalable manner. Before Policy Sentry was released, it was too easy to find IAM policy documents that lacked resource constraints. Consider the policy below, which allows the IAM principal (a role or user) to run s3:PutObject on any S3 bucket in the AWS account:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "*"
    }
  ]
}

This is bad. Ideally, access should be restricted according to resource ARNs, like so:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}

Policy Sentry makes it really easy to do this. Once Infrastructure as Code developers or AWS administrators gain familiarity with the tool (which is quite easy to use), we've found that adoption starts very quickly. However, if you've been using AWS for a while, there is probably a very large backlog of IAM policies that could use an uplift. If you have hundreds of AWS accounts with dozens of policies in each, how can you lock down those accounts by programmatically identifying the policies that should be fixed?

That's why we wrote Cloudsplaining.

Cloudsplaining identifies violations of least privilege in AWS IAM policies and generates a pretty HTML report with a triage worksheet. It can scan all the policies in your AWS account or it can scan a single policy file.

Installation

Homebrew

brew tap salesforce/cloudsplaining https://github.com/salesforce/cloudsplaining
brew install cloudsplaining

Pip3

pip3 install --user cloudsplaining
  • Now you should be able to execute cloudsplaining from the command line by running cloudsplaining --help.

Shell completion

To enable Bash completion, put this in your .bashrc:

eval "$(_CLOUDSPLAINING_COMPLETE=source cloudsplaining)"

To enable ZSH completion, put this in your .zshrc:

eval "$(_CLOUDSPLAINING_COMPLETE=source_zsh cloudsplaining)"

Scanning a single IAM policy

You can also scan a single policy file to identify risks instead of an entire account.

cloudsplaining scan-policy-file --input-file examples/policies/explicit-actions.json

The output will include a finding description and a list of the IAM actions that do not leverage resource constraints.

The output will resemble the following:

Issue found: Data Exfiltration
Actions: s3:GetObject

Issue found: Resource Exposure
Actions: ecr:DeleteRepositoryPolicy, ecr:SetRepositoryPolicy, s3:BypassGovernanceRetention, s3:DeleteAccessPointPolicy, s3:DeleteBucketPolicy, s3:ObjectOwnerOverrideToBucketOwner, s3:PutAccessPointPolicy, s3:PutAccountPublicAccessBlock, s3:PutBucketAcl, s3:PutBucketPolicy, s3:PutBucketPublicAccessBlock, s3:PutObjectAcl, s3:PutObjectVersionAcl

Issue found: Unrestricted Infrastructure Modification
Actions: ecr:BatchDeleteImage, ecr:CompleteLayerUpload, ecr:CreateRepository, ecr:DeleteLifecyclePolicy, ecr:DeleteRepository, ecr:DeleteRepositoryPolicy, ecr:InitiateLayerUpload, ecr:PutImage, ecr:PutImageScanningConfiguration, ecr:PutImageTagMutability, ecr:PutLifecyclePolicy, ecr:SetRepositoryPolicy, ecr:StartImageScan, ecr:StartLifecyclePolicyPreview, ecr:TagResource, ecr:UntagResource, ecr:UploadLayerPart, s3:AbortMultipartUpload, s3:BypassGovernanceRetention, s3:CreateAccessPoint, s3:CreateBucket, s3:DeleteAccessPoint, s3:DeleteAccessPointPolicy, s3:DeleteBucket, s3:DeleteBucketPolicy, s3:DeleteBucketWebsite, s3:DeleteObject, s3:DeleteObjectTagging, s3:DeleteObjectVersion, s3:DeleteObjectVersionTagging, s3:GetObject, s3:ObjectOwnerOverrideToBucketOwner, s3:PutAccelerateConfiguration, s3:PutAccessPointPolicy, s3:PutAnalyticsConfiguration, s3:PutBucketAcl, s3:PutBucketCORS, s3:PutBucketLogging, s3:PutBucketNotification, s3:PutBucketObjectLockConfiguration, s3:PutBucketPolicy, s3:PutBucketPublicAccessBlock, s3:PutBucketRequestPayment, s3:PutBucketTagging, s3:PutBucketVersioning, s3:PutBucketWebsite, s3:PutEncryptionConfiguration, s3:PutInventoryConfiguration, s3:PutLifecycleConfiguration, s3:PutMetricsConfiguration, s3:PutObject, s3:PutObjectAcl, s3:PutObjectLegalHold, s3:PutObjectRetention, s3:PutObjectTagging, s3:PutObjectVersionAcl, s3:PutObjectVersionTagging, s3:PutReplicationConfiguration, s3:ReplicateDelete, s3:ReplicateObject, s3:ReplicateTags, s3:RestoreObject, s3:UpdateJobPriority, s3:UpdateJobStatus

Scanning an entire AWS Account

Downloading Account Authorization Details

We can scan an entire AWS account and generate reports. To do this, we leverage the AWS IAM get-account-authorization-details API call, which downloads a large JSON file (around 100KB per account) that contains all of the IAM details for the account. This includes data on users, groups, roles, customer-managed policies, and AWS-managed policies.

  • You must have AWS credentials configured that can be used by the CLI.

  • You must have the privileges to run iam:GetAccountAuthorizationDetails. The arn:aws:iam::aws:policy/SecurityAudit policy includes this, as do many others that allow Read access to the IAM Service.

  • To download the account authorization details, ensure you are authenticated to AWS, then run Cloudsplaining's download command:

cloudsplaining download

  • If you prefer to use your ~/.aws/credentials file instead of environment variables, you can specify the profile name:

cloudsplaining download --profile myprofile

It will download a JSON file to your current directory that contains your account authorization detail information.

Create Exclusions file

The Cloudsplaining tool does not attempt to understand the context behind everything in your AWS account. Some of that context can be determined programmatically: whether the policy is applied to an instance profile, whether the policy is attached, whether inline IAM policies are in use, and whether AWS managed policies are in use. But only you know the context behind the design of your AWS infrastructure and IAM strategy.

As such, it's important to eliminate False Positives that are context-dependent. You can do this with an exclusions file. We've included a command that will generate an exclusions file for you so you don't have to remember the required format.

You can create an exclusions template via the following command:

cloudsplaining create-exclusions-file

This will generate a file in your current directory titled exclusions.yml.

Now when you run the scan command, you can use the exclusions file like this:

cloudsplaining scan --exclusions-file exclusions.yml --input-file examples/files/example.json --output examples/files/

For more information on the structure of the exclusions file, see Filtering False Positives.

Scanning the Authorization Details file

Now that we've downloaded the account authorization file, we can scan all of the AWS IAM policies with cloudsplaining.

Run the following command:

cloudsplaining scan --exclusions-file exclusions.yml --input-file examples/files/example.json --output examples/files/

It will create an HTML report.

It will also create a raw JSON data file:

  • default-iam-results.json: This contains the raw JSON output of the report. You can use this data file for operating on the scan results for various purposes. For example, you could write a Python script that parses this data and opens up automated JIRA issues or Salesforce Work Items. An example entry is shown below. The full example can be viewed at examples/files/iam-results-example.json
{
    "example-authz-details": [
        {
            "AccountID": "012345678901",
            "ManagedBy": "Customer",
            "PolicyName": "InsecureUserPolicy",
            "Arn": "arn:aws:iam::012345678901:user/userwithlotsofpermissions",
            "ActionsCount": 2,
            "ServicesCount": 1,
            "Actions": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Services": [
                "s3"
            ]
        }
    ]
}
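As a minimal sketch of the kind of post-processing script mentioned above (this is not part of Cloudsplaining itself, and it assumes only the fields shown in the example entry):

```python
import json

# The structure below mirrors the example entry from default-iam-results.json.
results_json = """
{
    "example-authz-details": [
        {
            "AccountID": "012345678901",
            "ManagedBy": "Customer",
            "PolicyName": "InsecureUserPolicy",
            "Arn": "arn:aws:iam::012345678901:user/userwithlotsofpermissions",
            "ActionsCount": 2,
            "ServicesCount": 1,
            "Actions": ["s3:PutObject", "s3:PutObjectAcl"],
            "Services": ["s3"]
        }
    ]
}
"""

def summarize(results):
    """Return (policy_name, actions_count) pairs for customer-managed findings."""
    rows = []
    for findings in results.values():
        for finding in findings:
            if finding["ManagedBy"] == "Customer":
                rows.append((finding["PolicyName"], finding["ActionsCount"]))
    return rows

summary = summarize(json.loads(results_json))
print(summary)  # [('InsecureUserPolicy', 2)]
```

From here, each row could feed a ticketing API call, a spreadsheet, or whatever triage workflow your team uses.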

See the examples/files folder for sample output.

Filtering False Positives

Resource constraints are best practice - especially for system roles/instance profiles - but sometimes, these are by design. For example, consider a situation where a custom IAM policy is used on an instance profile for an EC2 instance that provisions Terraform. In this case, broad permissions are design requirements - so we don't want to include these in the results.

You can create an exclusions template via the following command:

cloudsplaining create-exclusions-file

This will generate a file in your current directory titled exclusions.yml.

The default exclusions file looks like this:

# Policy names to exclude from evaluation
# Suggestion: Add policies here that are known to be overly permissive by design, after you run the initial report.
policies:
  - "AWSServiceRoleFor*"
  - "*ServiceRolePolicy"
  - "*ServiceLinkedRolePolicy"
  - "AdministratorAccess" # Otherwise, this will take a long time
  - "service-role*"
  - "aws-service-role*"
# Don't evaluate these roles, users, or groups as part of the evaluation
roles:
  - "service-role*"
  - "aws-service-role*"
users:
  - ""
groups:
  - ""
# Read-only actions to include in the results, such as s3:GetObject
# By default, it includes Actions that could lead to Data Exfiltration
include-actions:
  - "s3:GetObject"
  - "ssm:GetParameter"
  - "ssm:GetParameters"
  - "ssm:GetParametersByPath"
  - "secretsmanager:GetSecretValue"
# Write actions to exclude from the results, such as kms:Decrypt
exclude-actions:
  - ""
  • Make any additions or modifications that you want.
    • Under policies, list the names of the policies that you want to exclude.
    • If you want to exclude a role titled MyRole, list MyRole or MyR* in the roles list.
    • You can follow the same approach for the users and groups lists.
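The patterns above behave like shell-style globs. As a rough illustration of how such wildcard matching can work (using Python's fnmatch; not necessarily Cloudsplaining's exact implementation):

```python
from fnmatch import fnmatch

def is_excluded(name, patterns):
    # Case-insensitive wildcard match, e.g. "MyR*" matches "MyRole".
    # Empty-string entries (the default placeholders) are ignored.
    return any(fnmatch(name.lower(), p.lower()) for p in patterns if p)

roles_excluded = ["service-role*", "aws-service-role*", "MyR*"]
print(is_excluded("MyRole", roles_excluded))        # True
print(is_excluded("DeveloperRole", roles_excluded))  # False
```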

Now when you run the scan command, you can use the exclusions file like this:

cloudsplaining scan --exclusions-file exclusions.yml --input-file examples/files/example.json --output examples/files/

Scanning Multiple AWS Accounts

If your IAM user or IAM role has sts:AssumeRole permissions to a common IAM role across multiple AWS accounts, you can use the scan-multi-account command.

This diagram depicts how the process works:

Diagram for scanning multiple AWS accounts with Cloudsplaining

Note: If you are new to setting up cross-account access, check out the official AWS Tutorial on Delegating access across AWS accounts using IAM roles. That can help you set up the architecture above.

  • First, you'll need to create the multi-account config file. Run the following command:

cloudsplaining create-multi-account-config-file -o multi-account-config.yml

  • This will generate a file called multi-account-config.yml with the following contents:
accounts:
  default_account: 123456789012
  prod: 123456789013
  test: 123456789014

Note: Observe how the format of the file above includes account_name: accountID. Edit the file contents to match your desired account name and account ID. Include as many account IDs as you like.

For the next step, let's say that:

  • We have a role in the target accounts that is called CommonSecurityRole.
  • The credentials for your IAM user are under the AWS Credentials profile called scanning-user.
  • That user has sts:AssumeRole permissions to assume the CommonSecurityRole in all your target accounts specified in the YAML file we created previously.
  • You want to save the output to an S3 bucket called my-results-bucket

Using the data above, you can run the following command:

cloudsplaining scan-multi-account \
    -c multi-account-config.yml \
    --profile scanning-user \
    --role-name CommonSecurityRole \
    --output-bucket my-results-bucket

Note that if you run the above without the --profile flag, it will execute in the standard AWS Credentials order of precedence (i.e., Environment variables, credentials profiles, ECS container credentials, then finally EC2 Instance Profile credentials).
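Conceptually, the multi-account scan boils down to assuming the named role in each account from the config file. The role ARNs it would target can be derived like this (a sketch of the idea, not the tool's actual internals):

```python
# Accounts mapping as it appears in multi-account-config.yml.
accounts = {
    "default_account": "123456789012",
    "prod": "123456789013",
    "test": "123456789014",
}

def target_role_arns(accounts, role_name):
    """Build the ARN of the common role (--role-name) in every target account."""
    return {
        name: f"arn:aws:iam::{account_id}:role/{role_name}"
        for name, account_id in accounts.items()
    }

arns = target_role_arns(accounts, "CommonSecurityRole")
print(arns["prod"])  # arn:aws:iam::123456789013:role/CommonSecurityRole
```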

Cheatsheet

# Download authorization details
cloudsplaining download
# Download from a specific AWS profile
cloudsplaining download --profile someprofile

# Scan Authorization details
cloudsplaining scan --input-file default.json
# Scan Authorization details with custom exclusions
cloudsplaining scan --input-file default.json --exclusions-file exclusions.yml

# Scan Policy Files
cloudsplaining scan-policy-file --input-file examples/policies/wildcards.json
cloudsplaining scan-policy-file --input-file examples/policies/wildcards.json  --exclusions-file examples/example-exclusions.yml

# Scan Multiple Accounts
# Generate the multi account config file
cloudsplaining create-multi-account-config-file -o accounts.yml
cloudsplaining scan-multi-account -c accounts.yml -r TargetRole --output-directory ./

FAQ

Will it scan all policies by default?

No, it will only scan policies that are attached to IAM principals.

Will the download command download all policy versions?

Not by default. If you want to do this, specify the --include-non-default-policy-versions flag. Note that the scan tool does not currently operate on non-default versions.

I followed the installation instructions but can't execute the program via command line at all. What do I do?

This is likely an issue with your PATH. Your PATH environment variable does not include the directory where pip3 installs binaries. On a Mac, you can likely fix this by entering the command below, adjusting for the Python version you have installed. YMMV.

export PATH=$HOME/Library/Python/3.7/bin/:$PATH

I followed the installation instructions, but I am receiving a ModuleNotFoundError that says No module named policy_sentry.analysis.expand. What should I do?

Try upgrading to the latest version of Cloudsplaining. This error was fixed in version 0.0.10.



Contributors

acknosyn, actions-user, amityadav2026, dependabot-preview[bot], dependabot[bot], dgwhited, fabaff, fruechel, gruebel, henryhoggard, iann0036, ismailyenigul, jacobappleton-orbis, jhutchings1, jimjag, kmcquade, meghasfdc, mrpool404, nitrocode, njgibbon, pyup-bot, reetasingh, rohanshenoy96, saikirankv, schosterbarak, svc-scm, verkaufer


cloudsplaining's Issues

all_allowed_actions returns denied actions

Calling something like RoleDetail.all_allowed_actions will return actions from Deny statements, too.

https://github.com/salesforce/cloudsplaining/blob/master/cloudsplaining/scan/role_details.py#L153 -> https://github.com/salesforce/cloudsplaining/blob/master/cloudsplaining/scan/policy_document.py#L49 -> https://github.com/salesforce/cloudsplaining/blob/master/cloudsplaining/scan/statement_detail.py#L144

I think there are two possible places to fix this:

  1. In PolicyDocument because its property name is very explicit
  2. In StatementDetail because the expanded_actions docstring says "Expands the full list of allowed actions from the Policy/"

Because it's not obvious which of those is better, I didn't make a PR so let me know what's better and I'll make a fix.

FWIW the reason I stumbled on this was a policy that denies based on token timestamp:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "*"
            ],
            "Resource": [
                "*"
            ],
            "Condition": {
                "DateLessThan": {
                    "aws:TokenIssueTime": "<time>"
                }
            }
        }
    ]
}
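A fix along either of the lines suggested above would amount to skipping Deny statements before collecting actions. As a rough illustration (simplified, without policy_sentry's wildcard expansion, and not the actual PolicyDocument code):

```python
# Sketch of the proposed fix: only collect actions from Allow statements,
# so a Deny-only policy (like the token-timestamp example above) contributes
# no "allowed" actions.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:PutObject"], "Resource": "*"},
        {"Effect": "Deny", "Action": ["*"], "Resource": ["*"]},
    ],
}

def all_allowed_actions(policy):
    actions = set()
    for statement in policy["Statement"]:
        if statement.get("Effect") != "Allow":
            continue  # skip Deny statements entirely
        listed = statement.get("Action", [])
        if isinstance(listed, str):
            listed = [listed]
        actions.update(listed)
    return actions

print(all_allowed_actions(policy))  # {'s3:PutObject'}
```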

Bug in scan-policy-file

Command:

cloudsplaining scan-policy-file --input ~/Code/GitHub/kmcquade/cloudsplaining/test/files/test_policy_file.json

Result:


Traceback (most recent call last):
  File "/usr/local/bin/cloudsplaining", line 33, in <module>
    sys.exit(load_entry_point('cloudsplaining==0.1.7', 'console_scripts', 'cloudsplaining')())
  File "/usr/local/Cellar/cloudsplaining/0.1.7/libexec/lib/python3.8/site-packages/cloudsplaining/bin/cli.py", line 32, in main
    cloudsplaining()
  File "/usr/local/Cellar/cloudsplaining/0.1.7/libexec/lib/python3.8/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/Cellar/cloudsplaining/0.1.7/libexec/lib/python3.8/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/Cellar/cloudsplaining/0.1.7/libexec/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/Cellar/cloudsplaining/0.1.7/libexec/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/Cellar/cloudsplaining/0.1.7/libexec/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/Cellar/cloudsplaining/0.1.7/libexec/lib/python3.8/site-packages/cloudsplaining/command/scan_policy_file.py", line 80, in scan_policy_file
    if finding["PrivilegeEscalation"]:
TypeError: string indices must be integers

Exclude by SID

Add ability to exclude by SID. This will help in the per-policy self service validation wizard we are working on.

Identify roles that are assumable by Compute Services

It will be very helpful in triage/prioritization as well as pentesting/config reviews to identify which roles are used as instance profiles, lambda roles, ECS task roles, etc. We can start with Instance Profiles, or at least looking at the AssumeRole Policy to see if the role can be passed to an AWS Service.

Then later we can determine if it is really at the edge

Report should have IAM Principals in separate tab

That's more relevant than "these are used" - which is the current approach. Right now, findings will only show up for compute instances if they have the AssumeRole to an AWS Compute service AND they have a customer-managed policy (should show up in the RolePolicyList).

I'd have to evaluate the AWS managed policies first, have a list of the AWS managed policies that fail, and if that shows up in a principal's AttachedManagedPolicies block, apply the finding to that principal.

It also could lead to more findings of overpermissive policies that can be attached to compute instances.

Bug: If a principal with an inline policy is excluded, but the inline policy is not, the inline policy still shows up in the report

That is because the is_excluded attribute of Inline Policies only checks to see if the policy name or policy ID (a SHA256 hash of its contents) was specifically marked as excluded:

self.exclusions = exclusions
self.is_excluded = self._is_excluded(exclusions)

def _is_excluded(self, exclusions):
    """Determine whether the policy name or policy ID is excluded"""
    return bool(
        exclusions.is_policy_excluded(self.policy_name)
        or exclusions.is_policy_excluded(self.policy_id)
    )

It does not currently check to see if the principals that it is attached to are excluded. That is an issue. If all the principals that are attached to the policy are excluded, the policy should not show up in the report.

We can either address this through modifying the python above somehow, or modifying the javascript utility functions in the report. I would prefer the former.

Also need to look into this for managed policies.
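The suggested behavior could be sketched as follows. The function and argument names here are illustrative, not the actual Cloudsplaining internals:

```python
# Treat a policy as excluded if the policy itself is excluded, or if every
# principal it is attached to is excluded.
def policy_is_excluded(policy_name, attached_principals,
                       excluded_policies, excluded_principals):
    if policy_name in excluded_policies:
        return True
    # Only suppress the policy when it has principals and ALL are excluded.
    return bool(attached_principals) and all(
        p in excluded_principals for p in attached_principals
    )

print(policy_is_excluded("InlinePolicy", ["MyAdminRole"],
                         set(), {"MyAdminRole"}))                 # True
print(policy_is_excluded("InlinePolicy", ["MyAdminRole", "Other"],
                         set(), {"MyAdminRole"}))                 # False
```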

iam-results-default.json is missing policies despite an empty exclusions file

Hi,
I had a question about the absence of some policies in the iam-results-default.json file.

Some policies, although present in the IAM Principals tab (in my case as an inline user policy) and in the default.json file, are not detailed in either the "Customer Policies" tab or the iam-results-default.json file.
Additional Details:

  • I am using cloudsplaining version 0.0.10
  • The issue has been replicated in two different environments
  • I am using an exclusions file with empty strings and have attached it with this message.
  • Some other inline policies do show up in iam-results-default.json file and the "Customer Policies" tab, so I don't think the issue stems from it being an inline policy.

For now, as a workaround, I am running a python script to scrape the policy details on all the missing policies, the only downside being that we then miss a direct threat assessment on those policies.

Would you have any suggestions on how we can address this issue?
Please let me know if I can provide more details.

exclusions.txt

TypeError: 'NoneType' object is not iterable

Traceback (most recent call last):
  File "/usr/local/bin/cloudsplaining", line 30, in <module>
    cloudsplaining()
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/cloudsplaining/command/scan.py", line 82, in scan
    scan_account_authorization_file(input, exclusions_cfg, output, all_access_levels, skip_open_report)
  File "/usr/local/lib/python3.7/site-packages/cloudsplaining/command/scan.py", line 118, in scan_account_authorization_file
    exclusions_cfg, modify_only=True
  File "/usr/local/lib/python3.7/site-packages/cloudsplaining/scan/authorization_details.py", line 233, in missing_resource_constraints
    return self.findings.json
  File "/usr/local/lib/python3.7/site-packages/cloudsplaining/output/findings.py", line 41, in json
    these_findings.append(finding.json)
  File "/usr/local/lib/python3.7/site-packages/cloudsplaining/output/findings.py", line 183, in json
    "PermissionsManagementActions": self.permissions_management_actions_without_constraints,
  File "/usr/local/lib/python3.7/site-packages/cloudsplaining/output/findings.py", line 140, in permissions_management_actions_without_constraints
    for action in self.policy_document.permissions_management_without_constraints:
  File "/usr/local/lib/python3.7/site-packages/cloudsplaining/scan/policy_document.py", line 99, in permissions_management_without_constraints
    if statement.permissions_management_actions_without_constraints:
  File "/usr/local/lib/python3.7/site-packages/cloudsplaining/scan/statement_detail.py", line 195, in permissions_management_actions_without_constraints
    self.expanded_actions, "Permissions management"
  File "/usr/local/lib/python3.7/site-packages/policy_sentry/querying/actions.py", line 285, in remove_actions_not_matching_access_level
    for action in actions_list:
TypeError: 'NoneType' object is not iterable

UI Refactoring and improvements

There are a lot of UI related bugs/enhancements that I have queued up. I'm addressing a lot of those bugs in a major refactoring process I am undergoing right now. I'm still going with the single-file HTML approach, but I'm leveraging Vue.js and some other things to display the results in a more efficient way for readers.

If anyone is interested in helping out or collaborating, please ping me.

Excluded actions show up in results

Try adding the following actions to the exclusions list:

[
  "autoscaling:SetDesiredCapacity",
  "autoscaling:TerminateInstanceInAutoScalingGroup",
  "autoscaling:UpdateAutoScalingGroup"
]

Notice how they will still show up in the results.

I will fix this next week.

Support for roles

Hi Team,
I'm wondering if cloudsplaining download supports roles or account IDs for accessing different accounts.
My company policy doesn't allow credentials in the .aws/credentials file, and I want to set up a single instance for all my environments.

Thanks for your time

Add new service wildcard finding

Some other tools flag service_prefix:* as a finding. That's pretty basic and our tool goes way beyond that. However, we should probably add this one as a separate finding, since it can be helpful when prioritizing which policies to remediate.
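The proposed check is simple to sketch (illustrative only; not the shipped finding logic):

```python
# Flag actions of the form service_prefix:* in a statement's Action list.
# A bare "*" is a separate, even broader case, so it is excluded here.
def service_wildcard_actions(actions):
    return [a for a in actions if a.endswith(":*") and a != "*"]

print(service_wildcard_actions(["s3:*", "ec2:DescribeInstances", "iam:*"]))
# ['s3:*', 'iam:*']
```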

Automation contributions wanted: GitHub actions testing to ensure that no console errors appear in generated VueJS Report

Hi all,

I know that many are interested in contributing to this tool. I wanted to highlight a specific opportunity for contributions.

With the upcoming 0.2.0 release, we are leveraging Vue.js, and injecting our webpack generated JavaScript files into the HTML report template via Jinja2. This information is covered in the documentation here.

We do have a GitHub action that runs the NodeJS tests with npm install and runs the build with npm build - however, there is currently no testing to ensure that the built report will be functional.

I speculate the best approach to engineering this properly would include:

  • Run GitHub action that runs Selenium
  • Selenium Webdriver test checks to make sure that the compiled example report contains some basic elements. If it does, then we assume that the report compiles successfully.

(Disclaimer: I have not written Selenium tests before)

This will have a huge impact on the ability for others to collaborate on this project. If you are interested in helping out, please ping me and I'm happy to connect and collaborate!

Allow for more granularity in exclusions

Right now, you can either allowlist or blocklist in an exclusions file: entire principal names, entire actions, or entire policy names. However, this can have some unintended side effects. For example:

If we exclude AdministratorAccess under policies, this excludes AdministratorAccess from showing up anywhere, including if AdministratorAccess is assigned to an EC2 instance profile (yikes).

policies:
  - "AdministratorAccess"
roles:
  - ""
users:
  - ""
groups:
  - ""
include-actions:
  - "s3:GetObject"
exclude-actions:
  - ""

Instead, we should allow for some granularity, where we can still exclude entire policies from the scan, but we can also specify which roles/groups/users are allowed to have those policies. For example:

policies:
  - ""
roles:
  - "MyRole": AdministratorAccess
users:
  - "Obama":
      - "AdministratorAccess"
      - "PlzComeBack2Us"
groups:
  - ""
include-actions:
  - "s3:GetObject"
exclude-actions:
  - ""

Also, I might want to consider making the values under each one of these top-level keys object types instead of list types. There's no need for it to be list types, I think. Then it would look like:

policies:
    ""
roles:
    MyRole: AdministratorAccess
users:
    Obama:
        AdministratorAccess
        PlzComeBack2Us
groups:
    ""
include-actions:
    "s3:GetObject"
exclude-actions:
    ""
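If the proposed schema were adopted, consuming it could look roughly like this. The keys and values mirror the hypothetical proposal above; none of this is a shipped feature:

```python
# Parsed form of the proposed granular exclusions: each principal maps to the
# policies it is allowed to have without generating a finding.
exclusions = {
    "roles": {"MyRole": ["AdministratorAccess"]},
    "users": {"Obama": ["AdministratorAccess", "PlzComeBack2Us"]},
}

def is_finding_excluded(principal_type, principal_name, policy_name, exclusions):
    """Exclude a finding only for the specific principal/policy pairing."""
    allowed = exclusions.get(principal_type, {}).get(principal_name, [])
    return policy_name in allowed

print(is_finding_excluded("roles", "MyRole", "AdministratorAccess", exclusions))
# True
print(is_finding_excluded("roles", "SomeEc2Role", "AdministratorAccess", exclusions))
# False: AdministratorAccess still gets flagged on other principals
```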

Problem with download function

I tried to run cloudsplaining download after assuming a role in my account via export AWS_PROFILE=XXXX.

But I get this output:

download --profile ss-privat
Found credentials in shared credentials file: ~/.aws/credentials
Enter MFA code for arn:aws:iam::XXXXXXX:mfa/XXXX:
Refreshing temporary credentials failed during mandatory refresh period.
Traceback (most recent call last):
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/credentials.py", line 516, in _protected_refresh
    metadata = self._refresh_using()
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/credentials.py", line 264, in __call__
    return self._refresh()
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/credentials.py", line 657, in fetch_credentials
    return self._get_cached_credentials()
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/credentials.py", line 667, in _get_cached_credentials
    response = self._get_credentials()
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/credentials.py", line 800, in _get_credentials
    return client.assume_role(**kwargs)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/client.py", line 316, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/client.py", line 635, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidClientTokenId) when calling the AssumeRole operation: The security token included in the request is invalid.

Traceback (most recent call last):
  File "/usr/local/bin/cloudsplaining", line 33, in <module>
    sys.exit(load_entry_point('cloudsplaining==0.1.4', 'console_scripts', 'cloudsplaining')())
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/cloudsplaining/bin/cli.py", line 32, in main
    cloudsplaining()
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/cloudsplaining/command/download.py", line 71, in download
    for page in paginator.paginate(Filter=["User"]):
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/paginate.py", line 255, in __iter__
    response = self._make_request(current_kwargs)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/paginate.py", line 332, in _make_request
    return self._method(**current_kwargs)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/client.py", line 316, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/client.py", line 622, in _make_api_call
    operation_model, request_dict, request_context)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/client.py", line 641, in _make_request
    return self._endpoint.make_request(operation_model, request_dict)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/endpoint.py", line 102, in make_request
    return self._send_request(request_dict, operation_model)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/endpoint.py", line 132, in _send_request
    request = self.create_request(request_dict, operation_model)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/endpoint.py", line 116, in create_request
    operation_name=operation_model.name)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/hooks.py", line 356, in emit
    return self._emitter.emit(aliased_event_name, **kwargs)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/hooks.py", line 228, in emit
    return self._emit(event_name, kwargs)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/hooks.py", line 211, in _emit
    response = handler(**kwargs)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/signers.py", line 90, in handler
    return self.sign(operation_name, request)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/signers.py", line 152, in sign
    auth = self.get_auth_instance(**kwargs)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/signers.py", line 232, in get_auth_instance
    frozen_credentials = self._credentials.get_frozen_credentials()
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/credentials.py", line 605, in get_frozen_credentials
    self._refresh()
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/credentials.py", line 500, in _refresh
    self._protected_refresh(is_mandatory=is_mandatory_refresh)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/credentials.py", line 516, in _protected_refresh
    metadata = self._refresh_using()
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/credentials.py", line 264, in __call__
    return self._refresh()
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/credentials.py", line 657, in fetch_credentials
    return self._get_cached_credentials()
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/credentials.py", line 667, in _get_cached_credentials
    response = self._get_credentials()
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/credentials.py", line 800, in _get_credentials
    return client.assume_role(**kwargs)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/client.py", line 316, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/Cellar/cloudsplaining/0.1.4/libexec/lib/python3.7/site-packages/botocore/client.py", line 635, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidClientTokenId) when calling the AssumeRole operation: The security token included in the request is invalid.

purpose of triage worksheet

Hello Team, sorry for asking a very basic question.

https://github.com/salesforce/cloudsplaining/blob/master/examples/files/iam-triage-example.csv

I was trying to understand the Services and Actions fields in the triage worksheet. I assume they refer to the count of services for a given policy or role, and the total number of actions across all of those services, respectively.

If that assumption is correct, these numbers are not reflected accurately for some of my policies. I can share examples once I get confirmation.

Thanks for this amazing security assessment tool

TypeError: 'bool' object is not iterable

Just having a play with this tool; I get the following error after executing the scan command.

cloudsplaining download
cloudsplaining scan --input default.json
Traceback (most recent call last):
  File "/usr/local/bin/cloudsplaining", line 30, in <module>
    cloudsplaining()
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/cloudsplaining/command/scan.py", line 82, in scan
    scan_account_authorization_file(input, exclusions_cfg, output, all_access_levels, skip_open_report)
  File "/usr/local/lib/python3.7/site-packages/cloudsplaining/command/scan.py", line 118, in scan_account_authorization_file
    exclusions_cfg, modify_only=True
  File "/usr/local/lib/python3.7/site-packages/cloudsplaining/scan/authorization_details.py", line 233, in missing_resource_constraints
    return self.findings.json
  File "/usr/local/lib/python3.7/site-packages/cloudsplaining/output/findings.py", line 41, in json
    these_findings.append(finding.json)
  File "/usr/local/lib/python3.7/site-packages/cloudsplaining/output/findings.py", line 181, in json
    "PrivilegeEscalation": self.privilege_escalation,
  File "/usr/local/lib/python3.7/site-packages/cloudsplaining/output/findings.py", line 152, in privilege_escalation
    return self.policy_document.allows_privilege_escalation
  File "/usr/local/lib/python3.7/site-packages/cloudsplaining/scan/policy_document.py", line 80, in allows_privilege_escalation
    x.lower() for x in self.all_allowed_unrestricted_actions
  File "/usr/local/lib/python3.7/site-packages/cloudsplaining/scan/policy_document.py", line 53, in all_allowed_unrestricted_actions
    allowed_actions.extend(statement.expanded_actions)
TypeError: 'bool' object is not iterable
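The failure mode can be reproduced in isolation: list.extend raises exactly this error when a statement's expanded_actions evaluates to a bool instead of a list (the names below merely mirror the traceback; this is not the actual cloudsplaining code):

```python
allowed_actions = []
expanded_actions = False  # buggy value: a bool where a list of actions was expected

try:
    allowed_actions.extend(expanded_actions)  # mirrors policy_document.py line 53
except TypeError as err:
    print(err)  # 'bool' object is not iterable
```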

AWS SSO Inline Policy name to role name mapping

When using AWS SSO, I see the below under customer policies (we have tons of these inline policies with the same naming convention). The issue is that all of the policies show up as "AwsSSOInlinePolicy". Is there any way to map the actual role name for each one?


Bug: "Assumable by Compute Role" is only applied for inline policies

Right now, "Assumable by Compute Role" only shows up if the finding comes from an inline policy on a role. However, if a managed policy is attached to an IAM role that can be assumed by a compute service, "Assumable by Compute Role" will not show up. This is a problem and can lead to a false sense of security.

Additionally, it deprives pentesters of information about which risky policies are already attached to assumable roles that they could exploit without needing to modify anything.

Improve scanning speed

The scanning speed could be improved.

I ran this on 30-40 accounts the other day, and it took 16 minutes. Granted, these were fairly bloated accounts - but still, there has to be room for improvement.

This problem is compounded when there are more permissive policies - either Admin-like or PowerUser-like since it has to call expand_actions many times.

IIRC, the AuthorizationDetails class scans all of the policies, even excluded ones, and only filters out the exclusions when AuthorizationDetails.missing_resource_constraints(exclusions) is called. I might be wrong about that, but if that is the case, perhaps we could optionally supply the exclusions at instantiation time.

I'll have to dive into this a bit more.
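One cheap win, independent of the exclusions question, would be memoizing the wildcard expansion so identical patterns are only expanded once per run. A minimal sketch, assuming a tiny stand-in action table (expand here is a hypothetical stand-in for the real expand_actions, which consults the full IAM action database):

```python
import fnmatch
from functools import lru_cache

# Tiny stand-in for the full IAM action database.
ALL_ACTIONS = ("s3:GetObject", "s3:PutObject", "ec2:RunInstances")

@lru_cache(maxsize=None)
def expand(pattern: str) -> tuple:
    """Expand one wildcard pattern; repeated patterns are served from the cache."""
    return tuple(a for a in ALL_ACTIONS if fnmatch.fnmatch(a.lower(), pattern.lower()))

print(expand("s3:*"))  # ('s3:GetObject', 's3:PutObject')
expand("s3:*")         # second call with the same pattern hits the cache
print(expand.cache_info().hits)  # 1
```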

Add cross-account download functionality

download currently only works for a single account/profile. Given a setup in which multiple roles can be assumed, we could extend the functionality to download this data for all configured accounts without having to wrap cloudsplaining in a script.

My current implementation - which does, in fact, wrap cloudsplaining in a script - works like this:

  1. Assume role into org master and fetch list of accounts
  2. Assume a role with the same name (but different account ID, obviously) to download the data for a particular account
  3. Store the result of each download in a separate file (I'm not actually using the download command right now)
  4. Call cloudsplaining via the script to perform the analysis
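The steps above could be sketched roughly like this (the role and session names are illustrative assumptions, and the per-account download itself is elided):

```python
def member_role_arns(account_ids, role_name):
    """Step 2: build the ARN of the same-named role in each member account."""
    return [f"arn:aws:iam::{acct}:role/{role_name}" for acct in account_ids]

def download_all_accounts(role_name="CloudsplainingDownload"):
    """Hypothetical org-wide download: list accounts from the org master,
    then assume the member role in each account (steps 1-2 above)."""
    import boto3  # only needed when actually talking to AWS
    org = boto3.client("organizations")
    sts = boto3.client("sts")
    account_ids = [
        acct["Id"]
        for page in org.get_paginator("list_accounts").paginate()
        for acct in page["Accounts"]
    ]
    for arn in member_role_arns(account_ids, role_name):
        creds = sts.assume_role(RoleArn=arn, RoleSessionName="cloudsplaining")["Credentials"]
        # Steps 3-4: run the download with these credentials, write each
        # account's result to its own file, then run the analysis.
        yield arn, creds
```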

My idea for how to integrate this: add a new command, maybe download_from_org or so, which would perform the above steps. I'm unsure what other people's setups look like, so in an effort to make it more generic I'd need to understand whether most folks have orgs or use profiles (which I don't use at all). Do you have multiple orgs? Does the target role have the same name in every account?

Should we expand the download command or make a new one?

remove checks against '/aws-service-role/'

I initially excluded that path because I assumed that AWS-created roles would show up there, which would pollute the results. I do exclude Service Linked Roles (aws-service-role), but I am not sure whether service-role is exclusively reserved for users, shared by both AWS and customers, or just encouraged for customers.

Input from anyone is appreciated.

More granular exclusions

Currently still in development. The excel sheet below maps out the current shortcomings pretty well.


Basically, some AWS managed policies will still show up, even if you exclude all the users that use them.

I have the underlying logic working with tests. Just need to build it into everything else, since that does involve moving around the current exclusions mechanism. Glad I'm doing this now rather than later.

Deployment through lambda?

Hey, have you made this code callable from a Python script?

I find value in this tool, but I would love to deploy a Lambda that runs weekly and dumps these reports into an S3 bucket. Since we have lots of AWS accounts, going into each account and running it manually is a real pain.
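Since cloudsplaining installs as a Python package with a CLI, one option is a thin Lambda wrapper around it. A rough sketch only: the bucket key, report filename, and the --output flag are my assumptions, and the scan flags follow the 0.1.x syntax shown in this thread:

```python
import subprocess

def build_scan_command(input_file, output_dir):
    """The CLI invocation the Lambda would run (0.1.x-style flags assumed)."""
    return ["cloudsplaining", "scan", "--input", input_file, "--output", output_dir]

def handler(event, context):
    """Weekly Lambda sketch: scan previously downloaded account data,
    then push the HTML report to S3 (report name is an assumption)."""
    import boto3  # only needed inside the Lambda runtime
    subprocess.run(build_scan_command("/tmp/default.json", "/tmp"), check=True)
    boto3.client("s3").upload_file("/tmp/default.html", event["bucket"], "default.html")
```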

Full Access policy breaks the scan.

The following policy breaks cloudsplaining's scan_policy_file at https://github.com/salesforce/cloudsplaining/blob/master/cloudsplaining/output/policy_finding.py#L116

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        }
    ]
}

throwing the following error:

File "/usr/local/lib/python3.6/dist-packages/cloudsplaining/output/policy_finding.py", line 116, in service_wildcard
    service, action = action.split(":")
ValueError: not enough values to unpack (expected 2, got 1)
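The root cause is that "*" contains no ":", so the two-value unpack fails. A hedged sketch of a defensive fix (illustrative, not the project's actual patch):

```python
def service_wildcard(actions):
    """Collect services granted a bare service-level wildcard (e.g. "s3:*")."""
    services = set()
    for action in actions:
        if action == "*":
            continue  # full-admin wildcard has no service prefix to split on
        service, action_name = action.split(":", 1)
        if action_name == "*":
            services.add(service)
    return sorted(services)

print(service_wildcard(["*", "s3:*", "ec2:RunInstances"]))  # ['s3']
```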
