aws-samples / aws-secure-environment-accelerator

The AWS Secure Environment Accelerator is a tool designed to help deploy and operate secure multi-account, multi-region AWS environments on an ongoing basis. The power of the solution is the configuration file which enables the completely automated deployment of customizable architectures within AWS without changing a single line of code.

License: Apache License 2.0


aws-secure-environment-accelerator's Introduction

ATTENTION

The Landing Zone Accelerator (LZA) on AWS solution is now the recommended solution for organizations seeking to automate the deployment of a new high compliance AWS Environment.

The LZA v1.3 release (03/2023) focused on delivering AWS Secure Environment Accelerator (ASEA) feature parity and delivered both CCCS Cloud Medium and Trusted Secure Enclave Sensitive Edition sample configuration files. These samples deliver similar outcomes to the ASEA sample configuration file.

The LZA team is currently developing a semi-automated upgrade from ASEA to LZA. Upgrades from ASEA to LZA must occur before Q2 2025. Please monitor this site for a future LZA release that will support the ASEA to LZA semi-automated upgrade capability.

Please reach out to your AWS Account Team with any questions.

AWS Secure Environment Accelerator

The AWS Accelerator is a tool designed to help deploy and operate secure multi-account, multi-region AWS environments on an ongoing basis. The power of the solution is the configuration file that drives the architecture deployed by the tool. This enables extensive flexibility and allows for the completely automated deployment of a customized architecture within AWS without changing a single line of code.

While flexible, the AWS Accelerator is delivered with a sample configuration file which deploys an opinionated and prescriptive architecture designed to help meet the security and operational requirements of many governments around the world. Tuning the parameters within the configuration file allows for the deployment of customized architectures and enables the solution to help meet the multitude of requirements of a broad range of governments and public sector organizations.

While installation of the provided prescriptive architecture is reasonably simple, deploying a customized architecture requires an extensive understanding of the AWS platform. The sample deployment specifically helps customers meet NIST 800-53 and/or the CCCS Medium Cloud Control Profile (formerly PBMM).

Diagram

What specifically does the Accelerator deploy and manage?

A common misconception is that the AWS Secure Environment Accelerator only deploys security services; this is not true. The Accelerator is capable of deploying a complete end-to-end hybrid enterprise multi-region cloud environment.

Additionally, while the Accelerator is initially responsible for deploying a prescribed architecture, it more importantly allows organizations to operate, evolve, and maintain their cloud architecture and security controls over time and as they grow, with minimal effort, often using native AWS tools. While the Accelerator helps with the deployment of technical security controls, it’s important to understand that the Accelerator is only part of your security and compliance effort. We encourage customers to work with their AWS account team, AWS Professional Services, or an AWS Partner to determine how best to meet their remaining compliance requirements.

The Accelerator is designed to enable customers to upgrade across Accelerator versions while maintaining a customer’s specific configuration and customizations, and without the need for any coding expertise or for professional services. Customers have been able to seamlessly upgrade their AWS multi-account environment from the very first Accelerator beta release to the latest release (across more than 50 releases), gaining the benefits of bug fixes and enhancements while having the option to enable new features, without any loss of existing customization or functionality.

Specifically, the Accelerator deploys and manages the following functionality, both at initial Accelerator deployment and as new accounts are created, added, or onboarded, in a completely automated but customizable manner:

Creates AWS Accounts

  • Core Accounts - as many or as few as your organization requires, using the naming you desire. These accounts are used to centralize core capabilities across the organization and provide Control Panel like capabilities across the environment. Common core accounts include:
    • Shared Network
    • Operations
    • Perimeter
    • Log Archive
    • Security Tooling
  • Workload Accounts - automated concurrent mass account creation, or use AWS Organizations to scale one account at a time. These accounts host a customer's workloads and applications.
  • Scalable to 1000's of AWS accounts
  • Supports AWS Organizations nested OUs and importing existing AWS accounts
  • Performs 'account warming' to establish initial limits, when required
  • Automatically submits limit increases, when required (complies with initial limits until increased)
  • Leverages AWS Control Tower

Creates Networking

  • Transit Gateways and TGW route tables (incl. inter-region TGW peering)
  • Centralized and/or Local (bespoke) VPCs
  • Subnets, Route tables, NACLs, Security groups, NATGWs, IGWs, VGWs, CGWs
  • NEW Outpost, Local Zone and Wavelength support
  • VPC Endpoints (Gateway and Interface, Centralized or Local)
  • Route 53 Private and Public Zones, Resolver Rules and Endpoints, VPC Endpoint Overloaded Zones
  • All completely and individually customizable (per account, VPC, subnet, or OU)
  • Layout and customize your VPCs, subnets, CIDRs and connectivity the way you want
  • Static or Dynamic VPC and subnet CIDR assignments
  • Deletes default VPCs (worldwide)
  • AWS Network Firewall

Cross-Account Object Sharing

  • VPC and Subnet sharing, including account level re-tagging (Per account security group 'replication')
  • VPC attachments and peering (local and cross-account)
  • Zone sharing and VPC associations
  • Managed Active Directory sharing, including R53 DNS resolver rule creation/sharing
  • Automated TGW inter-region peering
  • Populate Parameter Store with all user objects to be used by customers' IaC
  • Deploy and share SSM documents (4 provided out-of-box, ELB Logging, S3 Encryption, Instance Profile remediation, Role remediation)
    • customer can provide their own SSM documents for automated deployment and sharing

Identity

  • Creates Directory services (Managed Active Directory and Active Directory Connectors)
  • Creates Windows admin bastion host auto-scaling group
  • Set Windows domain password policies
  • Set IAM account password policies
  • Creates Windows domain users and groups (initial installation only)
  • Creates IAM Policies, Roles, Users, and Groups
  • Fully integrates with and leverages AWS SSO for centralized and federated login

Cloud Security Services

  • Enables and configures the following AWS services, worldwide w/central designated admin account:
    • GuardDuty w/S3 protection
    • Security Hub (Enables designated security standards, and disables individual controls)
    • Firewall Manager
    • CloudTrail w/Insights and S3 data plane logging
    • Config Recorders/Aggregator
    • Conformance Packs and Config rules (95 out-of-box NIST 800-53 rules, 2 custom rules, customizable per OU)
    • Macie
    • IAM Access Analyzer
    • CloudWatch access from central designated admin account (and setting Log group retentions)

Other Security Capabilities

  • Creates, deploys and applies Service Control Policies
  • Creates Customer Managed KMS Keys (SSM, EBS, S3), EC2 key pairs, and secrets
  • Enables account level default EBS encryption and S3 Block Public Access
  • Configures Systems Manager Session Manager w/KMS encryption and centralized logging
  • Configures Systems Manager Inventory w/centralized logging
  • Creates and configures AWS budgets (customizable per OU and per account)
  • Imports or requests certificates into AWS Certificate Manager
  • Deploys both perimeter and account level ALB's w/Lambda health checks, certificates and TLS policies
  • Deploys & configures 3rd party firewall clusters and management instances (leverages marketplace)
    • Gateway Load Balancer w/auto-scaling and VPN IPSec BGP ECMP deployment options
  • Protects Accelerator deployed and managed objects
  • Sets Up SNS Alerting topics (High, Medium, Low, Blackhole priorities)
  • Deploys CloudWatch Log Metrics and Alarms
  • Deploys customer provided custom config rules (2 provided out-of-box, no EC2 Instance Profile/Permissions)

Centralized Logging and Alerting

  • Deploys an rsyslog auto-scaling cluster behind a NLB, all syslogs forwarded to CloudWatch Logs
  • Centralized access to "Cloud Security Service" Consoles from designated AWS account
  • Centralizes logging to a single centralized S3 bucket (enables, configures and centralizes)
    • VPC Flow logs w/Enhanced metadata fields (also sent to CWL)
    • Organizational Cost and Usage Reports
    • CloudTrail Logs including S3 Data Plane Logs (also sent to CWL)
    • All CloudWatch Logs (includes rsyslog logs)
    • Config History and Snapshots
    • Route 53 Public Zone Logs (also sent to CWL)
    • GuardDuty Findings
    • Macie Discovery results
    • ALB Logs
    • SSM Inventory
    • Security Hub findings
    • SSM Session Logs (also sent to CWL)
    • Resolver Query Logs (also sent to CWL)
  • Email alerting for CloudTrail Metric Alarms, Firewall Manager Events, Security Hub Findings incl. GuardDuty Findings
  • NEW Optionally collect Organization and ASEA configuration and metadata in a new restricted log archive bucket

Relationship with AWS Landing Zone Solution (ALZ)

The ALZ was an AWS Solution designed to deploy a multi-account AWS architecture for customers based on best practices and lessons learned from some of AWS' largest customers. The AWS Accelerator draws on design patterns from the Landing Zone, and re-uses several concepts and nomenclature, but it is not directly derived from it, nor does it leverage any code from the ALZ. The Accelerator is a standalone solution with no dependence on ALZ.

Relationship with AWS Control Tower

The AWS Secure Environment Accelerator now leverages AWS Control Tower!

With the release of v1.5.0, the AWS Accelerator adds the capability to be deployed on top of AWS Control Tower. Customers get the benefits of the fully managed capabilities of AWS Control Tower combined with the power and flexibility of the Accelerator's networking and security orchestration.

Accelerator Installation Process (Summary)

This summarizes the installation process; the full installation document can be found in the documentation section below.

  • Create a config.json (or config.yaml) file to represent your organization's requirements (several samples provided)
  • Create a Secrets Manager Secret which contains a GitHub token that provides access to the Accelerator code repository
  • Create a unique S3 input bucket in the management account in the region where you wish to deploy the solution, and place your config.json and any additional custom config files in the bucket
  • Download and execute the latest release installer CloudFormation template in your management account's preferred 'primary' / 'home' region
  • Wait for:
    • CloudFormation to deploy and start the Code Pipeline (~5 mins)
    • Code Pipeline to download the Accelerator codebase and install the Accelerator State Machine (~10 mins)
    • The Accelerator State Machine to finish execution (~1.25 hrs Standalone version, ~2.25 hrs Control Tower Version)
  • Perform required one-time post installation activities (configure AWS SSO, set firewall passwords, etc.)
  • On an ongoing basis:
    • Use AWS Organizations to create new AWS accounts, which will automatically be guardrailed by the Accelerator
    • Update the config file in CodeCommit and run the Accelerator State Machine to:
      • deploy, configure and guardrail multiple accounts at the same time (~25 min Standalone, ~50 min/account Control Tower)
      • change Accelerator configuration settings (~25 min)

Documentation

The latest version of the Accelerator documentation can be found here.


aws-secure-environment-accelerator's People

Contributors

amrib24, antoineaws, archikierstead, awstacey, brian969, brianfanning, charliejllewellyn, colinl2021, crissupb, dependabot[bot], dliggat, dustinhaws, fredbonin, hickeydh-aws, iamtgray, ipnextgen, jblaplace, jh-kainos, joeldesaulniers, joshvmaws, khaws, michaeldavie-amzn, nachundu-amzn, naveenkoppula, rgd11, rjjaegeraws, ryanjwaters, rycerrat, supergillis, vic614

aws-secure-environment-accelerator's Issues

[OTHER] Items requiring SDE investigation

  • investigate process to create new account directly in a sub-OU using the SM (i.e. /Dev/My-sub-ou/account) - non-issue
    • this is working; simply a misunderstanding
  • review mock test cases and snapshot inclusions
    • some updates unnecessarily require snapshot updates, can we refine?
  • remove missed/remaining tslint references (do this with the CDK upgrade PR needed for z125) - Done in PR#520
    • SSM branch was in development when tslint was implemented and was missed; tslint.json in root of repo, others?
  • investigate multi-part config files (may be issues when used within a list/array) - Done in PR#521

[FEATURE] Enhancement - Automated KMS CMK rotation capabilities

Automated KMS CMK rotation capabilities

  • the ASEA only uses Customer Managed Keys (CMKs) within KMS
  • Different CMKs are created for each distinct purpose within each account
    • SSM, Secrets, ConfigBucket, EBS, S3Buckets
  • Currently these keys are created without automatic key rotation enabled

TASK:

  • all new installations and all new KMS CMK creations moving forward shall have automatic rotation enabled when the key is created
  • all existing installations shall have all ASEA-created keys updated to enable key rotation
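The upgrade pass for existing installations can be sketched as a small selection step. The record shape below is a hypothetical illustration, not an actual Accelerator type; the real implementation would call the KMS EnableKeyRotation API for each selected key.

```typescript
// Hypothetical record shape for ASEA-created CMKs (not Accelerator code).
interface CmkRecord {
  keyId: string;
  purpose: 'SSM' | 'Secrets' | 'ConfigBucket' | 'EBS' | 'S3Buckets';
  rotationEnabled: boolean;
}

// Select the keys that still need automatic rotation enabled; the caller
// would then invoke KMS EnableKeyRotation for each returned key id.
function keysNeedingRotation(keys: CmkRecord[]): string[] {
  return keys.filter(k => !k.rotationEnabled).map(k => k.keyId);
}
```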

Details:

[BUG][OTHER] Suspended accounts index change

Required Basic Info

  • Accelerator Version: v1.2.3
  • Install Type: Both
  • Install Branch: All

Describe the bug

  • Suspending accounts causes SM failures due to logical ID changes (per Naveen)
    • specifically with security groups in shared vpc in workload accounts

Expected behavior

  • Suspending accounts has no impact

[DOCS] [ENHANCEMENT] 8.35 - Enhance ASEA Sensitive Sample Architecture document

Enhance, clean up, and improve the PBMM Accelerator Deployed architecture design document

  • beef up and expand on current content
  • add better justifications around design decisions

Summary:

  • Includes detailed network ITSG-22/38 compliant diagrams
  • Must enable the deployment of both traditional 3-tier IaaS applications and Cloud Native Serverless applications
  • Interim design - pre-SCED which allows easy transition to end-state design
  • End state design w/SCED GCTIP and GCCAP connectivity (physical and logical)
  • Excludes all SCED architectural components
  • Incorporates SCED architecture v1 and v2 options

Design document should NOT contain:

  • detailed, implementation-specific CONFIGURATION details
    • (exact IP addresses, email addresses, ASNs, MAD domain names, etc.; these belong in a companion configuration document)
    • For the purposes of exposition, it is acceptable to use sample values so long as they are clearly annotated as such; e.g. example-dept.gc.ca
  • details as to HOW the design was deployed
    • no references to Accelerator steps, Accelerator coding methodology, or architecture
    • the FOREWORD or INTRO should indicate that this design can be implemented using the sample configuration file and the Accelerator

[BUG][INSTALL] Docker throttling causes SM install failure

Required Basic Info

  • Accelerator Version: All versions pre 2020-11-19
  • Install Type: Both Clean and Upgrade
  • Install Branch: N/A
  • Upgrade from version: Any
  • Which State did the Main State Machine Fail in: Pipeline failed in "Deploy" phase

Describe the bug

  • As of Nov 2, 2020, Docker has implemented rate limiting
    (https://docs.docker.com/docker-hub/download-rate-limit/)
  • Rate limiting for anonymous users (i.e. the Accelerator) is based on IP address
  • As CodeBuild uses Amazon IPs, other users' CodeBuild activity is likely to impact our throttling threshold
    (i.e. I was doing a single upgrade and had not done any other upgrades today)

Failure Info

  • What error messages have you identified, if any: "You have reached your pull rate limit. https://www.docker.com"
  • What symptoms have you identified, if any: CodePipeline fails in Deploy phase

Steps To Reproduce

  • random

Expected behavior

  • Add exponential back-off calls and better error handling in the SM installation codebase
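The requested exponential back-off could look like the following sketch. The base delay and cap are assumptions for illustration, not values from the codebase.

```typescript
// Compute capped exponential back-off delays in milliseconds for a given
// number of retry attempts; callers would typically add random jitter.
function backoffDelays(attempts: number, baseMs = 1000, capMs = 60000): number[] {
  const delays: number[] = [];
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(capMs, baseMs * 2 ** i));
  }
  return delays;
}
```

For the Docker pull case, each failed pull would sleep for the next delay before retrying, keeping the total wait bounded by the cap.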

[FEATURE] 7.65 - Deploy Config Rules w/Remediation using Non-Compliant ALB Logging as template

Summary:

  • Deploy AWS Config Managed Rules in all accounts and all regions as defined per the config file
  • Enable rule auto-remediation if defined in the config file
  • Use the ELB_LOGGING_ENABLED managed config rule as the sample / template for an auto-remediated rule
  • Use the SSM documents which are either AWS managed (pre-existing) or deployed via task 7.75 SSM Managed document Deployment as the remediation mechanism
  • In future it is expected we will also need to trigger a Lambda from the SSM document
  • AWS supports up to 150 rules per account/region; we must be able to support this scale (and more when that limit is increased in the near future)
  • Must be able to:
    • Update rules (i.e. add future remediation, tweak remediation)
    • Add more rules (to all accounts in an OU)
    • toggle rules on and off (i.e. if an exclusion is removed, the rule should be enabled)

Ensure code placement in SM to enable the following order of operations in future:

  1. Deploy Lambda (if required, in future)
  2. Deploy SSM Document and SSM Document Execution Permissions in Shared Account (may depend on Lambda)
  3. Deploy Config rule (May depend on SSM document)

Sample Rule: AWS Config elb-logging-enabled

  • Checks whether the Application Load Balancer and the Classic Load Balancer have logging enabled. The rule is NON_COMPLIANT if access_logs.s3.enabled is false or access_logs.s3.bucket is not equal to the s3BucketName that you provided.

(manual deployment steps defined in attachment)
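The compliance condition described above can be expressed directly. The attribute keys mirror the ELB access-log attribute names; the evaluation wrapper itself is a hypothetical sketch, not the managed rule's actual implementation.

```typescript
// Evaluate the ELB_LOGGING_ENABLED condition against a load balancer's
// attributes: NON_COMPLIANT when logging is off or the bucket differs.
function evaluateElbLogging(
  attrs: Record<string, string>,
  expectedBucket: string,
): 'COMPLIANT' | 'NON_COMPLIANT' {
  const enabled = attrs['access_logs.s3.enabled'] === 'true';
  const bucket = attrs['access_logs.s3.bucket'];
  return enabled && bucket === expectedBucket ? 'COMPLIANT' : 'NON_COMPLIANT';
}
```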

Acceptance Criteria:

  • AWS Config Rules configured and enabled in every account and region as defined per OU
    • specific rules not deployed to specific accounts as defined

STEPS to TEST:
  • Create an ALB with no logging attributes
  • AWS Config detects Non Compliant ALB
  • Remediation Executes SSM Document
  • ALB logging attributes configured
  • AWS Config detects ALB as Compliant

Config File

  "global-options": {
      ...
      "aws-config": {
         "managed-rules": {
            "defaults": {
                 "remediation": false,
                 "remediation-attempts": 5,
                 "remediation-retry-seconds": 60,
                 "remediation-concurrency": 10
            },
            "rules": [
             {
                "managed-rule-name": "ELB_LOGGING_ENABLED",  // deployed name will be PBMMAccel-ELB_LOGGING_ENABLED
                "parameters": {
                    "s3BucketNames": "${SEA::LogArchiveBucket}"
                },
                "remediation": true,
                "remediation-attempts": 5,
                "remediation-retry-seconds": 60,
                "remediation-concurrency": 10,
                "remediation-action": "PBMMAccel-SSM-ELB-Enable-Logging",
                "remediation-params": {
                    "LoadBalancerArn": "RESOURCE_ID",
                    "LogDestination": "${SEA::LogArchiveBucket}",
                    "LogPrefix": "elb"
                }
             },
             {
                "managed-rule-name": "MNGD_RULE_2",
                "parameters": {
                    "abc_parameter-s3BucketNames": "${SEA::LogArchiveBucket}"
                }
             }
            ]
         }
      }
  }
  ...
     "my-org-unit": {
         ...
         "aws-config": [
            {
              "excl-regions": ["us-east-2"],
              "rules": ["PBMMAccel-ELB_LOGGING_ENABLED", "PBMMAccel-ConfigRule2", "PBMMAccel-ConfigRuleN"],
              "remediate-regions": ["ca-central-1"]  // if a rule in global-options has remediation=true, remediate in these regions only
            },
            {
              "excl-regions": [],
              "rules": ["PBMMAccel-ConfigRule2", "PBMMAccel-ConfigRuleN"]
            }
         ]
     },
     "my-account": {
         ...
         "aws-config": [
            {
              "regions": ["ca-central-1"],
              "excl-rules": ["PBMMAccel-ConfigRule2", "PBMMAccel-ConfigRule5"]
            },
            {
              "regions": ["us-east-1"],
              "excl-rules": ["PBMMAccel-ELB_LOGGING_ENABLED"]
            }
         ]
     }
  • global-options defines the config rules, which deploys nothing
  • organizational-units deploys the rules to all accounts in the OU, and all regions, except the listed excluded regions
  • my-account excludes certain rules from certain accounts and regions
  • the aws-config section in OUs and in accounts is optional
    • if not defined in an OU, no config rule is deployed
    • if not defined in an account, all OU config rules are deployed

NOTE: We will likely have 100 to 150 rules per account, per region, the current limit (plan for higher when the service team increases limits)
NOTE: We should likely change SSM targeting slightly to be consistent
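The targeting semantics above can be sketched as a resolution helper: OU entries deploy their rules to all regions except the exclusions, and account entries subtract excluded rules in the listed regions. This is a hypothetical illustration, not Accelerator code.

```typescript
// Assumed shapes mirroring the config file fragments above (hypothetical).
interface OuConfig { 'excl-regions': string[]; rules: string[] }
interface AccountConfig { regions: string[]; 'excl-rules': string[] }

// Resolve the effective set of rule names for one account in one region.
function effectiveRules(
  region: string,
  ouEntries: OuConfig[],
  accountEntries: AccountConfig[] = [],
): string[] {
  const rules = new Set<string>();
  for (const ou of ouEntries) {
    if (!ou['excl-regions'].includes(region)) {
      ou.rules.forEach(r => rules.add(r));
    }
  }
  for (const acct of accountEntries) {
    if (acct.regions.includes(region)) {
      acct['excl-rules'].forEach(r => rules.delete(r));
    }
  }
  return [...rules].sort();
}
```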

Sample Script

app.ts

// ConfigRule and ConfigRulesStack are defined in config-rules-stack.ts below
import * as fs from 'fs';
import * as path from 'path';
import * as cdk from '@aws-cdk/core';
import { ConfigRule, ConfigRulesStack } from './config-rules-stack';

const app = new cdk.App();

const configRules: ConfigRule[] = [];
const configElbEnableLogging: ConfigRule = {
    name: 'PBMM-elb-logging-enabled-rule',
    description: 'Checks whether the Application Load Balancers and the Classic Load Balancers have logging enabled.',
    managedRuleName: 'ELB_LOGGING_ENABLED',
    parameters: new Map<string, string>([
        ['s3BucketNames', ''],
    ]),
    remediationConfigDocument: fs.readFileSync(
        path.join(__dirname, '../lib/config-rules/PBMMAccel-config-elb-enable-logging.json'),
        'utf-8',
    ),
    remediationParams: new Map<string, string>([
        ['SSMDocumentName', 'PBMMAccel-alb-enable-logging'],
        ['AutomationAssumeRoleName', 'PBMMAccel-SSM-ELB-Enable-Logging-Role'],
        ['LogDestination', '${SEA::LogArchiveBucket}'],
        ['LogDestinationPrefix', 'elb'],
    ]),
    ssmDocumentOwnerId: '144940249523',
};

configRules.push(configElbEnableLogging);

new ConfigRulesStack(app, 'ConfigRules', {
    env: {
        region: 'ca-central-1',
    },
    configRules,
});

config-rules-stack.ts

import * as cdk from '@aws-cdk/core';
import * as iam from '@aws-cdk/aws-iam';
import * as awsconfig from '@aws-cdk/aws-config';


export interface ConfigRule {
    readonly name: string;
    readonly description: string;
    readonly managedRuleName?: string;
    readonly parameters: Map<string, string>;
    readonly remediationConfigDocument: string;
    readonly remediationParams: Map<string, string>;
    readonly ssmDocumentOwnerId: string;
}

interface ConfigRulesStackProps extends cdk.StackProps {
    readonly configRules: Array<ConfigRule>;
}

export class ConfigRulesStack extends cdk.Stack {
    constructor(scope: cdk.Construct, id: string, props?: ConfigRulesStackProps) {
        super(scope, id, props);        
        
        if (props?.configRules != null) {
        
            for (const configRule of props?.configRules) {
                   
                if (configRule.managedRuleName) {
        
                    const managedRule = new awsconfig.ManagedRule(this, configRule.name, {
                        configRuleName: configRule.name,
                        description: configRule.description,
                        inputParameters: configRule.parameters,
                        identifier: configRule.managedRuleName
                    });

                    const remediationConfig = JSON.parse(configRule.remediationConfigDocument)
                    const ssmDocumentName = configRule.remediationParams.get("SSMDocumentName");
                    const roleName = configRule.remediationParams.get("AutomationAssumeRoleName");
                   
                    for (let [key, value] of configRule.remediationParams) {                        
                        if (key == "AutomationAssumeRoleName") {
                            continue;
                        }                        

                        if (key in remediationConfig.Parameters) {
                         
                            if ('StaticValue' in remediationConfig.Parameters[key] && 'Values' in remediationConfig.Parameters[key]['StaticValue']) {
                                
                                let tmpValue = value;

                                const supported_tokens: Array<string> = new Array<string>('${SEA::LogArchiveBucket}', '${SEA::AWSAccountId}');
                                for (const supported_token of supported_tokens) {
                                    if (tmpValue.indexOf(supported_token) >= 0) {
                                        switch (supported_token) {                                                  
                                            case '${SEA::LogArchiveBucket}':
                                                tmpValue = tmpValue.replace(supported_token, '517328343847-elblogs'/*Central Log Archive Bucket*/)                                               
                                                break;
                                            case '${SEA::AWSAccountId}':
                                                tmpValue = tmpValue.replace(supported_token, this.account);                                                
                                                break;
                                        }
                                    }
                                }
                                
                                remediationConfig.Parameters[key]['StaticValue']['Values'] = tmpValue == "" ? [] : [tmpValue];

                            }
                        }

                    }
                    
                    if (roleName && ssmDocumentName) {
                        const roleArn = this.lookup_iam_role(roleName).roleArn;
                        
                        remediationConfig.Parameters.AutomationAssumeRole = {
                            StaticValue: {
                                Values: [roleArn]
                            }
                        };
                        
                        new awsconfig.CfnRemediationConfiguration(this, configRule.name+"Remediation", {
                            configRuleName: managedRule.configRuleName,
                            targetId: this.lookup_ssm_document(ssmDocumentName, configRule.ssmDocumentOwnerId),
                            targetType: 'SSM_DOCUMENT',
                            parameters: remediationConfig.Parameters,
                            automatic: remediationConfig.Automatic,
                            maximumAutomaticAttempts: remediationConfig.MaximumAutomaticAttempts,
                            retryAttemptSeconds: remediationConfig.RetryAttemptSeconds
                        });
                    }
                }   
            }
        }
  }

    lookup_ssm_document(ssmDocumentName: string, ownerAccountId: string) {
        const documentArn = `arn:aws:ssm:${this.region}:${ownerAccountId}:document/${ssmDocumentName}`;
        return documentArn;
    }

    lookup_iam_role(roleName: string) {
        const accountId = this.account;        
        const role_arn = `arn:aws:iam::${accountId}:role/${roleName}`
        const role = iam.Role.fromRoleArn(this, roleName + "ConfigRole", role_arn);
        return role;
    }
}

Design Considerations

  • There is a dependency on the SSM component (7.75 - Deploy AWS Systems Manager Automation Documents)
    • The SSM documents must be deployed into the AWS account before this step.
    • In the sample code, the 'ssmDocumentOwnerId' was used to simulate this. There should be a more elegant way to look up the SSM Document share owner.
  • There is a need for a Token Service that is able to replace values at the time of deployment. Here is one token needed for this Config Rule:
    1. "${SEA::LogArchiveBucket}" - "SEA" could be used to implement a provider pattern, and "LogArchiveBucket" is the value to look up using the provider. In this context, the expected result is the centralized S3 bucket name from the log-archive account.
  • The lookup of the shared SSM document is difficult. The API doesn't return it as part of the regular list-documents command; the request must include DocumentFilterList: key=Owner,value=Private for it to be returned. The sample code uses a naming convention instead of looking up the SSM Document.
  • The sample code assumes a target type of SSM_DOCUMENT.
  • Assumes that the permissions are properly configured for ELB to write logs to the specified bucket.
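The provider pattern mentioned above could be sketched as follows: "${SEA::Name}" tokens are resolved by a pluggable lookup function, and unknown tokens pass through untouched. This is a hypothetical sketch of the idea, not the token service's actual design.

```typescript
// A provider maps a token name (e.g. "LogArchiveBucket") to its value,
// or returns undefined when it cannot resolve the name.
type TokenProvider = (name: string) => string | undefined;

// Replace every ${SEA::Name} occurrence in the input using the provider;
// unresolved tokens are left intact for a later pass or an error check.
function resolveTokens(input: string, provider: TokenProvider): string {
  return input.replace(/\$\{SEA::([A-Za-z0-9]+)\}/g, (match, name) => {
    const value = provider(name);
    return value !== undefined ? value : match;
  });
}
```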

[BUG][Uninstall] z148 - Phase 2 Stack Delete hangs and fails.

Required Basic Info

  • Accelerator Version: v1.2.2
  • Which State did the Main State Machine Fail in: Uninstall

Short Problem Description

  • Attempting to delete the Phase2 stack results in DELETE_FAILED after a lengthy timeout waiting for custom Lambda resources. There is a missing implementation of DELETE that causes it to hang.

I believe the following lambdas are missing a delete implementation:

  • cdk-guardduty-enable-admin
  • cdk-guardduty-get-detector

Expected Behavior

  • The Phase2 deletes and cleans up resources it deployed.

Actual Behavior

  • Phase2 stacks fail with the following error:
    The following resource(s) failed to delete: [CloudWatchCrossAccountDataSharingRole26E385B8, GuardDutyPublish1B878E87, GuardDutyPublishDetector2E43EF7D].

From the template:

    GuardDutyPublish1B878E87:
      Type: Custom::GuardDutyCreatePublish

    GuardDutyPublishDetector2E43EF7D:
      Type: Custom::GetDetectorId

    CloudWatchCrossAccountDataSharingRole26E385B8:
      Type: Custom::IAMCreateRole

  • Resources are not cleaned up: ex: arn:aws:iam::accountid:role/CloudWatch-CrossAccountSharingRole
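A minimal sketch of the missing piece: a CloudFormation custom resource handler must respond to Delete events as well, or the stack delete hangs until CloudFormation gives up. The event/response shapes below are simplified assumptions, not the Accelerator's actual handler types:

```typescript
// Simplified shapes of a CloudFormation custom resource exchange.
interface CustomResourceEvent {
  RequestType: 'Create' | 'Update' | 'Delete';
  PhysicalResourceId?: string;
}

interface CustomResourceResponse {
  Status: 'SUCCESS' | 'FAILED';
  PhysicalResourceId: string;
}

function handleEvent(event: CustomResourceEvent): CustomResourceResponse {
  switch (event.RequestType) {
    case 'Create':
    case 'Update':
      // ...create or update the GuardDuty resources, then signal success.
      return { Status: 'SUCCESS', PhysicalResourceId: event.PhysicalResourceId ?? 'guardduty-admin' };
    case 'Delete':
      // The missing branch: perform any cleanup (e.g. disable the delegated
      // admin) and, crucially, still send a response so CloudFormation can
      // finish deleting the stack instead of waiting for a timeout.
      return { Status: 'SUCCESS', PhysicalResourceId: event.PhysicalResourceId! };
  }
}
```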

[BUG] [SM] SM fails with an ultra lite configuration w/single IAM policy definition

(already fixed, opening ticket for record keeping).

Required Basic Info

  • Accelerator Version: v1.2.3
  • Install Type: ALL
  • Install Branch: ALL
  • Upgrade from version: N/A
  • Which State did the Main State Machine Fail in: Phase 1

Describe the bug
We found an issue when creating an ultra-lite configuration: when a policy file is used by multiple accounts, the policy must be referenced multiple times within any one account or the SM fails.

Failure Info

  • What error messages have you identified, if any:
SyntaxError: Unexpected token u in JSON at position 0
    at JSON.parse (<anonymous>)
    at createIamPolicy (/app/src/deployments/cdk/src/common/iam-assets.ts:33:34)
    at new IamAssets (/app/src/deployments/cdk/src/common/iam-assets.ts:174:11)
    at createIamAssets (/app/src/deployments/cdk/src/apps/phase-1.ts:361:25)
    at deploy (/app/src/deployments/cdk/src/apps/phase-1.ts:386:11)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at Object.deploy (/app/src/deployments/cdk/src/app.ts:70:3)
    at main (/app/src/deployments/cdk/cdk.ts:36:16)

Using an ultra-light config with most things stripped out.
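"Unexpected token u in JSON at position 0" is the classic signature of calling JSON.parse on undefined, which gets coerced to the string "undefined". A hedged sketch of the defensive check the policy loader needs (the function name is hypothetical, not the Accelerator's actual code):

```typescript
// JSON.parse(undefined) coerces its argument to the string "undefined",
// whose first character 'u' is not valid JSON - hence the error above.
// Guard against a policy body that was never loaded:
function parsePolicyBody(body: string | undefined, policyName: string): unknown {
  if (body === undefined) {
    throw new Error(`Policy file content for "${policyName}" was not loaded`);
  }
  return JSON.parse(body);
}
```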

Expected behavior
The SM succeeds when a policy file is referenced only once per account.

[FEATURE] Read-only access to Log Archive for IAM roles

Required Basic Info
To properly assess the enhancement request, we require information on the version of the Accelerator you based this request upon:

  • Accelerator Version: 1.2.3
  • Install Type: Clean/Upgrade
  • Install Branch: Standalone
  • Upgrade from version: N/A

Is your feature request related to a problem? Please describe.
IAM roles created through the ASEA can use the 'ssm-log-archive-access' flag to give a role read and write access to the Log Archive account. However, there is no equivalent read-only version of this flag. I'm currently trying to deploy a SIEM solution that uses a Lambda function to fetch objects from the Log Archive account's centralized logging bucket. The Lambda role needs read-only access to the bucket. Currently my only option is to give the role 'ssm-log-archive-access', which is too permissive.

Describe the solution you'd like
An equivalent of the 'ssm-log-archive-access' flag, but for read-only access to the Log Archive centralized logging bucket. The flag could be called 'ssm-log-archive-read-only-access'; the existing 'ssm-log-archive-access' flag could potentially be renamed to 'ssm-log-archive-full-access' to clearly differentiate the two.

Describe alternatives you've considered
For my use case I could use the existing 'ssm-log-archive-access' flag, but I'd be breaking the principle of least privilege: if the Lambda function is compromised, files in the log archive could be deleted or modified.
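To illustrate the request, here is a hedged sketch of the policy difference between the two flags. The action lists and function name are assumptions for illustration, not the Accelerator's actual policy content:

```typescript
// Illustrative sketch: what a read-only vs. full-access log-archive policy
// could grant. The specific S3 actions chosen here are assumptions.
function logArchivePolicy(bucketArn: string, readOnly: boolean) {
  const actions = readOnly
    ? ['s3:GetObject', 's3:ListBucket'] // enough for a SIEM fetcher Lambda
    : ['s3:GetObject', 's3:ListBucket', 's3:PutObject', 's3:DeleteObject'];
  return {
    Version: '2012-10-17',
    Statement: [
      { Effect: 'Allow', Action: actions, Resource: [bucketArn, `${bucketArn}/*`] },
    ],
  };
}
```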

[BUG] [SM] Zones - when no zones deployed, must provide account and vpc or SM fails

Required Basic Info

  • Accelerator Version: v1.2.3
  • Install Type: All
  • Install Branch: Standalone
  • Upgrade from version: N/A
  • Which State did the Main State Machine Fail in: Phase 1

Describe the bug
While we can deploy no zones by leaving the public and private arrays empty, if we remove all zone entries from the array, the SM fails. Currently we require an account with a VPC, even if we don't deploy any zones.

Works:

    "zones": [
      {
        "account": "shared-network",
        "resolver-vpc": "Endpoint",
        "region": "ca-central-1",
        "names": {
          "public": [],
          "private": []
        }
      }
    ], 

Fails:

    "zones": [    ], 

Failure Info

  • What error messages have you identified, if any:
    • Phase 1 Codebuild:
Cannot find account with key "undefined"
Error: Cannot find account stack for account undefined

Expected behavior
If we are not deploying any zones, we should not need to provide an account name or a VPC name.

NOTE

  • Something else is wrong in the codebase - changing the zones account breaks other R53 functionality like resolver rules, which should be completely unrelated. If we deploy NO zones, but point the zones at a different account/vpc:
    • (I should have caught it: under zones, the variable is "resolver-vpc", when it should just be "vpc" - zone VPCs have nothing to do with resolver VPCs)
    • resolver endpoints and resolver rules appear to no longer exist
    • the resolver security groups fail to create - CREATE_FAILED | AWS::EC2::SecurityGroup | SharedNetworkPhase3ResolverEndpoints_1/ResolverEndpoints-shared-network-Endpoint/OutboundSecurityGroup (ResolverEndpointssharednetworkEndpointOutboundSecurityGroupEAD27F10) Endpoint_outbound_endpoint_sg already exists in stack arn:aws:cloudformation:ca-central-1:642462841348:stack/PBMMAccel-SharedNetwork-Phase3-CentralVpcResolverEndpoints/2f76b8a0-0d70-11eb-a466-063829308218
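A sketch of the guard the validation logic could use: only require (and look up) the zones account/VPC when at least one zone name is actually defined. The function name and the simplified config shape are illustrative assumptions:

```typescript
// Simplified zone config shape, mirroring the "names" block shown above.
interface ZoneConfig {
  account?: string;
  names?: { public: string[]; private: string[] };
}

// Only deploy (and validate account/VPC for) zones when at least one
// public or private zone name is actually defined.
function shouldDeployZones(zones: ZoneConfig[]): boolean {
  return zones.some(
    z => (z.names?.public.length ?? 0) > 0 || (z.names?.private.length ?? 0) > 0,
  );
}
```

With a guard like this, both `"zones": []` and a zone entry with empty name arrays would skip the account-stack lookup instead of failing with "Cannot find account with key \"undefined\"".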

[FEATURE] 8.45 - Enhance Developer Guide

Enhance, cleanup and improve the developer guide

Summary:

This document builds on the Operations and Troubleshooting guide
This document is intended for programmers
This document details basic Accelerator coding first principles:

  • all code native CDK, in TypeScript where possible
  • then wrapped CFN
  • then custom resources
  • last, non-preferred option is APIs, still always TypeScript
  • nothing is hard-coded, everything comes from the config file

This document details CDK/CFN limitations:

  • don't move things around in the SM
  • don't mess with object naming

It covers the structure of the code repository and documentation of the custom resource library.

It also details:

  • how to extend the code
  • where functionality should be introduced
  • what not to do, and how to do things without breaking upgrades

Any and all guidelines, such that a new programmer can quickly get up to speed and not de-stabilize the code base, whether a new EE team member, a new resource from the India SDE team, or a customer.

[ENHANCEMENT] Move to org permission from per account permissions

Required Basic Info

  • Accelerator Version: v1.2.3

Describe the solution you'd like

  • today the log-archive S3 bucket policies, log-archive KMS keys, and SSM automation documents are updated with per-account permissions
  • do we have any other permissions which require access from all accounts in the organization? (CWL?)
  • switch to using organization-level permissions, rather than updating the policies per account
  • do any objects not support leveraging org-level permissions?

Why

  • We will run out of space in each of the respective policies as we scale to thousands of accounts
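The proposed switch can be illustrated with an S3 bucket policy statement that grants the whole organization via the aws:PrincipalOrgID condition key, instead of enumerating one principal per account. The action list and names below are illustrative assumptions:

```typescript
// Sketch: a single org-wide statement replaces N per-account statements,
// so the policy no longer grows with the number of accounts.
function orgWideBucketStatement(bucketArn: string, orgId: string) {
  return {
    Effect: 'Allow',
    Principal: { AWS: '*' },
    Action: ['s3:GetObject', 's3:PutObject'], // illustrative actions
    Resource: [`${bucketArn}/*`],
    // aws:PrincipalOrgID restricts the wildcard principal to members
    // of the given AWS Organization.
    Condition: { StringEquals: { 'aws:PrincipalOrgID': orgId } },
  };
}
```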

[FEATURE] z125 - Enhancement - Centralize Accelerator CDK buckets - one per region

History for this issue exists on the AWS internal ticketing system, ticket #z125, as it predates the move to GitHub issues. As this issue is in progress, it is being duplicated here and may not fully represent final implementation details.

Update

  • This ticket has 3 phases now:
    • z125 (CDK part 1) - submit PR to update CDK code
    • z125 (CDK part 2) - use default synthesizer rather than previous installer mechanism.
    • z125 (CDK part 3) - after new CDK release available, finish ticket

CDK Part 1 Status: (near complete)

CDK Part 2 Status: (near complete)

CDK Part 3 Status:

  • Pending CDK Part 1

Short Problem Description

  • See summary below - essentially we have 16 CDK buckets in every AWS account in the Accelerator (16 * 1000 accounts = 16,000 buckets, yikes)
  • This is unacceptable from an end-user experience perspective; customers have trouble finding their own buckets inside an account and need to search through the 16 CDK buckets
  • CDK requires a bucket per region, but the bucket can be shared across accounts (i.e. a total of 16 buckets)
  • the bucket needs to be specified on each CDK call or it reverts to the standard CDK bucket (which will not exist)
  • central regional CDK buckets will be locked down to the Accelerator roles for access
  • customers, if desired, can use their own CDK buckets; we will not impact them and they will not impact us. Customers cannot/will not use our CDK bucket

Steps:

  • leave the master account alone / as-is (it will use CDK default buckets, as sub-accounts do not yet exist)
  • delete everything in the current CDK bucket in all AWS sub-accounts (all prior code releases only)
  • uninstall the existing CDK stack in all AWS accounts/all regions (all prior code releases only)
  • if not deleted, delete the current CDK bucket in all accounts/all regions (all prior code releases only)
    ("all prior releases" code should be included in a manner that is easy to remove in future)
  • create a new CDK folder in each region in the designated OPS account only
  • redeploy the CDK stack in all accounts/all regions pointing to the central Ops bucket
  • See these details:

Expected Behavior

  • No more CDK buckets in workload accounts

Actual Behavior

  • 16 CDK buckets in every AWS account

SUMMARY OF ISSUE

We presented the following to the CDK team. They responded that a centralized bucket per region was likely the best approach.


As you are aware, we are using CDK to deploy AWS features and guardrails in AWS accounts

  • We do this for administrative accounts and more importantly customer workload accounts
  • We do this in every region around the world
  • This means we need to deploy CDKToolkit in every region, in every account (which is fine)

What's the problem?

  • Every AWS account's S3 console is now a mess, with a minimum of 16 CDK buckets (see below screenshot)
  • Customers are complaining that this is confusing for their users and makes it hard to find their own buckets

Other notes:

  • Customers deleting CDK staging files does NOT cause issues
  • Customers deleting a CDK staging bucket BREAKS the CDK and our automation, so we protect these buckets with SCPs
  • A deleted CDK staging bucket requires removing and redeploying the CDK stack; no other fix is known
  • Need to be able to scale to thousands of accounts per org
  • Currently 1000 accounts * 16 regions (minimum) = 16,000 CDK staging buckets in an organization, just for CDK

Why don't you just uninstall the CDK stack?

  • On every state machine execution, we would need to redeploy it in every account and every region around the world
  • Meaning, on every state machine execution, we would also need to uninstall it in every account and every region around the world
  • Resulting in a large waste of time / very inefficient
  • Additionally, this does not solve the problem, as the state machine could be executed at any time
  • Customers would see 16+ buckets appear and disappear from their accounts while using them, causing even more confusion; leaving the buckets at least gives customers consistency (we need to validate/reapply/change guardrails in accounts)
  • Not to mention a customer may also depend on CDK, and removing their CDK deployment would break their automation

What option(s) do we have to resolve this issue?

  • Can we drop the need for a CDK staging bucket?
  • Can we create one bucket per region in a central account and have all accounts in the org use a single regional central CDK staging bucket?
    o i.e. assuming 16 regions, 16 CDK buckets total no matter the number of accounts in the org
  • Can we create one bucket per account in a single region and have all regions in that account access the bucket from a single central region?
    o i.e. assuming 1000 aws accounts, 1000 cdk staging buckets, one per account, in a single designated region
  • Can we add a 'secret hidden bucket' flag to hide the display of the cdktoolkit buckets in S3? 
  • Other options?

[FEATURE] Add support to install ASEA on top of AWS Control Tower

  • The ASEA is designed to leverage as many capabilities of native AWS services or offerings as possible
  • It has always been planned to add an option to install the ASEA on top of Control Tower, when appropriate (i.e. service available in the Canadian region, assessed for PBMM compliance by the Canadian Cyber Center, etc.)
  • We are actively working with the Control Tower team to ensure our roadmaps are aligned and we have a path forward which will allow deploying the ASEA on top of Control Tower, while still allowing customers to meet the Gov't of Canada's PBMM security control requirements

NEW Installs on CT: Code Complete

SCP consolidation to fit in CT free space: Complete

BLOCKED: Upgrade from SEA to CT+SEA (requires new CT functionality) - Upgrades from SEA to CT+SEA delayed until 22H1 - will ship without this feature.

[FEATURE] Config Remediation - Add additional (priority) auto-remediating config rules

This task is dependent on tasks: a) deploy config rules at scale (#476), b) deploy SSM remediation capabilities (#477), AND c) the addition of ~100 NIST 800-53 config rules (#487):

  • if no existing config rule exists for below capability, first create one
  • otherwise update existing config rule
  • add remediation capabilities for the below specific priority non-compliant rules
  • This MAY require adding the capability to also deploy customer-provided Lambdas (in addition to SSM documents); we would prefer to stick with pure SSM Automation documents.
  1. Auto-remediate unencrypted S3 buckets w/ KMS CMK --> MERGED TO MAIN BRANCH in v1.2.4 release
    • allow customer to provide either a config file variable which points to
      • the PBMMAccel-Bucket-Key-xxxxxxxx key ALIAS in each account
      • a customer provided key ALIAS (which the customer will ensure is in every account)
  2. Auto-remediate missing role on all new and existing EC2 instances (EC2-Default-SSM-AD-Role) --> Shipping in v1.3.0
    • WAITING FOR 50/500 AWS managed config rule to publish: EC2_INSTANCE_PROFILE_ATTACHED
    • Existing Managed SSM document exists to remediate
  3. Auto-remediate missing permissions on all new and existing EC2 instances (PBMMAccel-SSMAccessPolicy) --> Shipping in v1.3.0
  4. Add variables to pass to SSM from config --> Complete - more may be required in future
    • we currently support a) central AES bucket, b) local bucket s3 key
    • please add support for:
      • a) central bucket (non-AES), b) local bucket, c) central S3 bucket key, d) local EBS key, e) local SSM key
      • other variables you think someone may need to remediate a config rule

[BUG][Functional] AWS Account Names beginning with a number will fail the State Machine

Bug reports which fail to provide the required information will be closed without action.

Required Basic Info

  • Accelerator Version: v1.2.2
  • Install Type: Upgraded
  • Install Branch: Standalone
  • Upgrade from version: v1.1.9

Describe the bug
The Main StateMachine fails in Phase1 if the AWS Account name starts with a number.

Failure Info

  • What error messages have you identified, if any:
    Error: Resolution error: Logical ID must adhere to the regular expression: /^[A-Za-z][A-Za-z0-9]{1,254}$/, got ....

CloudFormation Logical IDs have rules for valid values: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resources-section-structure.html

  • What symptoms have you identified, if any:

The state machine fails, and subsequent runs also fail.

Required files

  • Please provide a copy of your config.json file (sanitize if required)

Steps To Reproduce

  1. Create an AWS Account that starts with a number.
  2. Move the AWS Account into a SEA 'Sandbox' managed OU. (unknown if 'Sandbox' is part of the issue)
  3. Watch the Main State Machine.

Expected behavior
If the AWS Account name is supported by its create API, the SM should be able to handle it.
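One possible fix, sketched under assumptions (the prepended letter and the exact sanitization rules are illustrative, not the Accelerator's actual behavior): derive the logical ID from the account name and ensure it starts with a letter.

```typescript
// CloudFormation logical IDs must match /^[A-Za-z][A-Za-z0-9]{1,254}$/.
const LOGICAL_ID_RE = /^[A-Za-z][A-Za-z0-9]{1,254}$/;

function toLogicalId(accountName: string): string {
  // Strip characters CloudFormation does not allow in logical IDs.
  let id = accountName.replace(/[^A-Za-z0-9]/g, '');
  // An account name starting with a digit would produce an invalid ID,
  // so prepend a letter (the choice of 'A' is an assumption).
  if (/^[0-9]/.test(id)) {
    id = `A${id}`;
  }
  if (!LOGICAL_ID_RE.test(id)) {
    throw new Error(`Cannot derive a valid logical ID from "${accountName}"`);
  }
  return id;
}
```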


[FEATURE] Add option to leverage AWS Network Firewall Service in perimeter

Add option to leverage the AWS Network Firewall service

  • Dependent on AWS Network Firewall GA'ing in the Canadian region

Design principle: Add to any VPC just like a NATGW

Current NATGW Config – EXAMPLE WITH SINGLE AZ:

 {
  "name": "Sandbox",
...
  "natgw": {
    "subnet": {
      "name": "Web",
      "az": "a"
    }
  },
  "subnets": [
    {
      "name": "Web",
...
  ],
  "route-tables": [
    {
      "name": "SandboxVPC_Common",
      "routes": [
        {
          "destination": "0.0.0.0/0",
          "target": "NATGW_Web_azA"
        } ] } ]

Current NATGW Config - EXAMPLE WITH PER AZ NATGW

    "subnet": {
      "name": "Web"             		<----- No az’s means all az’s with subnets
    }
...
  ],
  "route-tables": [
    {
      "name": "SandboxVPC_a",
      "routes": [
        {
          "destination": "0.0.0.0/0",
          "target": "NATGW_Web_azA"
        } ] },
    {
      "name": "SandboxVPC_b",
      "routes": [
        {
          "destination": "0.0.0.0/0",
          "target": "NATGW_Web_azB"
        } ] } ]

New NFW Config Summary:

Add a new option named nfw, identical in shape to natgw, to the config

  • New section like "natgw":, renamed to: "nfw":
  • Supports one specific AZ, multiple specific AZs, or all AZs w/subnets, like above
  • Route tables support routing to the NFW, replacing the NATGW target above with "NFW_Web_azA"
    o format: NFW_Subnet_AZ
  • Add a new policy object with a policy name and file path
    • We should push something into this first policy to allow traffic / basic OOB functionality (yaml attached)

New partial sample config file:

 {
  "name": "Sandbox",
...
  "natgw": {
    "subnet": {
      "name": "Web"
    }
  },
  "nfw": {
    "subnet": {
      "name": "App",
       "az": "a"
    },
    "policy": {
         "name": "my-first-rule",
          "path": "nfw-sample-rule.yaml"
     }
     }
  },
…
  "route-tables": [
...
        {
          "destination": "10.0.0.0/8",
          "target": "NFW_App_azA"
        }
      ]
    },
    {
      "name": "SandboxVPC_a",
      "routes": [
        {
          "destination": "0.0.0.0/0",
          "target": "NATGW_Web_azA"
        }
      ]
    },
    {
      "name": "SandboxVPC_b",
      "routes": [
        {
          "destination": "0.0.0.0/0",
          "target": "NATGW_Web_azB"
        }
      ]
    },
    {
      "name": "SandboxVPC_TGWa",
      "routes": [
        {
          "destination": "0.0.0.0/0",
          "target": "NFW_App_azA"
        },
        {
          "destination": "10.0.0.0/8",
          "target": "TGW"
        }
      ]
    },
    {
      "name": "SandboxVPC_TGWb",
      "routes": [
        {
          "destination": "0.0.0.0/0",
          "target": "NFW_App_azB"
        },
        {
          "destination": "10.0.0.0/8",
          "target": "TGW"
        }  ]  }  ]

Challenge: How do we route inbound traffic?

  • ALB target groups do not support DNS names as targets, and the backend ALB only has a DNS address

Solution:

Part 1 - New customer-populated Perimeter account DDB table:

  • perim-alb-identifier – i.e. Public-DevTest-perimeter (mandatory)
    • alb name stored in: /ASEA/elb/alb/1/name
  • public-hosts – i.e. [www.briantest.ca, briantest.ca] (mandatory)
  • public-paths – i.e. brianapp1 (optional)
  • target-alb-dns-name: i.e. internal-Core-mydevacct1-alb-882478976.eu-west-1.elb.amazonaws.com
    • stored in /ASEA/elb/alb/1/dns
  • target-group-name – i.e. briantest
  • target-group-port: "HTTPS:443"
  • target-port: 443
  • target-alb-dns-addresses: [list of many IP addresses from the last ALB TG UPDATE]
    • ???acm cert??? what do we need to handle? anything?

Part 2 - Create a new perimeter Lambda, triggered every 1 min

  • scans the DDB table
  • updates each ALB with appropriate rules and target groups
  • updates target group addresses by adding new addresses and then removing old addresses
    • do not replace the list or remove first; must add, then remove
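The add-then-remove rule above can be reduced to a pure diff: compute the addresses to register first, and only deregister the stale ones afterwards, so the target group never drops to zero targets. A sketch (names are illustrative; in the real Lambda the two lists would feed ELBv2 register/deregister calls in that order):

```typescript
// Pure diff of current vs. desired target IPs. Registering the additions
// before deregistering the removals keeps the target group non-empty.
function diffTargets(
  current: string[],
  desired: string[],
): { register: string[]; deregister: string[] } {
  const cur = new Set(current);
  const des = new Set(desired);
  return {
    register: desired.filter(ip => !cur.has(ip)),   // apply these first
    deregister: current.filter(ip => !des.has(ip)), // then remove these
  };
}
```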

Lambda REQUIREMENTS:
If new account, create new target group (up to 100 per ALB) ---> place in a 2nd Lambda triggered on the DDB stream?

  • TG: IP based, Name, HTTPS, 443, local VPC, HTTP1, /health-check, https
  • Name = host+path, replacing special characters and trimming to X
  • Other private addresses: 1.1.1.1, 443, 2.2.2.2, 443, etc.
  • populate based on nslookup of the backend ALB, to a max of 10
  • do not set TG stickiness

Forward to: IP based Target Group (up to 99 TGs per ALB)

  • Based on host/path combo to a specific TG
  • Do NOT set group stickiness (only 1 group)
  • Rules with paths – add to top
  • Rules with URLs only – add to bottom

Default rules should point to a target group defined by a DEFAULT DDB entry as well, defined by placing 'DEFAULT' in the public-host field

New account-based config file variable: deploy-alb-stitching: true

  • causes the DDB tables and 2 Lambdas to be properly deployed

[FEATURE] Accelerator Wizard based GUI interface (to abstract/hide the configuration file) (ALPHA ONLY)

This feature will ship with DRAFT documentation for the Accelerator configuration file.

  • top two levels of object descriptions should be decent, lower levels not yet validated.

Accelerator Wizard based GUI interface (to abstract/hide the configuration file)

  • Easy mode (limited selections) and Advanced mode (extreme customization)
  • Deployed in Ops account, permissions based on IAM credentials
    • Org Admin can make any configuration change
    • Account Admin creates accounts, makes minor changes, approves user requests
    • End Users can request a 'set of accounts' of a certain type, request account changes, a flow, or a custom vpc
    • approval workflows for end-user requests
  • Phased delivery (Org Admin Wizard on day 1, slowly add additional workflows)

NOTE: When initially released, this will be an OPTIONAL, ALPHA or BETA feature used to attain feedback and direction.

[BUG] [SM] z145 - GuardDuty occasional failures just won't go away

Short Problem Description

  • GuardDuty retry code still not working as desired
  • Occasional state machine failures; never receive signal, SM times out
  • Believe this requires another ticket to the GuardDuty service team
GuardDutyPublishDetector2E43EF7D
----------------------------------------------------------------------------------------------------
|   timestamp   |
|---------------|-----------------------------------------------------------------------------------
| 1603844087725 | START RequestId: 890a2bee-cc1c-44db-b2b7-ce48045e72a3 Version: $LATEST
| 1603844087736 | 2020-10-28T00:14:47.726Z 890a2bee-cc1c-44db-b2b7-ce48045e72a3 INFO GuardDuty Delegated Admin Account Setup...
| 1603844087736 | 2020-10-28T00:14:47.736Z 890a2bee-cc1c-44db-b2b7-ce48045e72a3 INFO {   "RequestType": "Update",   "ServiceToken": "arn:aws:lambda:eu-west-1:068580847099:function:PBMMAccel-Master-Phase2-CustomGetDetectorIdLambdaD-C4GRG88GFWLN",   "ResponseURL": "https://cloudformation-custom-resource-response-euwest1.s3-eu-west-1.amazonaws.com/......snip |
| 1603844088357 | 2020-10-28T00:14:48.357Z 890a2bee-cc1c-44db-b2b7-ce48045e72a3 INFO [AWS guardduty 200 0.562s 0 retries] listDetectors({})
| 1603844088359 | 2020-10-28T00:14:48.358Z 890a2bee-cc1c-44db-b2b7-ce48045e72a3 DEBUG Sending successful response 
| 1603844088359 | 2020-10-28T00:14:48.358Z 890a2bee-cc1c-44db-b2b7-ce48045e72a3 DEBUG {   "physicalResourceId": "GaurdGetDetoctorId",   "data": {     "DetectorId": "1cb9f6c217f85534b6c4b555d480cc51"   } } 
| 1603844088457 | 2020-10-28T00:14:48.456Z 890a2bee-cc1c-44db-b2b7-ce48045e72a3 INFO Status code: 403 
| 1603844088457 | 2020-10-28T00:14:48.457Z 890a2bee-cc1c-44db-b2b7-ce48045e72a3 INFO Status message: Forbidden 
| 1603844088457 | 2020-10-28T00:14:48.457Z 890a2bee-cc1c-44db-b2b7-ce48045e72a3 INFO Found status code 403 - retrying: Forbidden   
| 1603844088477 | END RequestId: 890a2bee-cc1c-44db-b2b7-ce48045e72a3
| 1603844088477 | REPORT RequestId: 890a2bee-cc1c-44db-b2b7-ce48045e72a3 Duration: 752.64 ms Billed Duration: 800 ms Memory Size: 128 MB Max Memory Used: 84 MB Init Duration: 277.30 ms 
------------------------------------
  • Why, on an attempted retry, does it say Forbidden?
  • Is there anything we can do to fix this?

Expected Behavior

  • retries and SM succeeds

Actual Behavior

  • Forbidden message, no signal to CFN
  • SM times out after an hour when credentials expire

Screenshots
GD-Re-occur3.txt

GD-Re-occur

GD-Re-occur1

[FEATURE] Move AWS SEA roadmap to GitHub Projects, Move internal PFR's/defects to GitHub Issues

Current State

  • Product roadmap is published w/codebase as a markdown document
  • Sprint activity, active feature development, and outstanding issues (collectively, the backlog) are tracked on an internal AWS tool
  • All new issues (excluding security vulnerabilities) and feature requests will be tracked here in GitHub Issues
  • Internal backlog (less than 10 items) will continue to be tracked in the internal tool, until delivered

Future State

  • Product roadmap migrates to GitHub Projects
  • All existing and new backlog items managed through GitHub Issues and GitHub Projects

[BUG][SM] z147 - Managed AD fails to deploy to GCWide when three AZs are enabled


Required Basic Info

  • Accelerator Version: 1.2.2
  • Install Type: Clean
  • Install Branch: Standalone
  • Upgrade from version: N/A

Describe the bug
Managed AD fails to deploy to CentralVPC_GCWide when a third AZ is activated.

    "GCWide": {
      "name": "GCWide",
      "share-to-ou-accounts": false,
      "share-to-specific-accounts": [
        "operations"
      ],
      "definitions": [
        {
          "az": "a",
          "route-table": "CentralVPC_GCWide",
          "cidr2": "100.96.252.0/25"
        },
        {
          "az": "b",
          "route-table": "CentralVPC_GCWide",
          "cidr2": "100.96.252.128/25"
        },
        {
          "az": "d",
          "route-table": "CentralVPC_GCWide",
          "cidr2": "100.96.253.0/25",
          "disabled": false
        }
      ]
    }

When the third AZ is disabled, MAD deploys fine.

Failure Info

  • MicrosoftADMicrosoftADFC7F6466
    Invalid subnet ID(s). They must correspond to two subnets in different Availability Zones. : RequestId: 5bf4f88b-f3f6-4fee-9c4c-a52218e7c138 (Service: AWSDirectoryService; Status Code: 400; Error Code: InvalidParameterException; Request ID: 5bf4f88b-f3f6-4fee-9c4c-a52218e7c138; Proxy: null)

Required files

  • Please provide a copy of your config.json file (sanitize if required)

Steps To Reproduce

  1. Enable "az" : d in GCWide (this is a third enabled az)
  2. Deploy MAD it will fail
  3. See error Invalid subnet ID(s). They must correspond to two subnets in different Availability Zones.
  4. Disable "az": d (leave two az enabled)
  5. Deploy MAD is will work

Expected behavior
I expect MAD to deploy even when three "az" are enabled.
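AWS Managed Microsoft AD accepts exactly two subnets in different Availability Zones, so one possible fix is for the deployment code to select the first two distinct-AZ subnets rather than passing all enabled ones. A hedged sketch (the types and selection rule are illustrative assumptions):

```typescript
// Simplified subnet definition, mirroring the "az" entries above.
interface SubnetDef {
  az: string;
  subnetId: string;
}

// Pick the first two subnets in distinct AZs; Managed AD requires exactly
// two subnets in different Availability Zones.
function madSubnets(subnets: SubnetDef[]): [string, string] {
  const byAz = new Map<string, string>();
  for (const s of subnets) {
    if (!byAz.has(s.az)) byAz.set(s.az, s.subnetId);
    if (byAz.size === 2) break;
  }
  if (byAz.size < 2) {
    throw new Error('MAD needs subnets in two different AZs');
  }
  return [...byAz.values()] as [string, string];
}
```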

Screenshots
N/A

Additional context
N/A

[FEATURE] Enhancement - SCP Enhancements

  • Streamline SCPs from 4 files to 3 files
  • max file size is 5120 bytes after minification
  • move all ASEA protection SCPs together (2 files) and all security SCPs together (1 file)
  • move any core-OU-only SCPs into a different file
  • review overlap with Control Tower SCPs (result: virtually none, all SCPs tightly scoped)
  • tweak to support Control Tower (i.e. add ${ORG_ADMIN_ROLE} to exclusion lists where required)
  • determine how to apply SCPs to the Security OU, given Control Tower uses 2 files today
    • for the 2 accounts in the security OU (log-archive and security), directly apply the 3rd SCP file at the account level in the ASEA config file.
  • NOTE: Today CT uses 2 SCPs on the security OU and 1 on any workload OU. 1 slot is also consumed by the Full Access SCP.
  • Add protection for ASEA-deployed ACM certs, NFW, config rules, etc. Scope if missing. Fix CDK bucket protection.

[FEATURE] CIDR management capabilities, enabling at-scale spoke VPCs w/central routing

  • Today ASEA supports:
    • deploying VPCs either: a) centrally in a shared account or accounts, or b) locally in each spoke account
    • these VPCs CAN be attached to a central TGW enabling central routing, if individually defined on a per-account basis
    • if defined per OU, they cannot be TGW-attached, as IPs would be identical per spoke account, preventing routing
  • Add the capability to:
    • define VPCs per OU, deployed locally in each spoke account AND attached to the TGW
    • this requires the addition of a CIDR management component
    • VPC and subnet definitions should retain their existing flexibility in the config file
    • allow replacing VPC and subnet CIDRs in the existing config file with dynamic environment variables (i.e. %block1_24%, %block2_16%)
    • customer stores a large IP CIDR block or blocks in a DDB table, marked available
    • on VPC provisioning:
      • break the specified block size out of the defined range, update available and assigned CIDRs back into the DDB table (recording account/VPC assignment)
      • replace the environment variable with the assigned range for the VPC/subnets
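The CIDR-carving step can be sketched as pure arithmetic: take the next aligned block of the requested size out of the pool, given how many blocks are already assigned. In the proposal above, that bookkeeping would live in the DDB table; everything below (function names, the index-based allocation) is an illustrative assumption:

```typescript
// Convert a dotted-quad IPv4 address to a 32-bit integer and back.
function ipToInt(ip: string): number {
  return ip.split('.').reduce((acc, octet) => acc * 256 + Number(octet), 0);
}

function intToIp(n: number): string {
  return [24, 16, 8, 0].map(shift => (n >>> shift) & 0xff).join('.');
}

// Return the index-th aligned /newPrefix block carved out of the pool CIDR,
// e.g. the block that %block1_24% would resolve to for a given assignment.
function nthSubnet(pool: string, newPrefix: number, index: number): string {
  const [base, prefixStr] = pool.split('/');
  const prefix = Number(prefixStr);
  if (newPrefix < prefix) throw new Error('subnet must be smaller than the pool');
  const size = 2 ** (32 - newPrefix);          // addresses per carved block
  const count = 2 ** (newPrefix - prefix);     // blocks available in the pool
  if (index >= count) throw new Error('pool exhausted');
  return `${intToIp(ipToInt(base) + index * size)}/${newPrefix}`;
}
```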

[FEATURE] Enhancement - Email Security Hub findings/alerts based on risk rating

Email Security Hub findings/alerts based on risk rating

Security Hub: Forward from EventBridge to SNS

  • EventBridge rule to email findings
  • "LOW" events to the low topic, "MEDIUM" events to the medium topic, "CRITICAL + HIGH" to the high topic ---> send each to the respective SNS topic (cross-account to the Ops account)
  • https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-cwe-all-findings.html
  • we already have code for event rules; these will be in the security account
  • add central-security-services\security-hub-findings-sns: "None || Low || Medium || High || Critical"
  • this new variable will only turn this feature on for events of the listed priority AND above
  • None behaves as today: no rule / deletes the existing rule
  • Low enables rules for low, medium, high + critical
  • Medium enables rules for medium, high + critical
  • High enables rules for high + critical
  • Critical only enables rules for critical-level events
  • i.e.

    {
      "source": ["aws.securityhub"],
      "detail-type": ["Security Hub Findings - Imported"],
      "detail": {
        "findings": {
          "Severity": {
            "Label": ["CRITICAL", "HIGH"]
          }
        }
      }
    }

[FEATURE] Enhancement - Config Rule Enhancements (NIST800-53 Conformance pack, SCPs)

Following the addition of the capabilities to deploy config rules at scale:

  • migrate the NIST 800-53 conformance pack into the Accelerator configuration file (~100 rules)
  • add SCPs to protect deployed config rules (PBMMAccel- prefixed or tagged) - already protected
  • add a script to convert an AWS Conformance Pack into the ASEA config file format for deployment

NOTES:

  • Conformance packs are essentially just a deployment engine, which the ASEA also is
  • Today, organization conformance packs are limited in rule count, do not allow targeting different packs per OU, and have limits on the number of supported parameters - for these reasons we are deploying config rules directly.

[FEATURE] 7.85 Enable new S3 ownership flag on all Accelerator buckets

Summary:

STEPS

ACCEPTANCE CRITERIA

  • This new flag is set on all accelerator created buckets on new installs
  • This new flag is set on all upgraded accelerator created buckets
  • New files placed into the bucket have ownership of the local account
  • This is CRITICAL for the log-archive account buckets
  • BUCKET OWNER has ownership of all new objects
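The flag in question maps to the S3 OwnershipControls bucket property with ObjectOwnership set to BucketOwnerPreferred, which makes the bucket owner own new cross-account objects (critical for the log-archive buckets). A sketch of the CloudFormation fragment this could emit (the builder function itself is an illustrative assumption):

```typescript
// Sketch: emit an AWS::S3::Bucket with OwnershipControls so that the
// bucket owner takes ownership of newly delivered cross-account objects.
function bucketWithOwnershipControls(logicalId: string, bucketName: string) {
  return {
    [logicalId]: {
      Type: 'AWS::S3::Bucket',
      Properties: {
        BucketName: bucketName,
        OwnershipControls: {
          Rules: [{ ObjectOwnership: 'BucketOwnerPreferred' }],
        },
      },
    },
  };
}
```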

[BUG] [SM] OUValidation Lambda Times Out During MainStateMachine_sm run


Required Basic Info

  • Accelerator Version: release/v1.2.1b
  • Install Type: Clean
  • Install Branch: Standalone
  • Upgrade from version: N/A
  • Which State did the Main State Machine Fail in: MainStateMachine_sm

Describe the bug
The OU validation lambda (PipelineOuValidationHandler) times out after 900 seconds (the 15-minute Lambda timeout) after confirming that OUs have not changed. This issue happens regularly in both Isengard test labs and customer environments.

Failure Info

  • What error messages have you identified, if any:
    2020-11-18T15:09:38.243Z c8ae0c01-b1a8-4a4f-846c-5dfd60baa02c Task timed out after 901.14 seconds

  • What symptoms have you identified, if any:
    Pipeline fails, deployments do not proceed

Required files

  • Please provide a copy of your config.json file (sanitize if required)
    config.txt

  • If a CodeBuild step failed - please provide the full CodeBuild Log

  • If a Lambda step failed - please provide the full Lambda CloudWatch Log
    log-events-viewer-result.txt

  • In many cases it would be helpful if you went into the failed sub-account and region, opened CloudFormation, and provided a screenshot of the Events section of the failed, deleted, or rolled-back stack, including the last successful item and the first couple of error messages (bottom up)

Steps To Reproduce

  1. Go to MainStateMachine_sm
  2. Click on Start Execution
  3. Wait for OU Validation to time out (happens about 50% of the time recently); usually the first re-run fixes the issue

Expected behavior
That OU Validation will pass/complete after determining no OU changes have been made


[FEATURE] AL2 docker image, move to a prebuilt image

Required Basic Info

  • Accelerator Version: v1.2.4

FEATURES

  1. Consider switching to an AL2 base image (https://gallery.ecr.aws/amazonlinux/amazonlinux)
    • requires installing node runtimes and dependencies during the current ECR image build process in the ASEA installer
    • will increase installation time (a one-time cost); may decrease SM time by moving to a leaner image
    • provides tighter dependency control
  2. Consider moving to prebuilt ASEA public image (per release), instead of private ECR image built during customer install
    • will reduce ASEA installation time and remove task duplication per customer
    • adds overhead to the ASEA release process, requiring image creation and publication during each release
    • removes ASEA customer installer dependencies on npm
    • the code already exists in the ASEA installer; this simply moves it to a separate ASEA process instead of a customer install-time process

Describe alternatives you've considered

  • previously used alpine/node image pulled from docker
  • currently using bitnami/node image pulled from ecr public
  • worth consideration - has various pros/cons

[FEATURE] Enhancement - Improve Fortigate sample configuration

Purpose:

  • bring up the second firewall tunnel, even though it is not needed for availability, given the AWS corporate nag to customers
    • this is an active/active firewall cluster; both firewalls are always active and can tolerate a tunnel failure

Firewall config details:

  • forced a route priority from AWS to always use tunnel1, as long as tunnel1 is up (using AS-path prepending)
  • the same in the inverse direction from FGT -> TGW, by forcing traffic through tgw-vpn1 using the “local preference” tied to the route-map
  • add any additional Fortinet best practice configuration options

Required code changes:

  • Today we support the following variables being replaced in the provided firewall config file
  • Requires adding the following additional environment variable replacement code to start the second tunnel:
    • ${PublicVpnTunnel2InsideAddress1}
    • ${PublicVpnTunnel2OutsideAddress1}
    • ${PublicPreSharedSecret2-1}
    • ${PublicCgwTunnel2InsideAddress1}
  • these variable names change if the subnet names change
  • the values for each variable come from the TGW when you create the site-to-site VPN on it
  • Essentially, look at the code for ${PublicVpnTunnelInsideAddress1} and see how it gets its value. There are actually two values provided, and we are only using the first today; you need to get the second value and put it in ${PublicVpnTunnel2InsideAddress1}, and the same for the other three
  • Once code complete, test with \reference-artifacts\Third-Party\firewall-example-FUTURE.txt
    • this config file has been manually tested and should work once new variables added
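
The replacement step itself is a straightforward `${...}` substitution; a minimal sketch is below (`replaceVariables` is a hypothetical name, not the Accelerator's actual function, and unknown variables are left untouched so a missing tunnel-2 value is easy to spot in the rendered config):

```typescript
// Replace every ${Name} token in the firewall config template with its value;
// tokens without a supplied value are left as-is.
function replaceVariables(template: string, values: Record<string, string>): string {
  return template.replace(/\$\{([^}]+)\}/g, (match, name) => values[name] ?? match);
}
```

For the second tunnel, the values map would carry the second element of each pair returned by the TGW site-to-site VPN (e.g. `PublicVpnTunnel2InsideAddress1`), alongside the existing tunnel-1 entries.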

[FEATURE] Simplify perimeter FW design incorporating AWS Gateway Load Balancer

  • dependent on AWS Gateway Load Balancer GA'ing in the Canadian region

Change Load Balancers to support a new additional type: GWLB

  • This will be created in perimeter account

  • We will add a second VPC to the config file, named the Firewall VPC, with Firewall subnet(s)

    • It will be a /23 VPC with two /24 subnets (could be more AZs for customers)
    • All of the options below should be changeable via the config file
  • Create a GWLB using the ALB config template (a GWLB does not need certs, a security-policy, or a scheme, and supports a target-type of both “instance” and “IP”)

  • Create the GWLB in a defined subnet and all az’s that exist for that subnet

  • Create a TG

    • Instances (allow IP selection as well)
    • Tg_name=config file
    • Leave it on the GENEVE protocol
    • Assoc w/FW VPC
    • HC = TCP (config file)
    • Targets=empty at this point
  • Set GWLB listener to above TG

  • Create GWLB endpoints in designated subnet (each AZ)

    • Integrated services tab of GWLB
    • i.e. a GWLB is CREATED in a VPC, but then has endpoints dropped into config-file-defined subnets and AZs; i.e. it is associated with 2 VPCs and 2 sets of subnets
  • Update routes to point to GWLB endpoints

    • Need per-AZ route tables in the perimeter VPC
    • Need to update each route table to point to the appropriate endpoint for that AZ
  • Deletion protection on, like other ALBs

  • Custom resource - We need to update the TGW to allow setting CreateTransitGatewayVpcAttachmentRequestOptions - ApplianceModeSupport: enabled on each attachment (if specified for any attachment). Since we are adding this custom resource, we should also add the ability to set Ipv6Support: enabled based on the config file ON NEW ATTACHMENTS ONLY (as changing this setting is not supported) (set in the same place as the DNSSupport setting)

  • We then need to spin up FW’s in this target group, but we want to use an autoscaling group which updates the GWLB target group

  • Let’s use a combination of the FW config/code from the current config file PLUS RDGW autoscaling group config/code to come up with a new fw deployment capability

  • Create an auto-scaling group

    • Create a launch template
    • Customer provided AMI
  • On-demand instances

    • Instance type
    • Min, max, desired capacity
    • VPC, Subnets
  • Unlike the current FW, allow the customer to define what is in USERDATA, rather than the current fixed fields
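
The target-group settings above can be sketched as the parameters a creation call would carry (an illustration under assumptions: `gwlbTargetGroupParams` is a hypothetical helper; what is certain is that GWLB target groups use the GENEVE protocol on its fixed port, 6081):

```typescript
// Parameters for creating the GWLB target group described above; the Name,
// VpcId, TargetType, and health-check protocol all come from the config file.
function gwlbTargetGroupParams(name: string, vpcId: string, targetType: 'instance' | 'ip') {
  return {
    Name: name,                 // Tg_name = config file
    Protocol: 'GENEVE',         // GWLB target groups are always GENEVE
    Port: 6081,                 // fixed GENEVE port for GWLB
    VpcId: vpcId,               // associated with the Firewall VPC
    TargetType: targetType,     // instances (allow IP selection as well)
    HealthCheckProtocol: 'TCP', // HC = TCP (config file)
  };
}
```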

Validate implementation:

For the auto-scaling group - Checkpoint has sample CFN templates with GWLB support; use them as a baseline example of how to configure/set this up. On that page, scroll to: Gateway Load Balancer (GWLB) Auto Scaling Group - the second example is likely closest to what we want to do. Ensure it also continues to support Fortinet capabilities.

[FEATURE] 7.80 Deploy new Guardduty S3 capabilities worldwide

** This ticket builds upon, or requires duplicating, the "results" of what we did in internal ticket 7.25, but for the new S3 features added to GuardDuty **

Info:

Summary

  • Using the AWS Organizations API, enable GuardDuty S3 functionality (all accounts, all regions)
  • Ensure we capture existing accounts created before turning on the S3 feature
  • Ensure all new accounts are auto-enabled by organizations
  • Ensure customers upgrading get full functionality/enablement
  • Enable using new parameters in the config file:
  • add new variables
    • "guardduty-s3": true,
    • "guardduty-s3-excl-regions": [],
  • occurs no matter the status of alz-baseline
  • use the central security services account and region for the "GD admin account" (we don't think this can be different from the GD settings anyway)
  • NOTE: If "guardduty": false, then "guardduty-s3" will also be FALSE, regardless of config file setting. guardduty-s3 is only used if guardduty:true.
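
The enablement logic above could be sketched as follows (an illustrative helper, not the Accelerator's actual code; the interface keys mirror the proposed config variables, and guardduty:false forces guardduty-s3 off regardless of its own setting):

```typescript
// Proposed config keys for the GuardDuty S3 feature.
interface GdConfig {
  guardduty: boolean;
  'guardduty-s3': boolean;
  'guardduty-s3-excl-regions': string[];
}

// Regions in which the S3 protection should be enabled: none if GuardDuty
// itself is off or the feature is off, otherwise all enabled regions minus
// the exclusion list.
function guarddutyS3Regions(cfg: GdConfig, enabledRegions: string[]): string[] {
  if (!cfg.guardduty || !cfg['guardduty-s3']) return [];
  return enabledRegions.filter(r => !cfg['guardduty-s3-excl-regions'].includes(r));
}
```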

BELOW is info from ticket 7.25 (reference only)

Summary:

  • This task is performed ONCE for EACH region, in the ROOT account only
  • This task enables GuardDuty for all existing and future new accounts in the AWS Organization, setting a designated account as the master account
  • Occurs if alz-baseline=false and global-options/central-security-services/guard-duty=true
  • Reference: https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_organizations.html
  • set the GuardDuty master account to: global-options/central-security-services/account (security in this example)
  • fix the spelling of guardduty in the parameters file, removing the -
  • all non-suspended accounts in the organization should be added

This ticket uses these parameters:
    "central-security-services": {
      "account": "security",
      "region": "ca-central-1",
      "security-hub": true,
      "guard-duty": true,

STEPS

  • See the instructions in the reference: https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_organizations.html

  • For each region: Register a GuardDuty delegated administrator for your organization, per global-options/central-security-services/account (From the Org Master Account)

  • For each region: Add existing organization accounts as members, use type: via Organizations, select enable in the banner bar to select all existing accounts, and then choose Add Member (THIS IS DONE IN THE DESIGNATED GD MASTER ACCOUNT)

  • For each region: Automate the addition of new organization accounts as members - under Settings, choose Accounts, then turn on Auto-enable. (MAY BE DONE BY THE PREVIOUS STEP)

  • Can we do this in parallel, rather than sequentially?

ACCEPTANCE CRITERIA

  • if alz-baseline=false and global-options/central-security-services/guard-duty=true
  • the GuardDuty master account is set to the security account (based on the config file) in all regions which have GuardDuty
  • All existing accounts have GuardDuty enabled in all regions which have GuardDuty
  • Auto-enable is set for all new accounts, in all regions which have GuardDuty

[ENHANCEMENT] Replace CDK modules pending deprecation

Required Basic Info

  • Accelerator Version: v1.2.3

Is your feature request related to a problem? Please describe.

  • CDK deprecated features/functionality (currently still supported)
  • These 2 CDK features, used in 3 ASEA modules, are in the process of being deprecated; replace as appropriate:
    // eslint-disable-next-line deprecation/deprecation
    context.done = resolve;
    // eslint-disable-next-line deprecation/deprecation
    return cdk.Lazy.stringValue({
  • If other modules are pending deprecation, replace functionality/move to new mechanism so we have no pending deprecation features in codebase.

Describe the solution you'd like

  • replace above CDK modules with replacement code/CDK functionality

Describe alternatives you've considered

  • N/A

[BUG][Functional] Duplicate entries for account name being added to the config file by the accelerator


Required Basic Info

  • Accelerator Version: v1.2.2
  • Install Type: Both
  • Install Branch: Standalone
  • Upgrade from version: N/A

Describe the bug
Workload accounts created by the UI or API/CLI and moved into an SEA-managed OU have entries dynamically added to the config file. A spelling typo ('') is causing erroneous JSON attributes to be added to the config file.

Example:

    "myworkloadaccount": {
      "account-name": "....",
      "accoount-name": "...t"
    },

Failure Info

  • What error messages have you identified, if any: None
  • What symptoms have you identified, if any: None

Required files

Steps To Reproduce

  1. Create an AWS Account
  2. State Machines Runs
  3. Move it to a managed OU

Expected behavior

Only see "account-name" attribute.


[BUG][OTHER] S3 'consistency' Issues

Required Basic Info

  • Accelerator Version: v1.2.3
  • Install Type: Clean
  • Install Branch: Standalone
  • Which State did the Main State Machine Fail in: Verify Files

Describe the bug

  • we are running into occasional issues where files that exist in S3 are not found
    • by the verification logic, or by EC2 instances attempting to access them
  • In this case, "Verify Files" fails the SM because a file which was successfully copied in Phase0 fails to exist in S3
  • screenshots show the file does actually exist
  • screenshots show the file creation timestamp is before the verify-file failure timestamp
  • screenshots show an aws s3 ls using the exact cut/paste filename succeeds
  • Additionally, we have occasionally had similar issues with the firewall config file failing to be read by the firewalls at boot/userdata from S3 - the files are all proven to exist in this case as well
  • we thought it was an eventual-consistency timing issue, but S3 is now strongly consistent
  • reruns of the state machine succeed with no changes/updates

Failure Info

  • Screenshot of error, including timestamp:
    image

  • screenshot of aws s3 ls on exact cut/paste filename output, including timestamp:
    image

  • screenshot of s3 console info of file:
    image

Please validate

  • please validate that all accelerator verify-files logic is accurate and up to date
  • please check for CDK/SDK library updates that may include a fix in a later release
  • if nothing is found after the above, ticket the S3 service team and tag me
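
Since reruns succeed with no changes, one hedged interim mitigation for the verify-files logic could be a bounded retry around the existence check; `existsWithRetry` below is an illustrative sketch, not the Accelerator's actual code:

```typescript
// Retry a boolean existence check a fixed number of times with a delay,
// returning true as soon as the check passes.
async function existsWithRetry(
  check: () => Promise<boolean>,
  attempts: number = 3,
  delayMs: number = 1000,
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    if (await check()) return true;
    if (i < attempts - 1) await new Promise(res => setTimeout(res, delayMs));
  }
  return false;
}
```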

[BUG][Config Validation] z141 - Some VPC removal protections dropped

Short Problem Description

  • while changes to a VPC configuration appear to still be blocked, one of my colleagues removed (xx'd out) a VPC and the state machine allowed the change, executing and then failing when it could not remove the VPC
  • ticket z002 (https://issues.amazon.com/issues/PBMM-83) was supposed to prevent this from happening, along with other unsupported changes
  • sometimes NACL changes are allowed, sometimes blocked
  • we need to go back through and re-assess ALL blocks and make sure they are preventing all changes as originally intended
  • we need to go back through and re-assess ALL "scoped overrides" and make sure they override the check as intended
    • 'ov-global-options': false,
    • 'ov-del-accts': false,
    • 'ov-ren-accts': false,
    • 'ov-acct-email': false,
    • 'ov-acct-ou': false,
    • 'ov-acct-vpc': false,
    • 'ov-acct-subnet': false,
    • 'ov-tgw': false,
    • 'ov-mad': false,
    • 'ov-ou-vpc': false,
    • 'ov-ou-subnet': false,
    • 'ov-share-to-ou': false,
    • 'ov-share-to-accounts': false,
    • 'ov-nacl': false
  • we need to add any new blocks required since we did z002.

Example

FYI - For the ADDITIONALLY/OVERRIDES ISSUE - ProServe was doing a deployment today. They turned on az-d, which fails due to NACL updates (expected); they then incremented the NACL IDs by one (i.e. 100 becomes 101), and the comparison blocks blocked the change. They then set {"ov-nacl": true} and it still failed, but {"overrideComparison": true} succeeded. Just more evidence of the "SCOPED" overrides not functioning properly. Error was:

    {"errorType":"Error","errorMessage":"There were errors while comparing the configuration changes:\nConfigCheck: blocked changing config path \"organizational-units/Dev/vpc/0/subnets/3/nacls/1/rule\"\nConfigCheck: blocked changing config path \"organizational-units/Dev/vpc/0/subnets/3/nacls/0/rule\"","trace":["Error: There were errors while comparing the configuration changes:","ConfigCheck: blocked changing config path \"organizational-units/Dev/vpc/0/subnets/3/nacls/1/rule\"","ConfigCheck: blocked changing config path \"organizational-units/Dev/vpc/0/subnets/3/nacls/0/rule\""," at Runtime.Xn [as handler] (/var/task/index.js:329:271518)"," at processTicksAndRejections (internal/process/task_queues.js:97:5)"]}

Expected Behavior

  • Accelerator fails with "ConfigCheck: blocked changing..." error (and should NOT continue)
  • As z002 was a long time ago, can we also take a quick review to make sure other blocks remain in place? Are there any additional new blocks we need to add?

Actual Behavior

  • Accelerator does NOT block customer from making unsupported VPC removal change
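
The scoped-override behaviour expected in the ProServe example could be sketched as below. This is a hypothetical illustration of the intended semantics, not the actual ASEA comparison code: the patterns, flag names, and function name are assumptions, and the key point is that the most specific matching scope (nacls before subnets before vpc) decides which `ov-*` flag applies, while `overrideComparison` always wins.

```typescript
// Ordered from most to least specific: the first matching scope decides
// which override flag governs a blocked config path.
const OVERRIDE_RULES: Array<{ pattern: RegExp; flag: string }> = [
  { pattern: /\/nacls\/\d+\//, flag: 'ov-nacl' },
  { pattern: /^organizational-units\/[^/]+\/vpc\/\d+\/subnets\//, flag: 'ov-ou-subnet' },
  { pattern: /^organizational-units\/[^/]+\/vpc\//, flag: 'ov-ou-vpc' },
];

// Decide whether a change to a blocked config path should be allowed,
// given the customer's override flags.
function isChangeAllowed(path: string, overrides: Record<string, boolean>): boolean {
  if (overrides['overrideComparison']) return true; // global override always wins
  const rule = OVERRIDE_RULES.find(r => r.pattern.test(path)); // most specific match
  return rule ? overrides[rule.flag] === true : false;
}
```

Under this model, the NACL rule path from the error above would be allowed by {"ov-nacl": true} alone, which is the behaviour ProServe expected.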

[ENHANCEMENT] Add AWS Security Hub integration with AWS Organizations

  • Accelerator Version: (eg. v1.2.3)
  • Install Type: (Upgrade)
  • Install Branch: (Standalone)
  • Upgraded from version: (v1.2.2)

This feature is to simplify the ASEA's codebase and reduce technical complexity by integrating Security Hub with the organization (just announced, more info at https://aws.amazon.com/about-aws/whats-new/2020/11/aws-security-hub-integrates-with-aws-organizations-for-simplified-security-posture-management/).

Implementation:

  1. Designate a Security Hub administrator account (this API call is done in the organization root account across all regions):
    AWS CLI: aws securityhub enable-organization-admin-account --admin-account-id <security account ID>

  2. Automatically enable new organization accounts for Security Hub (this API call is done in the security account across all regions):
    AWS CLI: aws securityhub update-organization-configuration --auto-enable

  3. Ensure that ALL accounts in the organization are enabled as Security Hub member accounts (this API call is done in the security account across all regions):
    AWS CLI: aws securityhub create-members --account-details '[{"AccountId": "123456789111"}, {"AccountId": "123456789222"}]'

Verification:
The "describe-organization-configuration" AWS CLI command can be run in the security account to confirm that AutoEnable is set to "True". Also, the "aws securityhub list-members --no-only-associated" AWS CLI command can be run in the security account to confirm that all of the organization's accounts have MemberStatus set to "Enabled".

[FEATURE] 7.75 - Deploy AWS Systems Manager Automation Documents

Summary

The AWS System Manager (SSM) Automation document defines the actions that Systems Manager performs on your managed instances and other AWS resources when an automation execution runs. A document contains one or more steps that run in sequential order. Each step is built around a single action. Output from one step can be used as input in a later step.

One immediate use case is to leverage SSM Automation Documents as a remediation action for AWS Config Rules. AWS Config allows you to remediate noncompliant resources that are evaluated by AWS Config Rules. AWS Config applies remediation using AWS Systems Manager Automation documents. These documents define the actions to be performed on noncompliant AWS resources evaluated by AWS Config Rules.

In a multi-account environment, it is possible to 'share' Automation documents with other accounts, allowing them to be managed from a central location. It is possible to use an Automation document created in Account 'A' and shared with Account 'B' in an AWS Config rule.

Note: This cannot be configured in the Console UI; only possible via the CLI/API

Here is a sample AWS Config remediation action:

[
    {
        "ConfigRuleName": "elb-logging-enabled-test",
        "TargetType": "SSM_DOCUMENT",
        "TargetId": "arn:aws:ssm:ca-central-1:144940249523:document/PBMMAccel-alb-enable-logging",
        "Parameters": {
            "AutomationAssumeRole": {
                "StaticValue": {
                    "Values": [
                        "arn:aws:iam::517328343847:role/PBMMAccel-SSM-ALB-Remediation-Role_Test"
                    ]
                }
            },
            "LoadBalancerArn": {
                "ResourceValue": {
                    "Value": "RESOURCE_ID"
                }
            },
            "LogDestination": {
                "StaticValue": {
                    "Values": [
                        "517328343847-elblogs"
                    ]
                }
            },
            "LogDestinationPrefix": {
                "StaticValue": {
                    "Values": [
                        "elblogs3"
                    ]
                }
            }
        },
        "Automatic": false,
        "MaximumAutomaticAttempts": 5,
        "RetryAttemptSeconds": 60
        
    }
]

Note: The TargetID is using the ARN of the Automation document in account '144940249523', but this is deployed in account '517328343847' as an example

---> QUESTION - several of the above items are parameters that need to be search/replaced. How do you suggest this be implemented? SSMAD input parameters, or search/replace using a unique string variant?

Design

SSM Automation Documents (SSMAD) can be stored in the github repository:

root
`-- reference-artifacts
    |-- ssm
        |-- automation-documents
        |   `-- PBMMAccel-ssm-elb-enable-logging.yaml
        |
        `-- iam
            `-- PBMMAccel-ssm-elb-enable-logging-exec-perms.json

Config File

{
  "global-options": {
      "scps": [
          ...
          ...
          ...
      ],
      "ssm-automation": [{
          "account": "operations",
          "regions": ["ca-central-1", "us-east-1"],
          "documents": [
            {
                "name": "PBMMAccel-SSM-ELB-Enable-Logging",
                "description": "Calls the AWS CLI to enable access logs on a specified ELB.",
                "automation-document": "PBMMAccel-ssm-elb-enable-logging.[yaml|json]",
                "automation-exec-perms": "PBMMAccel-ssm-elb-enable-logging-exec-perms.json"
            }
          ]
      }]
  }
  ...
  ...
  "organizational-units": {
     "my-org-unit": {
         ....
         ....
         "ssm-automation": [{
            "account": "operations",
            "regions": ["ca-central-1", "us-east-1"],
            "documents": ["PBMMAccel-SSM-ELB-Enable-Logging","PBMMAccel-SSM-document 2","PBMMAccel-SSM-document 2"]
         }]
     }
  }
}

The Global "ssm-automation" section defines the list of known SSMADs, which are provisioned in the AWS account "account".

The OU json can define a "ssm-automation" object with a "documents" array defining which SSMADs should be shared. In other words, for all AWS Accounts in or below the 'my-org-unit' Organization Unit, the "ssm-automation" owner account will share the specified SSMAD. The IAM role and permissions defined by "automation-exec-perms" must be provisioned in each shared account.

Sample Script

ssm-stack.ts

import * as cdk from '@aws-cdk/core';
import * as iam from '@aws-cdk/aws-iam';
import * as ssm from '@aws-cdk/aws-ssm';
import { AwsSdkCall, AwsCustomResource } from '@aws-cdk/custom-resources';

import * as yaml from 'yaml';

export interface SSMDocument {
    readonly name: string;
    readonly description: string;
    readonly automationDocument: string;
    readonly automationExecParams: string;
    readonly sharedWith: Array<string>;
}

interface SSMStackProps extends cdk.StackProps {
    readonly ssmDocuments: Array<SSMDocument>;
}

export class SSMStack extends cdk.Stack {
    constructor(scope: cdk.Construct, id: string, props?: SSMStackProps) {
        super(scope, id, props);

        if (props?.ssmDocuments != null) {
            for (const ssmDocument of props.ssmDocuments) {

                const doc = new ssm.CfnDocument(this, ssmDocument.name, {
                    name: ssmDocument.name,
                    documentType: 'Automation',
                    content: yaml.parse(ssmDocument.automationDocument)
                });

                const ssmShare = new SSMDocumentShare(this, ssmDocument.name + "Share",
                    {
                        documentName: ssmDocument.name,
                        sharedWith: ssmDocument.sharedWith
                    });
                ssmShare.node.addDependency(doc);

            }
        }
    }
}


interface SSMDocumentShareProps {
    readonly documentName: string;
    readonly sharedWith: Array<string>;
}

export class SSMDocumentShare extends AwsCustomResource {
    constructor(scope: cdk.Construct, name: string, props: SSMDocumentShareProps) {

        const ssmAwsSdkCall: AwsSdkCall = {
            service: 'SSM',
            action: 'modifyDocumentPermission',
            parameters: {
                'Name': props.documentName,
                'PermissionType': 'Share',
                'AccountIdsToAdd': props.sharedWith
            },
            physicalResourceId: { id: Date.now().toString() } // Update physical id to always fetch the latest version
        };

        super(scope, name,
            {
                onUpdate: ssmAwsSdkCall,
                policy: {
                    statements: [new iam.PolicyStatement(
                        {
                            resources: ['*'],
                            actions: ['ssm:ModifyDocumentPermission'],
                            effect: iam.Effect.ALLOW,
                        }
                    )]
                }
            });
    }
}

app.ts

#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from '@aws-cdk/core';
import { SSMStack, SSMDocument } from '../lib/ssm-stack';
import * as fs from 'fs';
import * as path from 'path';


const app = new cdk.App();

const ssmDocuments: Array<SSMDocument> = new Array<SSMDocument>();

const ssmElbEnableLogging1: SSMDocument = 
{
    name: 'PBMMAccel-SSM-ELB-Enable-Logging_Dev1',
    description: 'Calls the AWS CLI to enable access logs on a specified ELB.',
    automationDocument: fs.readFileSync(path.join(__dirname, "../lib/automation-documents/PBMMAccel-ssm-elb-enable-logging.yaml"), "utf-8"),
    automationExecParams: fs.readFileSync(path.join(__dirname, "../lib/iam/PBMMAccel-ssm-elb-enable-logging-permissions.json"), "utf-8"),
    sharedWith:["866470196944"]    
};

const ssmElbEnableLogging2: SSMDocument = 
{
    name: 'PBMMAccel-SSM-ELB-Enable-Logging_Dev2',
    description: 'Calls the AWS CLI to enable access logs on a specified ELB.',
    automationDocument: fs.readFileSync(path.join(__dirname, "../lib/automation-documents/PBMMAccel-ssm-elb-enable-logging.yaml"), "utf-8"),
    automationExecParams: fs.readFileSync(path.join(__dirname, "../lib/iam/PBMMAccel-ssm-elb-enable-logging-permissions.json"), "utf-8"),
    sharedWith:["866470196944"]    
};


ssmDocuments.push(ssmElbEnableLogging1);
ssmDocuments.push(ssmElbEnableLogging2);

new SSMStack(app, 'SSMDocuments',
{
    env: {
        region: 'ca-central-1'
    },
    ssmDocuments: ssmDocuments
});

[BUG][Functional] Security Hub - Unable to automatically disable security framework

Required Basic Info

  • Accelerator Version: v1.2.3
  • Install Type: ALL
  • Install Branch: Standalone
  • Upgrade from version: N/A

Describe the bug

  • the Accelerator properly enables and configures security hub
  • the Accelerator properly deploys the selected security frameworks
  • no ability to disable a security framework automatically, once enabled
  • Specifically, the customer enables the AWS, PCI, and CIS frameworks
  • the customer then wishes to disable the PCI framework
  • the PCI framework is removed from ASEA control, but remains activated in all accounts

Failure Info

  • No error messages

Expected behavior
Changing this:

    "security-hub-frameworks": {
      "standards": [
        {
          "name": "AWS Foundational Security Best Practices v1.0.0",
          "controls-to-disable": ["IAM.1"]
        },
        {
          "name": "PCI DSS v3.2.1",
          "controls-to-disable": ["PCI.IAM.3", "PCI.KMS.1", "PCI.S3.3", "PCI.EC2.3", "PCI.Lambda.2"]
        },
        {
          "name": "CIS AWS Foundations Benchmark v1.2.0",
          "controls-to-disable": ["CIS.1.20", "CIS.1.22", "CIS.2.8"]
        }
      ]
    },

to this

    "security-hub-frameworks": {
      "standards": [
        {
          "name": "AWS Foundational Security Best Practices v1.0.0",
          "controls-to-disable": ["IAM.1"]
        },
        {
          "name": "CIS AWS Foundations Benchmark v1.2.0",
          "controls-to-disable": ["CIS.1.20", "CIS.1.22", "CIS.2.8"]
        }
      ]
    },
  • should disable the PCI DSS framework (or any/all frameworks removed)
  • would it be easier if we added a "disabled": true instead? i.e.
        {
          "name": "PCI DSS v3.2.1",
          "controls-to-disable": [],
          "disabled": true
        },

Additionally, while not originally a requirement

  • Could we do the same for individual controls (assuming this is not an extensive amount of extra work)?
    • i.e. changing "controls-to-disable": ["CIS.1.20", "CIS.1.22", "CIS.2.8"] to "controls-to-disable": ["CIS.1.20", "CIS.1.22"] would re-enable control "CIS.2.8"
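
The comparison implied by both requests is the same set difference: anything present in the previous config but absent from the new one should be disabled (frameworks) or re-enabled (controls). A minimal sketch, with a hypothetical helper name:

```typescript
// Items present in the previous config but removed from the current one.
function removedItems(previous: string[], current: string[]): string[] {
  return previous.filter(name => !current.includes(name));
}

// Frameworks to disable after the PCI entry is removed:
const toDisable = removedItems(
  ['AWS Foundational Security Best Practices v1.0.0', 'PCI DSS v3.2.1', 'CIS AWS Foundations Benchmark v1.2.0'],
  ['AWS Foundational Security Best Practices v1.0.0', 'CIS AWS Foundations Benchmark v1.2.0'],
); // ['PCI DSS v3.2.1']

// Controls to re-enable when a control is dropped from controls-to-disable:
const toReenable = removedItems(['CIS.1.20', 'CIS.1.22', 'CIS.2.8'], ['CIS.1.20', 'CIS.1.22']); // ['CIS.2.8']
```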

QUESTION

  • Would it be easier to simply update the codebase to use this feature and simplify the codebase all at the same time:
    #516

[FEATURE] Deploy customer provided Service Catalog Items

Deploy customer provided Service Catalog Items

  • Deploy to central account and share to appropriate accounts by OU's (allow excluding accounts)
  • Could be used to provision a SC item which enables requesting a custom local account VPC using the CIDR mgmt capabilities
  • see related ticket #714
