awslabs / aws-deployment-framework

The AWS Deployment Framework (ADF) is an extensive and flexible framework to manage and deploy resources across multiple AWS accounts and regions based on AWS Organizations.

License: Apache License 2.0

Makefile 1.06% Python 97.85% Shell 0.86% Jinja 0.21% JavaScript 0.02%
aws deployment multi-account devops

aws-deployment-framework's Introduction

AWS Deployment Framework

Build Status

MegaLinter

The AWS Deployment Framework (ADF) is an extensive and flexible framework to manage and deploy resources across multiple AWS accounts and regions within an AWS Organization.

ADF allows for staged, parallel, multi-account, cross-region deployments of applications or resources via the structure defined in AWS Organizations while taking advantage of services such as AWS CodePipeline, AWS CodeBuild, and AWS CodeCommit to alleviate the heavy lifting and management compared to a traditional CI/CD setup.

ADF allows for clearly defined deployment and approval stages, which are stored in a centralized configuration file. It also allows for account-based bootstrapping, by which you define an AWS CloudFormation template and assign it to a specific Organizational Unit (OU) within AWS Organizations. From there, any account you move into this OU will automatically apply this template as its baseline.

Quick Start

  • Refer to the Installation Guide for Installation steps.
  • Refer to the Admin Guide for guidance on how to manage and administer ADF.
  • Refer to the User Guide for using ADF to generate and manage pipelines.
  • Refer to the Technical Guide if you want to learn more about the inner workings of ADF, for example if you want to contribute or build on top of ADF.
  • Refer to the Samples Guide for a detailed walk through of the provided samples.
  • Refer to the Multi-Organization ADF Setup to use ADF in an enterprise-grade setup.

aws-deployment-framework's People

Contributors

abhi1094, adamcoxon, alexevansigg, alfred-nsh, amimarkels, andreasaugustin, andyefaa, at7925, avolip, benbridts, bundyfx, cattz, dependabot[bot], igordust, ivan-aws, javydekoning, kalleeh, lasv-az, pozeus, rickardl, sbkok, shendriksen, stemons, stewartw, thiezn, thomasmcgannon, triha74, tylergohl, yvthepief, zscholl

aws-deployment-framework's Issues

Flexible pipeline stages

Hi,

We are using ADF pipelines for almost all of our multi-account deployments but are increasingly finding scenarios where it would be beneficial to be able to define the various pipeline stages ourselves.

For instance, we would like to build a pipeline for managing containers in an ECR repository. We would like to roll out the ECR repository using CloudFormation and have a CodeBuild step next to it, using a Docker image, that builds and uploads the container image to the ECR repository.

Here's an example deployment_map showing what the interface could possibly look like:

pipelines:
  - name: myfreefullyflexiblepipeline
    type: cc-freeforall 
    params:
      - SourceAccountId: 342596870
      - NotificationEndpoint: [email protected]
      - RestartExecutionOnUpdate: true
    stages:
      - type: adf-codebuild
        # defaults to python container and buildspec for parsing params
      - type: codebuild
        # This can be anything you want to do yourself after the adf-codebuild
        image: docker
        buildspec: buildspec2.yml
        targets:
          - path: 123456789
            regions: eu-west-1
      - approval
      - type: cloudFormation
        # templatefile defaults to template.yml 
        # templateparams defaults to params/ folder
        targets:
          - path: 123456789
            regions: eu-west-1
      - type: CloudFormation
        templatefile: template2.yml
        templateparams: params2/
        targets:
          - path: 123456789
            regions: eu-west-1

I suspect it's going to be quite difficult to make a change like this without breaking backwards compatibility. Also, I read somewhere that you're thinking of switching out jinja2 in favour of AWS Cloud Development Kit (CDK) which might provide other options to accomplish this.

I'm not sure if I fully like my proposed solution yet as it can get a bit unwieldy. Perhaps there is a better way of accomplishing this.

Having flexibility like this should also resolve issues like #89

Cheers,
Thiezn

Support optional resolve and import params in global.json file

Feature Request:
It would be good if the resolve: and import: functionality in the param.json files could be configured to return a default null or empty-string value if the parameter cannot be resolved.

This can be useful as sometimes CloudFormation templates have conditional blocks and in some cases a particular parameter is not used or even expected to exist. At the moment the template would fail in this scenario.

Suggestion:

Add a question mark ? to the end of the import value. If the question mark is present and the value can't be resolved, return an empty string "", otherwise return the value.
If the question mark is not present, just fail as it does now.

{
    "Parameters": {
        "ExportedResourceId": "import:123456789101:eu-west-1:stack_name:export_key?"
    }
}
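A minimal sketch (a hypothetical helper, not ADF's actual resolver) of how the trailing question mark could be honoured, shown for a resolve: lookup against SSM; the same pattern would apply to import values:

import boto3
from botocore.exceptions import ClientError

def resolve_optional(value, ssm_client):
    # A trailing '?' marks the lookup as optional: return '' instead of
    # raising when the parameter does not exist.
    optional = value.endswith("?")
    name = value[len("resolve:"):].rstrip("?")
    try:
        return ssm_client.get_parameter(Name=name)["Parameter"]["Value"]
    except ClientError as error:
        if optional and error.response["Error"]["Code"] == "ParameterNotFound":
            return ""
        raise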

Insecure bucket/key policy?

Just noticed some aws:PrincipalOrgID use which I think might be unintended:

The way I understand this condition when used on a bucket/key policy, it actually gives any account in the organisation said privileges, which means that all accounts in the org are given s3:Put* (which includes s3:PutBucketPolicy and friends) on the bucket and kms:Decrypt on the key.

If I've understood this correctly it should probably be fixed?

Pipeline completion triggers

Pipeline#1 should be able to trigger Pipeline#2 or any other resource that is mentioned in the target after its execution succeeds. This can be done by CloudWatch Events triggering on completion of the source pipeline to start execution of the target pipeline.

pipelines:
  - name: sample-vpc  # The name of your pipeline (This will match the name of your repository)
    type: cc-cloudformation  # The pipeline_type you wish to use for this pipeline
    completion-trigger:
      pipeline:
        - sample-iam
    params:
      - SourceAccountId: 558421765047  # The source account that will hold the codebase
    targets:  # Deployment stages
      - /banking/testing
      - approval
      - /banking/production
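For illustration, a minimal boto3 sketch (pipeline names, ARNs and the role are placeholders) of the kind of CloudWatch Events rule that could start the target pipeline once the source pipeline succeeds:

import json
import boto3

events = boto3.client("events")

# Fire when the source pipeline finishes successfully.
events.put_rule(
    Name="adf-trigger-sample-iam",
    EventPattern=json.dumps({
        "source": ["aws.codepipeline"],
        "detail-type": ["CodePipeline Pipeline Execution State Change"],
        "detail": {"pipeline": ["sample-vpc"], "state": ["SUCCEEDED"]},
    }),
)

# Start the downstream pipeline as the rule target.
events.put_targets(
    Rule="adf-trigger-sample-iam",
    Targets=[{
        "Id": "StartSampleIam",
        "Arn": "arn:aws:codepipeline:eu-west-1:111111111111:sample-iam",
        "RoleArn": "arn:aws:iam::111111111111:role/adf-start-pipeline",
    }],
)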

SSM Parameter Store Values updating with same values

SSM Parameter Store values are updated in situations where they don't need to be updated. We can add a pre-flight check method to ensure the value is actually changing before updating, to make this experience better.
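Something along these lines could serve as the pre-flight check (a sketch only; the helper name is made up):

import boto3
from botocore.exceptions import ClientError

ssm = boto3.client("ssm")

def put_parameter_if_changed(name, value):
    # Only call PutParameter when the stored value actually differs,
    # avoiding a pointless new parameter version.
    try:
        current = ssm.get_parameter(Name=name)["Parameter"]["Value"]
        if current == value:
            return
    except ClientError as error:
        if error.response["Error"]["Code"] != "ParameterNotFound":
            raise
    ssm.put_parameter(Name=name, Value=value, Type="String", Overwrite=True)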

Target accounts in deployment_map.yml could be mistakenly parsed as octal numbers

When setting up a pipeline with a target account which is a valid octal number (e.g. 020222070707) it seems that somehow it is converted from octal to decimal (e.g. 2185785799) when processed by ADF. I was able to work around this problem by surrounding the account number in quotes.

pipelines:
  - name: vpc-baseline
    type: cc-cloudformation
    params:
        - SourceAccountId: 123456789098
        - NotificationEndpoint: [email protected] 
    targets:
        - 020222070707

This is the error I received when ADF tried to generate the new pipeline.

[Container] 2019/06/22 11:21:42 Running command python ./adf-build/generate_pipelines.py 
2019-06-22 11:21:43,228 | INFO | __main__ | ADF Version 1.0.5 | (generate_pipelines.py:99) 
2019-06-22 11:21:43,228 | INFO | __main__ | ADF Log Level is INFO | (generate_pipelines.py:100) 
Traceback (most recent call last): 
  File "./adf-build/generate_pipelines.py", line 173, in <module> 
    main() 
  File "./adf-build/generate_pipelines.py", line 140, in main 
    pipeline_target.fetch_accounts_for_target() 
  File "/codebuild/output/srcxxxxxxxx/src/adf-build/target.py", line 112, in fetch_accounts_for_target 
    raise InvalidDeploymentMapError("Unknown defintion for target: {0}".format(self.path)) 
errors.InvalidDeploymentMapError: Unknown defintion for target: 2185785799 
 
[Container] 2019/06/22 11:21:44 Command did not exit successfully python ./adf-build/generate_pipelines.py exit status 1 
[Container] 2019/06/22 11:21:44 Phase complete: BUILD State: FAILED 
[Container] 2019/06/22 11:21:44 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: python ./adf-build/generate_pipelines.py. Reason: exit status 1 
[Container] 2019/06/22 11:21:45 Entering phase POST_BUILD 
[Container] 2019/06/22 11:21:45 Phase complete: POST_BUILD State: SUCCEEDED 
[Container] 2019/06/22 11:21:45 Phase context status code:  Message:  
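The behaviour can be reproduced with PyYAML directly: YAML 1.1 treats an unquoted scalar with a leading zero and only the digits 0-7 as an octal integer, and quoting the account ID keeps it a string.

import yaml

print(yaml.safe_load("targets: [020222070707]"))
# {'targets': [2185785799]}  <- parsed as octal, hence the confusing error

print(yaml.safe_load("targets: ['020222070707']"))
# {'targets': ['020222070707']}  <- quoted, stays a string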

Idea: Adding optional schedule to ADF pipelines

Hi,

It would be helpful to be able to add a schedule to trigger a pipeline on a certain time. Would it be possible to integrate this into ADF so we can do something like the following in the deployment_map.yml:

  # Creates, prepares and moves accounts in the AWS organization
  - name: accounts
    type: cc-buildonly
    params:
      - Schedule: cron(0 12 * * ? *)
      - SourceAccountId: 1122334455
      - NotificationEndpoint: [email protected]
      - RestartExecutionOnUpdate: true

The Schedule parameter would take a Cloudwatch ScheduledEvent
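Under the hood this would presumably map onto a scheduled CloudWatch Events rule; a minimal boto3 sketch (rule name, pipeline ARN and role are placeholders):

import boto3

events = boto3.client("events")

# Run the rule every day at 12:00 UTC.
events.put_rule(
    Name="adf-schedule-accounts-pipeline",
    ScheduleExpression="cron(0 12 * * ? *)",
)

# Point the rule at the pipeline to start.
events.put_targets(
    Rule="adf-schedule-accounts-pipeline",
    Targets=[{
        "Id": "StartAccountsPipeline",
        "Arn": "arn:aws:codepipeline:eu-west-1:111111111111:accounts",
        "RoleArn": "arn:aws:iam::111111111111:role/adf-start-pipeline",
    }],
)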

Cheers,
Thiezn

aws-deployment-framework-bootstrap-pipeline failed for v1.0.0.

After launching ADF via the Serverless Application Repository, stage "UploadAndUpdateBaseStacks" in pipeline "aws-deployment-framework-bootstrap-pipeline" failed, here is the log:
...
[Container] 2019/06/06 22:40:44 Running command python adf-build/main.py
2019-06-06 22:40:44,925 | INFO | main | ADF Version 1.0.0 | (main.py:206)
2019-06-06 22:40:44,925 | INFO | main | ADF Log Level is INFO | (main.py:207)
2019-06-06 22:40:46,303 | INFO | organizations | SCPs are currently enabled within the Organization | (organizations.py:46)
2019-06-06 22:41:08,880 | INFO | cloudformation | 355189840595 - Executing Cloudformation Change Set with name: adf-global-base-adf-build | (cloudformation.py:245)
2019-06-06 22:41:09,071 | INFO | cloudformation | 355189840595 - Waiting for CloudFormation stack: adf-global-base-adf-build in us-west-2 to reach stack_create_complete | (cloudformation.py:130)
2019-06-06 22:42:02,117 | INFO | cloudformation | 182360788898 - Executing Cloudformation Change Set with name: adf-regional-base-deployment | (cloudformation.py:245)
2019-06-06 22:42:02,389 | INFO | cloudformation | 182360788898 - Waiting for CloudFormation stack: adf-regional-base-deployment in us-east-1 to reach stack_update_complete | (cloudformation.py:130)
2019-06-06 22:44:47,921 | INFO | cloudformation | 182360788898 - Executing Cloudformation Change Set with name: adf-global-base-deployment | (cloudformation.py:245)
2019-06-06 22:44:48,251 | INFO | cloudformation | 182360788898 - Waiting for CloudFormation stack: adf-global-base-deployment in us-west-2 to reach stack_update_complete | (cloudformation.py:130)
2019-06-06 22:47:35,636 | WARNING | organizations | Account 486349564477 is not an Active AWS Account | (organizations.py:127)
2019-06-06 22:47:35,766 | INFO | main | 728858254016 Is in the Root of the Organization, it will be skipped. | (main.py:171)
2019-06-06 22:47:36,198 | INFO | main | | (main.py:202)
Traceback (most recent call last):
File "adf-build/main.py", line 315, in
main()
File "adf-build/main.py", line 292, in main
thread.join()
File "/codebuild/output/src431224951/src/adf-build/shared/python/thread.py", line 30, in join
raise self.exc
File "/codebuild/output/src431224951/src/adf-build/shared/python/thread.py", line 22, in run
**self._kwargs
File "adf-build/main.py", line 199, in worker_thread
cloudformation.create_stack()
File "/codebuild/output/src431224951/src/adf-build/shared/python/cloudformation.py", line 256, in create_stack
create_change_set = self._create_change_set()
File "/codebuild/output/src431224951/src/adf-build/shared/python/cloudformation.py", line 194, in _create_change_set
ChangeSetType=self._get_change_set_type())
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ValidationError) when calling the CreateChangeSet operation: Unable to fetch parameters [deployment_account_id,kms_arn] from parameter store for this account.
Exception ignored in: <module 'threading' from '/usr/local/lib/python3.7/threading.py'>
Traceback (most recent call last):
File "/usr/local/lib/python3.7/threading.py", line 1281, in _shutdown
t.join()
File "/codebuild/output/src431224951/src/adf-build/shared/python/thread.py", line 30, in join
raise self.exc
File "/codebuild/output/src431224951/src/adf-build/shared/python/thread.py", line 22, in run
**self._kwargs
File "adf-build/main.py", line 199, in worker_thread
cloudformation.create_stack()
File "/codebuild/output/src431224951/src/adf-build/shared/python/cloudformation.py", line 256, in create_stack
create_change_set = self._create_change_set()
File "/codebuild/output/src431224951/src/adf-build/shared/python/cloudformation.py", line 194, in _create_change_set
ChangeSetType=self._get_change_set_type())
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ValidationError) when calling the CreateChangeSet operation: Unable to fetch parameters [kms_arn] from parameter store for this account.

[Container] 2019/06/06 22:47:38 Command did not exit successfully python adf-build/main.py exit status 1
[Container] 2019/06/06 22:47:38 Phase complete: BUILD State: FAILED
[Container] 2019/06/06 22:47:38 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: python adf-build/main.py. Reason: exit status 1

CloudFormation: An error occurred while validating the artifact bucket

Hello,

I was able to set up ADF and create our cross-account roles in about 20 sub accounts - works great, thank you so much for your initiative here!

Now I got this error message when I was trying to enable AWS Config in multiple regions.
Does it mean that I have to create regional.yml in the bootstrap repo?
If yes, can you give me a simple example, or should I use DeploymentFrameworkRegionalS3Bucket from deployment/regional.yml?

Thanks


CloudFormation error message

Pipeline: CREATE_FAILED	An error occurred while validating the artifact bucket 'adf-regional-base-deploy-deploymentframeworkregio-webjfbrobbfn': No bucket with the name adf-regional-base-deploy-deploymentframeworkregio-webjfbrobbfn was found. Choose a valid artifact bucket in 'eu-west-1', or create a new artifact bucket to use in your pipeline. (Service: AWSCodePipeline; Status Code: 400; Error Code: InvalidStructureException; ....)

CloudFormation parameters Tab

  S3Bucketeucentral1	/cross_region/s3_regional_bucket/eu-central-1	adf-regional-base-deploy-deploymentframeworkregio-ihpqtyaj1yk5
  S3Bucketeuwest1	/cross_region/s3_regional_bucket/eu-west-1	adf-regional-base-deploy-deploymentframeworkregio-webjfbrobbfn
  S3Bucketuseast1	/cross_region/s3_regional_bucket/us-east-1	adf-global-base-deployment-pipelinebucket-193xtdikqnt96

cat adfconfig.yml

  roles:
    cross-account-access: OrganizationAccountAccessRole  # The role by ADF to assume cross account access

  regions:
    deployment-account: us-east-1 
    targets: 
      - eu-central-1
      - eu-west-1
  config:
    main-notification-endpoint:
      - type: email # slack or email
        target: [email protected]
    moves:
      - name: to-root
        action: safe
    scp:
      keep-default-scp: enabled 
    protected:
      - ou-ejucxxx

cat deployment_maps/adf-config.yml

  - name: adf-config
    type: cc-cloudformation
    params:
      - SourceAccountId: XXXXXXXX
      - NotificationEndpoint: [email protected]
    targets:
      - path: /customers
        regions:
          - us-east-1
          - eu-central-1
          - eu-west-1

Allow adf to deploy in the master account

Hi,

Forgive me for asking for all these new shiny features, I DO like the ADFramework :)

Would it be possible to raise an enhancement request for allowing us to deploy resources through adf in the master account? This would make life easier for us for example to:

  • Enable organisational wide cloudtrail
  • Enable organisational wide config
  • Enable guardduty
  • etc..

Cheers,
Mathijs

replace Jinja2 with CDK

Jinja2 can become complex to manage as templates grow and contain more logic. It would be easier to have CDK applications for each pipeline type that work in the same way Jinja2 does now.

Support for pipelines with build phase only

Hi,

We have a few lambda functions that we want to share between different accounts. To avoid code duplication we would like to have a single dedicated repository and pipeline for the lambda code.

We would like to leverage ADF for the creation of the repository and pipeline but are only interested in the build stage of the pipeline. In the buildspec.yml we will package the code and upload it to an S3 bucket.

I've tried using the cc-cloudformation type and providing an empty target list to the deployment_map.yml file similar to this:

  - name: my-shared-lambda-function
    type: cc-cloudformation
    params:
      - SourceAccountId: 11223344
      - NotificationEndpoint: [email protected]
      - RestartExecutionOnUpdate: true
    targets: []

Unfortunately it fails during the template validation step:

cloudformation.validate_template()
 File "/codebuild/output/src223970246/src/adf-build/shared/python/cloudformation.py", line 130, in validate_template
 raise InvalidTemplateError(error.response['Error']['Message'])
errors.InvalidTemplateError: [/Resources/Pipeline/Type/ArtifactStores] 'null' values are not allowed in templates

Note: I did see another ADF type called cc-s3 but haven't investigated yet whether we can use this for our use case. However, I can imagine there are other use cases for this functionality not involving an S3 bucket.

Another solution might be to introduce a pipeline_type that allows you to define your own pipeline deployment stages. Not sure what that would look like but could give flexibility to someone wanting to leverage adf for anything they want to build.

Cheers,
Thiezn

PS. I realize you can leverage Lambda layers for code de-duplication but that only allows you to share modules. It won't allow you to share the full lambda itself for re-use in other templates.

Create CodeCommit repo from deployment_map entry

I would really like to simplify creation of a new repo to make it "one click to get repo and pipe".

By adding a "create-repo: true" attribute, ADF should be able to create a CodeCommit repo by launching a CloudFormation stack in the SourceAccount. The template could be provided by the pipeline generator repo so that users can extend it.

pipelines:
  - name: my-sample-app
    create-repo: true
    type: cc-customdeploy
    params:
      - SourceAccountId: 9876543210
    targets:
      - path: 12345667899

IAM role separation for master bootstrap

Currently the IAM role created during the bootstrap gives the deployment account access to the master account. For our use case this isn't needed and we would like this to be separated into two roles, one for organization access and one for deployment access to the master account. It would also be good if this could be added as a flag in adfconfig.yml so it can easily be switched off.

ADF Pipeline creation fails if target account starts with zero

aws-deployment-framework-pipelines fails with the error "Unknown definition for target" if the target account ID starts with 0 (zero).

Following example fails for account 011111111111

deployment_map.yml

---
pipelines:
  - name: org-vpc
    type: cc-cloudformation
    params:
      - SourceAccountId: 555555555555
    targets:
      - 999999999999 # Sandbox009
      - 011111111111 # Sandbox001

Log:

Traceback (most recent call last):
File "./adf-build/generate_pipelines.py", line 155, in <module>
main()
File "./adf-build/generate_pipelines.py", line 122, in main
pipeline_target.fetch_accounts_for_target()
File "/codebuild/output/src340903710/src/adf-build/target.py", line 118, in fetch_accounts_for_target
raise InvalidDeploymentMapError("Unknown defintion for target: {0}".format(self.path))
errors.InvalidDeploymentMapError: Unknown defintion for target: 6102444452
[Container] 2019/04/24 19:02:22 Command did not exit successfully python ./adf-build/generate_pipelines.py exit status 1
[Container] 2019/04/24 19:02:22 Phase complete: BUILD State: FAILED

Please also note: the error message "Unknown definition for target: 6102444452" does not refer to 011111111111 but to some other number, which makes it difficult to find the problem when deployment_map.yml has many pipelines.

Support for deploying SCPs to Organizational Units in ADF

It would be useful if you could deploy SCPs through ADF.
I think, as the bootstrap repository already supports an OU structure for deploying the bootstrap templates, it could be enhanced to look for SCP policies in this structure and deploy these to the OU as well.

I would see the new folder structure like:

.
├── adf-build/
├── adfconfig.yml
├── deployment/
├── example-adfconfig.yml
├── global.yml
├── readme.md
├── scp
│   ├── scprule1.scp
│   ├── scprule2.scp
│   └── scprule3.scp
├── team1
│   ├── global.yml
│   ├── lambda_codebase/
│   ├── regional.yml
│   ├── dev
│   │   ├── global.yml
│   │   ├── lambda_codebase/
│   │   ├── regional.yml
│   │   └── rules
│   └── rules

The SCPs would be stored separately from the org structure so that they can be re-used. Either in this repository as I have it in the scp directory, or perhaps a separate repository.

Then within each OU a rules file which would contain a list of policies which should be applied.

> cat team1/rules
scprule2.scp
scprule3.scp

> cat team1/dev/rules
scprule1.scp

Silently failing to create github-cloudformation pipeline

Hi!

We are attempting to switch from CodeCommit to Github as the source (github-cloudformation) for a pipeline in our deployment_map.yml. The commit which implements this change has been picked up by deployment-framework-pipelines in CodePipeline and runs/exits without error; however, it does not create the expected deployment pipeline.

I wish I could give more information, since presumably an error has occurred somewhere, but nothing is being logged in the CreateOrUpdatePipelines stage of the pipeline which might suggest what's going wrong 😅

Make ADF Single Click deployment via AWS SAR

ADF Should be fully automated from an installation perspective all from a single click as part of the Serverless Application Repository. A user should be able to click deploy and get a CodeCommit Url for the bootstrap repository and a CodeCommit Url for the pipelines repository that contains skeleton example files.

Allow CloudFormation Pipeline Parameters to be written yaml

Parameter files used for CloudFormation should be able to be written in YAML and get flipped into JSON before ingesting into CodePipeline. This keeps the experience streamlined in YAML when writing CloudFormation templates and offers an overall better experience.
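The conversion itself is small; a sketch of what the build step could do per parameter file (paths are illustrative):

import json
from pathlib import Path

import yaml

# Convert every params/*.yml file into the .json equivalent CodePipeline expects.
for yml_path in Path("params").glob("*.yml"):
    content = yaml.safe_load(yml_path.read_text())
    yml_path.with_suffix(".json").write_text(json.dumps(content, indent=2))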

State Machine not found - First installation

This issue happened only once and I apologize for not being detailed. I just wanted to log it here to see if anyone has seen this or if it's even reproducible.

Issue: Account bootstrapping failed with error StateMachine not found.

[ERROR] StateMachineDoesNotExist: An error occurred (StateMachineDoesNotExist) when calling the StartExecution operation: State Machine Does Not Exist: 'arn:aws:states:eu-west-1:652199396486:stateMachine:EnableCrossAccountAccess'
Traceback (most recent call last):
  File "/var/task/generic_account_config.py", line 40, in lambda_handler
    step_functions.execute_statemachine()
  File "/opt/python/stepfunctions.py", line 48, in execute_statemachine
    self._start_statemachine()
  File "/opt/python/stepfunctions.py", line 66, in _start_statemachine
    "error": self.error
  File "/var/runtime/botocore/client.py", line 320, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/var/runtime/botocore/client.py", line 623, in _make_api_call
    raise error_class(parsed_response, operation_name)

This happened when I was bootstrapping the deployment account for the first time. In the step functions, once it determined whether it was a create or update, instead of checking if it's a deployment account it jumped to the ExecutingDeploymentAccountStateMachine step, and this is where it failed. I was not able to find the actual root cause. I tried to move the deployment account to root and back with the same result. I did not try any other account type. Unfortunately I do not have the screenshot.

Fix? I wouldn't call it a fix, but after I deleted the ADF stack from my master account plus the parameter store entries and reinstalled it, it all seemed to work. Any idea what happened?

Rename bootstrap_repository/global.yml to example-global.yml

Hi @bundyfx

Would it be an idea to rename the bootstrap_repository/global.yml to bootstrap_repository/example-global.yml ? This would give the proper signal to users of ADF that they are supposed to tweak global.yml to their own requirements, similar to what you do with the adfconfig.yml file.

This would also make it easier for us to upgrade our adf environment to later versions.

Cheers,
Thiezn

Strategy for pulling in upstream changes?

According to the documentation the src/bootstrap_repository and src/pipelines_repository folders should be pushed to CodeCommit as standalone repositories, however I'm wondering if you have any recommended git strategy for pulling in changes from upstream (this repository)?

Ideally the strategy should make it easy to know which revision of the upstream (this repo) that the standalone versions of bootstrap_repository and pipelines_repository are based off, and be transparent about whether they have diverged from the upstream. Achieving this with the current requirement to split folders into separate git repositories seems to require pretty advanced git usage (e.g. subtree or filter-branch), and as a result it feels more error-prone and less user friendly.

I think the easiest solution would be that bootstrap_repository and pipelines_repository were just mirrors of the complete repository, and the Python code/CodePipeline was adapted to run code from the respective subdirectories. This would allow a much more straight forward git strategy:

  • Fork aws-deployment-framework, clone the fork and set upstream.
  • Add remotes for the CodeCommit repositories for bootstrap_repository and pipelines_repository.
  • Changing e.g. deployment_map.yml happens in the fork, and is pushed out to CodeCommit.
  • New changes in upstream can be pulled in via merge or rebase, and then pushed out to CodeCommit.

Would this work, or do you have a solution that works with the current structure?

Custom tags on Pipe CF stacks

I would like to create tags on my repos for pipes and apps created by ADF. I'd like to use them to control access, split cost, etc.

I’m thinking something like this should work.

pipelines:
  - name: blue-example
    type: cc-cloudformation
    tags:
        - Service: Blue
        - Owner: Me
    params:
      - SourceAccountId: 0987654321
    targets:
      - /qa
      - /prod

Split deployment map into sub files

One single file for the deployment map has several issues.

  1. Single point of failure. All teams and developers push to the same yml config file. This is fragile as a single malformed entry in the yml can break the pipeline creation for all teams and introduces cross team blocker situations.

  2. Hard to automate adding new repositories and pipelines. Ultimately I would like to see a single click creation of git repo and pipeline. If each app could have its own deployment_map file then this would be much easier to script.

PyYAML 5 conflicts with aws-sam-cli

In the Python requirements file src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/requirements.txt, aws-sam-cli conflicts with pyyaml~=5.1.
Maybe it would be OK to use PyYAML 3.13 instead?

Cross account webhooks for CodeCommit as opposed to polling

Make the CodeCommit Role in source accounts optional and allow for Cloudwatch Events to trigger the pipeline in the deployment account as opposed to using Polling against the CodeCommit repo in the source accounts from the deployment account.

Move CloudTrail Creation into Custom Resource

When setting up ADF, the CloudTrail creation process should be automated via a custom resource that checks for an existing cross-region trail and, if it exists, continues, else creates it. There should be no need to manually create a trail for the MoveAccount event to trigger the ADF bootstrap step function.
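A minimal sketch of the check such a custom resource could perform (boto3 calls only; the trail name and bucket are placeholders):

import boto3

cloudtrail = boto3.client("cloudtrail")

def ensure_multi_region_trail(bucket_name):
    # Reuse an existing multi-region trail if one is present, otherwise create one.
    trails = cloudtrail.describe_trails()["trailList"]
    if any(trail.get("IsMultiRegionTrail") for trail in trails):
        return
    cloudtrail.create_trail(
        Name="adf-cross-region-trail",
        S3BucketName=bucket_name,
        IsMultiRegionTrail=True,
        IncludeGlobalServiceEvents=True,
    )
    cloudtrail.start_logging(Name="adf-cross-region-trail")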

deployment_map.yml actions and top level regions

When using actions: replace_on_failure and specifying a top-level region in a pipeline, the pipeline will not render the replace_on_failure action. The j2 template needs to be updated for this to work as intended.

Remove ssm params when removing base stacks

Currently, when the remove_base option is set, moving an account to the root OU will remove the base stack; however, to completely clean ADF resources from an account we need to manually go into the account and remove some SSM params.

Can the logic which removes the base stacks be extended to remove these SSM params as well? This could perhaps be done by adding Tags to the params which can be filtered on.

This would significantly simplify the process to re-bootstrap accounts and move them between OUs
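If the parameters were tagged at creation time, the cleanup could look something like this (the tag key and value are assumptions):

import boto3

ssm = boto3.client("ssm")

def delete_adf_parameters():
    # Collect the names of all parameters carrying the ADF marker tag.
    paginator = ssm.get_paginator("describe_parameters")
    names = []
    for page in paginator.paginate(
        ParameterFilters=[{"Key": "tag:CreatedBy", "Values": ["ADF"]}]
    ):
        names.extend(param["Name"] for param in page["Parameters"])
    # DeleteParameters accepts at most 10 names per call.
    for start in range(0, len(names), 10):
        ssm.delete_parameters(Names=names[start:start + 10])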

Allow OU targets in deployment_map.yml that do not (yet) contain accounts

Hi,

Would it be possible to allow an OU that does not (yet) contain any accounts to be provided as a target in the deployment_map.yml in the pipelines repository? This could be helpful when introducing new OUs to the AWS organisation.

I noticed that the pipeline aws-deployment-framework-pipelines running in the deployment account gives the below error during the build phase:

========================== 22 passed in 0.31 seconds ===========================
 [Container] 2019/03/14 16:13:33 Running command python ./adf-build/generate_pipelines.py
Traceback (most recent call last):
 File "./adf-build/generate_pipelines.py", line 147, in <module>
 main()
 File "./adf-build/generate_pipelines.py", line 143, in main
 cloudformation.create_stack()
 File "/codebuild/output/src007560056/src/adf-build/shared/python/cloudformation.py", line 254, in create_stack
 create_change_set = self._create_change_set()
 File "/codebuild/output/src007560056/src/adf-build/shared/python/cloudformation.py", line 191, in _create_change_set
 ChangeSetType=self._get_change_set_type())
 File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
 return self._make_api_call(operation_name, kwargs)
 File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
 raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ValidationError) when calling the CreateChangeSet operation: [/Resources/Pipeline/Type/Stages/3/Actions] 'null' values are not allowed in templates
 [Container] 2019/03/14 16:13:37 Command did not exit successfully python ./adf-build/generate_pipelines.py exit status 1
[Container] 2019/03/14 16:13:37 Phase complete: BUILD Success: false
[Container] 2019/03/14 16:13:37 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: python ./adf-build/generate_pipelines.py. Reason: exit status 1

My deployment_map.yml file is similar to the below, where the /development OU does have an account, but the /production OU doesn't:

pipelines:
  - name: sample-vpc # The name of your pipeline (This will match the name of your repository)
    type: cc-cloudformation # The pipeline_type you wish to use for this pipeline
    params:
      - SourceAccountId: 111111111111
      - NotificationEndpoint: [email protected] # The Notification (user/team/slack) responsible for this pipeline
    targets: # Deployment stages
      - path: /development
        regions: eu-west-1
      - path: /production
        regions: eu-west-1

I've briefly reviewed the stacktrace and I suspect my deployment_map.yml will trigger the following code and fill the self.target_structure.account_list with accounts if there are any.

Perhaps we can check if the list is empty somewhere here to avoid creating a CloudFormation stack if there are no accounts?

I have to say I'm very new to adf so might be completely off here in my assumptions, and doing something else wrong that causes this issue.

Kind regards,
Thiezn

Rename base stacks to follow global-base naming structure

Currently when bootstrapping an AWS Account it will receive the base global stack (and optional regional stacks). This stack is named based on the OU the account was moved into, not the level at which the account applied its specific base template. A larger change here would push for the idea that the base stacks are all named the same; this allows accounts to be moved from OU to OU and lets base stacks update in place, as opposed to having to move accounts back to the root to remove the stack and then into some other OU for a new base stack. We should investigate how to best make this more transparent and streamlined.

Parallel generation of pipelines

When the deployment map is processed sequentially, adding pipelines makes it slower and slower to get a new pipeline generated. Parallelising the process would be appreciated in order to increase creation speed.
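One way to approach it is sketched below; generate_pipeline stands in for whatever per-pipeline work is done today:

from concurrent.futures import ThreadPoolExecutor

def generate_all(pipelines, generate_pipeline, max_workers=10):
    # Generate pipelines concurrently instead of one after another.
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        # list() forces iteration so exceptions raised in workers surface here.
        list(executor.map(generate_pipeline, pipelines))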

Use ExternalId for all assume role calls

When ADF assumes any role, it should use a dynamically generated externalId that is automatically rotated to ensure ADF roles can never be assumed by anything other than ADF itself.
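For reference, passing an external ID on the assume-role call is a one-line change (the role ARN and the external ID value are placeholders; ADF would generate and rotate its own):

import boto3

sts = boto3.client("sts")

credentials = sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/adf-cloudformation-deployment-role",
    RoleSessionName="adf-deployment",
    ExternalId="generated-and-rotated-by-adf",  # placeholder value
)["Credentials"]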

Allow pipelines to target nested organisational units

Hi,

At the moment we can target a specific ou in the deployment_map.yml and roll out resources in all accounts within that ou. What would be great is if we can target accounts that are in a nested OU. Let me try to explain using some examples:

This would be our deployment map:

 pipelines:
    type: cc-cloudformation # The pipeline_type you wish to use for this pipeline
    params:
      - SourceAccountId: 112233
      - NotificationEndpoint: [email protected]
      - RestartExecutionOnUpdate: true
      - UpdateNestedOUs: true
    targets: 
      - path: /europe
        regions: eu-west-1

And this is our organisational structure:

- Root
   -- europe
        -> account0
        -- Development
            -> account1
            -> account2
        -- Production
            -> account3

At the moment the pipeline would only deploy directly into the europe OU, in this case account0. Would it be possible to get ADF to include account1, account2 and account3 as well?

Of course people might want to be able to tweak this behaviour, so perhaps introducing a param like UpdateNestedOUs: true/false could help with this.
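For what it's worth, recursing into child OUs is straightforward with the Organizations API; a sketch (not ADF code) could look like:

import boto3

organizations = boto3.client("organizations")

def accounts_in_ou(ou_id, recursive=True):
    # List account IDs directly under an OU, optionally including nested OUs.
    paginator = organizations.get_paginator("list_accounts_for_parent")
    accounts = [
        account["Id"]
        for page in paginator.paginate(ParentId=ou_id)
        for account in page["Accounts"]
    ]
    if recursive:
        ou_paginator = organizations.get_paginator("list_organizational_units_for_parent")
        for page in ou_paginator.paginate(ParentId=ou_id):
            for child in page["OrganizationalUnits"]:
                accounts.extend(accounts_in_ou(child["Id"], recursive=True))
    return accounts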

Cheers,
Thiezn

Deployment from SAR should fail if region is incorrect

When deploying via the SAR it's required that us-east-1 is used, as this is where Organizations events will be delivered to. If a user deploys into another region the stack will complete successfully, but MoveAccount events will never fire since Organizations won't deliver into that other region.

Add the ability to specify which region to extract an ssm param for in the parameter files

Currently you can use the resolve: keyword in the params .json files to resolve parameters that are stored in the SSM Parameter Store, however this only allows you to retrieve the parameter from the main deployment account region.

{
    "Parameters" : {
        "SomeParam":"resolve:/path/to/key"
    }
}

Having a way to specify or override the region (like below for example), would help with multi-region deployments.

{
    "Parameters" : {
        "SomeParam":"resolve:/**region**/path/to/key"
    }
}
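A sketch of how the resolver could honour an embedded region (the region detection and helper name are just for illustration):

import boto3

def resolve_param(value, default_region):
    # Resolve 'resolve:[/region]/path/to/key' against SSM Parameter Store.
    parts = value[len("resolve:"):].lstrip("/").split("/")
    region = default_region
    # Crude heuristic: treat a leading segment that looks like a region
    # (e.g. eu-west-1) as a region override.
    if parts and parts[0].count("-") == 2:
        region, parts = parts[0], parts[1:]
    ssm = boto3.client("ssm", region_name=region)
    return ssm.get_parameter(Name="/" + "/".join(parts))["Parameter"]["Value"]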

Document the "Delete Scenario"

When installing the new SAR deployed version of ADF I tried to uninstall my current version, and it's not easy to find all the stacks that need to be deleted in all the accounts and regions.

Also, once that was done I saw that there were still config parameters left in Parameter Store from the old install. These, I guess, are not created by CloudFormation so they become orphans. I would actually prefer that all ADF config parameters had a path starting with /adf.

So good documentation and a full implementation of an uninstall would be nice.

Tomas

Custom deploy stage

I would like to have the ability to run my own custom "deployspec.yml" so that I can run serverless.com, Terraform, or just deploy awslabs examples that don't use CloudFormation but just use the AWS CLI.

This could be done by creating a CodeBuild project that handles a deploy stage. Then I would have something like this in my repo:

  buildspec.yml
  deployspec.yml

Or, by adding config to the target in the deployment_map, you could have:

  buildspec.yml
  deployspec.yml
  deployspec-prod.eu-west-1.yml

Global vs regional

After I cloned the bootstrap repository I added a folder “sharedservice” with a “non-prod” subfolder. Here I added a global.yml and regional.yml file. The global has IAM resources and the regional has a few resources like an S3 access log bucket. In the future I will add stuff like CloudTrail enablement and so on.

I am able to bootstrap an account with global.yml but regional.yml is never included in the bootstrap process. Is there some reason for this?

Lars

Configure loglevel from adfconfig.yml

Currently the ADF bootstrap outputs different log levels. The default level is INFO and it can be customised in CodeBuild. It would be great if this could be configured directly in adfconfig.yml instead of going to CodeBuild.
