aws-transmutation

(EOL) AWS Transmutation CD Pipeline


A Modern Multi-Purpose Continuous Deployment Pipeline Template for AWS

๐Ÿง Feedback Welcome: Please submit issues about design quirks and problems!

Contributors Wanted: Help needed reviewing and adding features

Why Transmute?

Stop wasting time in the AWS web console and manage your own build + deployment straight from your project repo. Reuse the same pipeline template for all your projects. Never release broken code with integrated CD using AWS CodePipeline. Join the alchemists and focus on turning your user experience into gold!

  • Develop on GitHub
  • Automatic GitHub status updates
  • No-hassle Continuous Deployment solution
    • Configure pipelines once.
    • All project-specific build commands live in your repo. Use the AWS CLI for deployment!
  • Use the same pipeline template for individual branches
    • For example, configure for testing and run staging branch CI
    • For example, configure for deployment and run master branch CD
    • For example, configure CI on staging / trying and CD on master, and develop using a merge bot like Bors-NG. Your code only deploys when merged pull requests pass CI tests!
  • Keep your production environment safe by separating testing and production pipelines onto separate AWS accounts

The Developer's Dream

Write code and seamlessly automate testing and deployment.

Piece of cake with the AWS Transmutation Pipeline. Take a look at this example production setup:

  • Minimum-effort deployment from pull requests. Everything else is automatic!
  • Pull Request Continuous Integration with Bors-NG: Merges to master & deployments only happen when your tests succeed!
  • Never release broken builds! Full integration testing of the stack before deployment

Separate Testing and Production environments for safety

Every pipeline is a separate entity! Keep your deployment pipeline on a production AWS account and all your testing on accounts where you can afford to accidentally mess things up! All cross-communication happens within git, enabled by live status updates.

Live Development

Local development can be hit or miss. It's not always possible to perfectly replicate AWS on your local machine, and doing so accurately often requires paid tools like LocalStack.

Add the Transmutation template to your toolset. Develop locally, then push with confidence.

With a pipeline template, anyone can easily launch their own pipeline and automate test deployment. Any developer can set up a Transmutation pipeline on their own account and configure it to deploy a specific development git branch. Commit changes and wait for the build to succeed (or not)! Cheapen and simplify the way you develop.

Getting Started

  1. Fork the example repo located at https://github.com/MarcGuiselin/aws-transmutation-starter

    Make sure to keep it public, otherwise Bors-NG will ignore the project

  2. Create a GitHub OAuth token. Instructions here.

    When you get to the scopes/permissions page, you should select the "repo" and "admin:repo_hook" scopes

  3. This project will have CI on staging and trying (for Bors-NG) and CD on master for production, so we will create a pipeline for each. You can skip trying if you are not using a merge bot like Bors-NG.

    • Launch Transmutation Pipeline Stack for master using the button below.

      Launch Stack

      1. Click Next
      2. Rename Pipeline Configuration Name to my-transmutation-starter-master-pipeline
      3. Rename Deploy Stack Name to my-transmutation-starter-master-stack
      4. Select Stage prod
      5. Select Features Build > Deploy
      6. Input your GitHub OAuth Token
      7. Input the Repo Owner / Name for your forked repository
      8. Input master for your Branch
      9. We want to load the parameters for production deployment, so rename CloudFormation Template Configuration to prod-configuration.json
      10. Click Next
      11. Click Next again
      12. Acknowledge Access Capabilities
      13. Click Create stack
    • Launch Transmutation Pipeline Stack for staging using the button below.

      Launch Stack

      1. Click Next
      2. Rename Pipeline Configuration Name to my-transmutation-starter-staging-pipeline
      3. Rename Deploy Stack Name to my-transmutation-starter-staging-stack
      4. Select Features Build > Deploy > Integration > Cleanup
      5. Input your GitHub OAuth Token
      6. Input the Repo Owner / Name for your forked repository
      7. Input staging for your Branch
      8. Click Next
      9. Click Next again
      10. Acknowledge Access Capabilities
      11. Click Create stack
    • Launch Transmutation Pipeline Stack for trying using the button below. (optional)

      Launch Stack

      1. Click Next
      2. Rename Pipeline Configuration Name to my-transmutation-starter-trying-pipeline
      3. Rename Deploy Stack Name to my-transmutation-starter-trying-stack
      4. Select Features Build > Deploy > Integration > Cleanup
      5. Input your GitHub OAuth Token
      6. Input the Repo Owner / Name for your forked repository
      7. Input trying for your Branch
      8. Click Next
      9. Click Next again
      10. Acknowledge Access Capabilities
      11. Click Create stack
  4. The master (deployment) pipeline will build and deploy your project! Once the pipeline succeeds (this can take up to 5 minutes), find the outputs of the CloudFormation stack and visit HomepageUrl to see your project.

  5. Install Bors-NG (optional)

    You can also set up your own Bors-NG instance

  6. Clone your forked repo locally with git clone

  7. Let's make some changes to index.html. Make the project yours!

  8. Create a new-feature branch, commit, and push to it

    git branch new-feature
    git checkout new-feature
    git add src/index.html
    git commit
    git push --set-upstream origin new-feature
    
  9. Create a pull request on GitHub.

    1. Merge new-feature into master
    2. Describe your new feature
    3. If you are using bors-ng, then do not click the green Merge button. Comment bors r+ and wait for bors to run integration tests and merge!
    4. If you aren't using bors, then you can click the Merge button.
  10. Now that the merge passed integration tests and our changes finally made it to master, the deployment pipeline will automatically kick into gear! Once the pipeline succeeds (this can take up to 5 minutes), reload your project's homepage and you will see your changes!

Develop With Git Branches

Shift to the staging > master model of development. Develop on the staging branch and merge releases into master with pull requests. Bors-NG is a popular tool that lets you automate this.

When you are ready, create a pull request. Bors will run build, unit, and integration tests using an instance of the Transmutation Pipeline configured for integration testing. Upon successful completion of the tests, Bors will finalize the merge.

Once a new release makes it to master, the production pipeline will kick into gear and deploy your code to production.

Pro Tip: Although it isn't built into Transmutation (yet), it's wise to require manual confirmation of test deployments before they are cleaned up. Send an email to the head of your team, or post a status on GitHub so someone can manually confirm everything looks good before letting the pipeline succeed and deployment subsequently occur.

Why Bors?

"Bors is a GitHub bot that prevents merge skew / semantic merge conflicts, so when a developer checks out the main branch, they can expect all of the tests to pass out-of-the-box."

What that means for you is that your master branch will always contain working, deployment-ready code. You can then configure a Transmutation Pipeline to deploy changes to master once they pass all tests and are merged. Your users will never experience broken releases!
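In practice, Bors-NG is configured through a bors.toml file in the repo root. A minimal sketch is below; the status context name "Transmutation" is an assumption, since the actual name depends on how your pipeline reports its GitHub status:

```toml
# Hypothetical bors.toml: block merges until the pipeline's status check passes
status = ["Transmutation"]   # status context name is an assumption
timeout_sec = 3600           # give the pipeline up to an hour to finish
```

With this in place, commenting bors r+ on a pull request makes Bors wait for the listed status to succeed on the staging branch before merging.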

Recommended Project Structure

my-transmutation-repo
├ src, web, or whatever!  # Develop a static site at the top level, or organize every part of your project into folders to your liking
├ backend                 # Keep code for backend stuff like APIs, EC2 resources, and database creation templates here
│ ├ Serverless functions
│ ├ Database templates
│ └ ...
├ testing                 # Keep tests in their own folder
│ ├ Unit tests
│ └ Integration tests
├ template.yaml           # CloudFormation template for the whole project
├ build.yaml              # Build commands. For example: build the production static site, run unit tests
├ deploy.yaml             # Deploy commands. For example: upload S3 site content, update SQL database structure
├ integ.yaml              # Integration test commands. For example: perform API tests with Postman, test the website contact form
├ cleanup.yaml            # Cleanup after staging/testing. For example: empty S3 buckets so they can be deleted
└ prod-configuration.json # Configure CloudFormation parameters for production builds

Details

Dynamic Permissions

Transmutation does something a little naughty: it rewrites deployment (CodeBuild) permissions every time your stack is deployed. The PermissionsUpdate Lambda function reads all the resources created by your stack and writes new permissions granting CodeBuild access to those resources. This means you never have to rewrite your pipeline's permissions, even as your own CloudFormation stack grows!

Supported Resources

Help needed adding more! Submit a pull request.

Raw definitions for the supported resource types are in aws-arn-from-resource-id.js

| Compute       | Storage    | Database | Networking        |
|---------------|------------|----------|-------------------|
| EC2 (partial) | S3         | RDS      | VPC (partial)     |
| Lambda        | S3 Glacier | DynamoDB | CloudFront        |
|               | EFS        |          | Route53 (partial) |

Justification

There are a number of issues: major services like S3 and DynamoDB do not support ResourceTag-based global condition keys (see the listed resource-based policies here). In addition, some services like CloudFront do not have the tag aws:cloudformation:stack-name automatically populated when they are created, so a rule relying on this tag could not work. A script, however, can cover a much broader range of resources.

The way this script is implemented guarantees that no erroneous or overly permissive permissions are added. This policy ends up looking something like this:

{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "*",
        "Resource": [
            "arn:aws:cloudfront::123456789123:distribution/ABCDEFGHIJKLMN",
            "arn:aws:apigateway:us-east-1:123456789123:/apis/abcdefghij",
            "arn:aws:apigateway:us-east-1:123456789123:/apis/abcdefghij/*",
            "arn:aws:s3:::my-bucket",
            "arn:aws:s3:::my-bucket/*"
        ]
    }
}

Remember that these auto-generated permissions are only granted to the CodeBuild resources which run each of your pipeline yaml files. This obviously won't work for everyone, so you can disable Automatic Build Permissions when creating your pipeline, which removes this step.

Environment Variables

All pipeline steps are executed using AWS CodeBuild virtual containers that will run respective yaml files. Your commands will have access to the following environment variables:

  • PIPELINE_BUCKET = name of artifact bucket
  • PIPELINE_STAGE = either prod or test
  • PIPELINE_STACK_NAME = name of the stack we are deploying to
  • Load more from prod.env or test.env files with the command export $(cat $PIPELINE_STAGE.env | xargs)
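The .env loading trick above can be sketched as follows; the contents of prod.env are hypothetical, and this one-liner only works for simple KEY=VALUE pairs:

```shell
# Hypothetical prod.env file with simple KEY=VALUE pairs (no spaces in values)
cat > prod.env <<'EOF'
DOMAIN=example.com
API_STAGE=v1
EOF

# Same command as in the list above, with PIPELINE_STAGE set as the pipeline would
PIPELINE_STAGE=prod
export $(cat $PIPELINE_STAGE.env | xargs)

echo "$DOMAIN"    # → example.com
```

Note that values containing spaces or quotes will break this one-liner; for anything more complex, source the file instead.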

Configuring differences in prod and test deployments

Say you want to release your production to a real domain, but don't want to deploy test builds to that same domain as well!

Instead of hard coding values like a domain name in your template, use CloudFormation Parameters and Conditions. Then use test-configuration.json and prod-configuration.json files to set different parameters for test and prod builds. Transmutation is configured to optionally use these files during template changeset creation/execution.

For example, in aws-transmutation-starter, if you don't set Domain, the template will output the CloudFront distribution URLs for the static website and API. However, if you set a Domain, the template will configure Route53 for the domain and link the static site and API through it.
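As a sketch, a prod-configuration.json that sets the starter's Domain parameter might look like this (the domain value is hypothetical). The file follows CloudFormation's template configuration format, with parameter values under a top-level "Parameters" key:

```json
{
  "Parameters": {
    "Domain": "example.com"
  }
}
```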

Using template outputs

There are many ways to get outputs from your template, but the most straightforward method fetches them with the AWS CLI and exports them as environment variables, like so:

phases:
  install:
    commands:
      # Get output 'ApiUrl' as an environment variable
      - >
        export API_URL="$( \
          aws cloudformation describe-stacks \
            --stack-name $PIPELINE_STACK_NAME \
            --query "Stacks[0].Outputs[?OutputKey=='ApiUrl'].OutputValue" \
            --output text \
        )"
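The exported variable can then be used by later commands in the same buildspec, for instance a smoke test against the deployed API (the /health endpoint here is purely illustrative):

```yaml
phases:
  build:
    commands:
      # Hypothetical smoke test; fails the build if the API is unreachable
      - curl --fail "$API_URL/health"
```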

How do I change/delete my pipeline now that my code is in production?

Find your pipeline stack and delete it. This won't affect your production stack. You can then create your own custom pipeline or recreate a Transmutation Pipeline using the same name and branch.

Thanks

@honglu @jamesiri @jlhood and everyone else involved in the development of AWS SAM CodePipeline CD.

@jenseickmeyer's project github-commit-status-bot which heavily inspired the design of github-status-update.

Donate

๐Ÿป If you use or enjoy my work buy me a drink or show your support by staring this repository. Both are very appreciated!

License and Copyright

Please see the LICENSE for an up-to-date copy of the copyright and license for the entire project. If you make a derivative project from mine, please let me know! I'd love to see what people make with my work!

