

cfpack


A small CLI tool that helps you deal with huge CloudFormation templates by splitting them into multiple smaller templates. You can also use this tool to build shareable drop-in templates to reuse across your projects.



Installation

Install the package as a global dependency to use it with different projects:

npm i -g cfpack.js

You can also install it as a project dependency and add npm scripts if you need it for a specific project or want to run it as part of your CI/CD process. Just run the following command:

npm i cfpack.js --save-dev

Then you can create shortcuts in your package.json file:

{
	"name": "my-project",
	...
	"scripts": {
		"stack:build": "cfpack build",
		"stack:deploy": "cfpack deploy",
		"stack:delete": "cfpack delete"
	},
	...
}

Enable bash/zsh-completion shortcuts

If you want to enable bash/zsh completion shortcuts in your terminal, run the cfpack completion command and add the generated script to your .bashrc or .bash_profile (or .zshrc for zsh). You can do it by running the following command:

cfpack completion >> ~/.bashrc
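
Or, if you use zsh:

cfpack completion >> ~/.zshrc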

Once the completion script is added, log out and log back in so that your terminal reads the script and starts completing cfpack commands.

Getting Started

Before you start using this tool, you need to create a configuration file in the root of your project. Just run the cfpack init command and it will create a cfpack.config.js file with default settings. The file is fairly self-explanatory: it specifies a folder with template files, a path to the resulting template, CloudFormation stack information and the like. Amend the values if you need to change something.
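
For orientation, the generated file is shaped roughly like this (a sketch: the entry and output property names here are assumptions based on the tool's behavior, so check your generated file for the exact keys):

module.exports = {
    // folder containing your CloudFormation sub-templates (assumed key name)
    entry: "templates",
    // path where the combined template will be written (assumed key name)
    output: "build/template.json",
    // CloudFormation stack settings used by the deploy command
    stack: {
        name: "my-stack",
        region: "us-east-1",
        params: {}
    }
};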

You also need to make sure that you have AWS credentials on your machine. The easiest way to set them up is to install the AWS CLI tool and run the aws configure command to enter your access key id and secret access key. These credentials will be used by the AWS Node.js SDK to work with your CloudFormation stack when you deploy or delete it.
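
For example:

aws configure
# AWS Access Key ID [None]: AKIA...
# AWS Secret Access Key [None]: ...
# Default region name [None]: us-east-1
# Default output format [None]: json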

Build Templates

Once everything is ready, you can split your original CloudFormation template into multiple smaller templates and put them into the entry folder. For example, if you have a template that declares CodeBuild, CodePipeline, S3 bucket, AWS Lambda and DynamoDB resources along with the appropriate IAM resources, then you can create the following templates in the entry folder and split the resources between them:

  • build.yaml - will contain CodeBuild and CodePipeline resources
  • compute.yaml - will contain AWS Lambda resources
  • database.yaml - will contain DynamoDB tables
  • storage.yaml - will contain S3 buckets
  • roles.yaml - will contain IAM roles and policies

If you have parameters, outputs, metadata, mappings and/or conditions in your original template, they can also be split between the different templates. Just use your judgment to decide what should go where.

Just keep in mind that whenever you create a "sub-template", it has to adhere to the standard template format and be valid from CloudFormation's point of view.
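
For instance, a minimal storage.yaml sub-template could look like this (a sketch with made-up resource and bucket names):

AWSTemplateFormatVersion: "2010-09-09"
Resources:
  AssetsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-project-assets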

Commands

The package provides five commands: init, build, deploy, artifacts and delete. These commands are pretty much self-explanatory, but let's take a look at each of them.
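
In short:

cfpack init       # create cfpack.config.js in the current directory
cfpack build      # combine sub-templates into the final template
cfpack deploy     # build, then create or update the stack
cfpack artifacts  # upload artifacts defined in the config
cfpack delete     # delete the stack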

Init

The init command initializes a configuration file in the current working directory. Just run cfpack init and a new cfpack.config.js file will be created in the folder. Note that it will overwrite an existing config file if you have already created one.

Build

The build command loops through the entry folder, finds all files in it, reads the templates and composes the final template, which is saved at the location specified in the config file. The command understands both JSON and YAML templates, and uses the JSON format for the final template.
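
For example, if storage.yaml declares an S3 bucket and roles.yaml declares an IAM role, the final template contains both under a single Resources section, roughly like this (a simplified sketch):

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AssetsBucket": {
            "Type": "AWS::S3::Bucket"
        },
        "LambdaExecutionRole": {
            "Type": "AWS::IAM::Role",
            ...
        }
    }
}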

Deploy

The deploy command runs the build task first to create the resulting template and then uses it to create or update the CloudFormation stack via the AWS Node.js SDK. The command checks whether the stack already exists to determine which action is required. It will also upload artifacts, if you define them in the cfpack.config.js file, so that the CloudFormation stack can properly provision resources like Lambda functions or AppSync GraphQL APIs and resolvers.

Artifacts

The artifacts command allows you to upload artifacts that are defined in the config file. You can define as many artifacts as you need.

Delete

Finally, the delete command just checks whether the stack exists and then calls the API to delete it.

Config file

Parameters

As mentioned above, the config file is fairly self-explanatory. It allows you to define your stack information and provide additional details like parameters or capabilities. If your template uses input parameters, you can define them in the config file in the stack > params section as shown below:

module.exports = {
    ...
    stack: {
        name: "my-stack",
        region: "us-east-1",
        params: {
            ...
            Parameters: [
                {
                    ParameterKey: 'key1',
                    ParameterValue: 'valueA'
                },
                {
                    ParameterKey: 'key2',
                    ParameterValue: 'valueB'
                }
            ]
        }
    }
};

If your parameters contain sensitive data that you can't commit to your repository, consider using environment variables and the dotenv package to load them. Install the package, create a .env file and define the values that you want to use. Then update your cfpack.config.js file:

# .env file
KEY1_VALUE=valueA
KEY2_VALUE=valueB

// cfpack.config.js

require('dotenv').config();

module.exports = {
    ...
    stack: {
        name: "my-stack",
        region: "us-east-1",
        params: {
            ...
            Parameters: [
                {
                    ParameterKey: 'key1',
                    ParameterValue: process.env.KEY1_VALUE
                },
                {
                    ParameterKey: 'key2',
                    ParameterValue: process.env.KEY2_VALUE
                }
            ]
        }
    }
};

Artifacts

If your templates have resources (like Lambda functions, AppSync GraphQL schemas or resolvers, etc.) that rely on artifacts located in an S3 bucket, then you can define which files need to be uploaded during the deployment process.

Let's consider that you have a template like this:

Resources:
  Schema:
    Type: AWS::AppSync::GraphQLSchema
    Properties:
      ApiId: ...
      DefinitionS3Location: s3://my-bucket/graphql/schema.graphql
  ResolverA:
    Type: AWS::AppSync::Resolver
    Properties:
      ApiId: ...
      DataSourceName: ...
      TypeName: typeA
      FieldName: field1
      RequestMappingTemplateS3Location: s3://my-bucket/graphql/resolvers/typeA/field1/request.txt
      ResponseMappingTemplateS3Location: s3://my-bucket/graphql/resolvers/typeA/field1/response.txt
  ResolverB:
    Type: AWS::AppSync::Resolver
    Properties:
      ApiId: ...
      DataSourceName: ...
      TypeName: typeB
      FieldName: field2
      RequestMappingTemplateS3Location: s3://my-bucket/graphql/resolvers/typeB/field2/request.txt
      ResponseMappingTemplateS3Location: s3://my-bucket/graphql/resolvers/typeB/field2/response.txt
  LambdaFunctionA:
    Type: AWS::Lambda::Function
    Properties: 
      Handler: index.handler
      Role: ...
      Code: 
        S3Bucket: my-bucket
        S3Key: lambdas/function-a.zip
      Runtime: nodejs8.10
      ...
  LambdaFunctionB:
    Type: AWS::Lambda::Function
    Properties: 
      Handler: index.handler
      Role: ...
      Code: 
        S3Bucket: my-bucket
        S3Key: lambdas/function-b.zip
      Runtime: nodejs8.10
      ...

And the structure of your project looks like this:

/path/to/your/project
  ├─ package.json
  ├─ package-lock.json
  ├─ cfpack.config.js
  ├─ graphql
  │   ├─ schema.graphql
  │   └─ resolvers
  │       ├─ typeA
  │       │   ├─ field1
  │       │   │   ├─ request.txt
  │       │   │   └─ response.txt
  │       │   └─ ...
  │       ├─ typeB
  │       │   ├─ field2
  │       │   │   ├─ request.txt
  │       │   │   └─ response.txt
  │       │   └─ ...
  │       └─ ...
  ├─ lambdas
  │   ├─ functionA
  │   │   ├─ src
  │   │   ├─ node_modules
  │   │   ├─ package.json
  │   │   └─ package-lock.json
  │   └─ functionB
  │       ├─ src
  │       ├─ node_modules
  │       ├─ package.json
  │       └─ package-lock.json
  └─ ...

Then you can update the configuration file to upload all the artifacts like this:

module.exports = {
    ...
    stack: {
        name: "my-stack",
        region: "us-east-1",
        params: {
            ...
        },
        artifacts: [
            {
                bucket: "my-bucket",
                files: {
                    "graphql/": {
                        baseDir: "graphql",
                        path: "**/*"
                    },
                    "lambdas/function-a.zip": {
                        baseDir: "lambdas/functionA",
                        path: "**/*",
                        compression: "zip"
                    },
                    "lambdas/function-b.zip": {
                        baseDir: "lambdas/functionB",
                        path: "**/*",
                        compression: "zip"
                    }
                }
            }
        ]
    }
};

Note that the bucket must already exist for artifacts to be uploaded successfully. This means you can't declare the artifacts bucket itself in your CloudFormation template, because artifacts need to be uploaded before your stack is created.
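
If needed, you can create the bucket up front with the AWS CLI:

aws s3 mb s3://my-bucket --region us-east-1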

IAM roles

Note that if your CloudFormation template contains IAM roles or policies, you must explicitly acknowledge that it requires certain capabilities in order for AWS CloudFormation to create or update the stack. To do so, add Capabilities to your config file as shown below (use CAPABILITY_NAMED_IAM instead if your template gives IAM resources custom names):

module.exports = {
    ...
    stack: {
        name: "my-stack",
        region: "us-east-1",
        params: {
            ...
            Capabilities: ['CAPABILITY_IAM'],
            ...
        }
    }
};

Contribute

Want to help or have a suggestion? Open a new ticket so we can discuss it, or submit a pull request.

LICENSE

The MIT License (MIT)


cfpack's Issues

Add ability to upload template to S3

I'm running into an issue as my template has apparently exceeded the limit for submitting it via the command line. I'm getting the following error when trying cfpack build and cfpack deploy:

Member must have length less than or equal to 51200

It would be fantastic to be able to upload the template file to S3 and then use that in the cloudformation deployment process.

Use Previous Value for parameters?

It doesn't appear that you can run a deployment without always updating parameters' values. I have a stack that has Lambda functions updating parameters from events (forcing a redeployment) and want the values that Lambda has updated to be retained when I deploy.

If I have an empty value for 'ParameterValue' in the cfpack.config file, I get an error that "Need to specify either usePreviousValue as true or a value for the parameter" - it would be super useful if there was an option in the config we could use to set this flag to true.
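
Since the params section appears to be passed straight through to the AWS SDK, setting the SDK's UsePreviousValue flag per parameter in cfpack.config.js may work as a workaround (an untested assumption, not a confirmed feature):

module.exports = {
    ...
    stack: {
        ...
        params: {
            Parameters: [
                {
                    ParameterKey: 'key1',
                    // untested assumption: keep the stack's current value
                    UsePreviousValue: true
                }
            ]
        }
    }
};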

Artifacts command uploads with doubled up path and backslashes for Windows 10

Hi there,

I've been having an issue with the artifacts command for a while, so I decided to debug it and found that the code uses \ instead of / due to path.join. I only had a brief look, but from the docs I figured I'm doing it correctly; it just does not seem to work.

I tried using the zip compression, but when I download the artifact it gives me an error saying it's invalid.

Maybe I'm doing something wrong?

(Config and example output were attached to the issue as screenshots.)

I also changed this to not join location and file, as it otherwise doubles up the path (also shown in the screenshots).

Thanks
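
A likely fix, sketched here as an assumption based on this report rather than an actual patch, is to build S3 object keys with POSIX separators so they are OS-independent:

// S3 object keys must always use forward slashes, while path.join
// uses backslashes on Windows; path.posix.join avoids that.
const path = require('path');

const key = path.posix.join('lambdas', 'function-a.zip');
// => "lambdas/function-a.zip" on every platform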

Deploy doesn't update Lambdas when using artifacts

So I've been trying to deploy some Lambdas with this, using the artifacts approach, and I've discovered that while the first deploy works, updates to the stack don't work properly.

After doing some searching, I believe the S3Key needs to be rotated in the CloudFormation template, but this would also be the case for the cfpack.config.js. Is there a way to introduce variables to trigger some sort of automatic rotation?

false-positive JSON size error from AWS

Hi!

When validating templates as part of the 'build' task, the built template JSON gets prettified, increasing its size considerably. I noticed that for some of my larger templates it may grow by a factor of 2. The obvious fix would be to remove the two extra parameters from the call to JSON.stringify(), but that would leave us with pretty ugly error messages should validation fail for reasons other than size. Perhaps conditionally run it without these options for larger templates?

Thanks!

-- Arjan

see:

cloudformation.validateTemplate({ TemplateBody: JSON.stringify(this.output.template, '', 4) }, (err) => {
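
A minimal sketch of the suggested fix (an assumption about the change, not the project's actual patch) would pretty-print only while a doubled payload still fits under the limit:

const compact = JSON.stringify(this.output.template);
// Pretty-printing was observed to roughly double the size, so only
// pretty-print when even a doubled payload stays under the
// 51,200-byte TemplateBody limit.
const TemplateBody = compact.length * 2 < 51200
    ? JSON.stringify(this.output.template, null, 4)
    : compact;

cloudformation.validateTemplate({ TemplateBody }, (err) => {
    // ...
});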

YAMLException with !Sub

I seem to get an error trying to process my CloudFormation template, related to the !Sub tag.

├─ Error processing /home/iw651/src/prototypes/esg-app-cms/cloudformation/cf/s3.yml template: YAMLException: unknown tag !<!Sub> at line 18, column 7:
│ PublicAccessBlockConfiguration:

The YAML I'm trying to process looks like this:

---
AWSTemplateFormatVersion: "2010-09-09"
Transform: "AWS::Serverless-2016-10-31"

Parameters:
  EnvType:
    Type: String
    Default: "dev"

Resources:
  StrapiCMSAssetBucket:
    Type: "AWS::S3::Bucket"
    DeletionPolicy: Retain
    Properties:
      BucketName: !Sub
        - "my-test-strapi-assets-${EnvType}"
        - EnvType: !Ref EnvType
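
For context, this error usually means the YAML parser (js-yaml) has no schema entry for CloudFormation's short-form intrinsic tags. A custom type along these lines teaches it the sequence form of !Sub used above (a sketch against the js-yaml v4 API, not cfpack's actual code; each short-form tag and kind needs its own entry):

const fs = require('fs');
const yaml = require('js-yaml');

// Expand the short-form !Sub tag into the equivalent long-form
// { "Fn::Sub": ... } object that CloudFormation also accepts.
const subSequence = new yaml.Type('!Sub', {
    kind: 'sequence', // matches the `!Sub [template, { vars }]` form above
    construct: (data) => ({ 'Fn::Sub': data }),
});

const schema = yaml.DEFAULT_SCHEMA.extend([subSequence]);
const template = yaml.load(fs.readFileSync('cf/s3.yml', 'utf8'), { schema });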

Support uploading of local artifacts

It is possible to upload local artifacts to s3 prior to a deploy using the aws cli package command.

Currently when I use cfpack on a template that makes use of this mechanism I get the following error(s):

[19:59:18] cfpacktest CREATE_IN_PROGRESS Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 5. Resource with id [CognitoCustomMessageLambda] is invalid. 'CodeUri' is not a valid S3 Uri of the form "s3://bucket/key" with optional versionId query parameter. Resource with id [CreateProfileLambda] is invalid. 'CodeUri' is not a valid S3 Uri of the form "s3://bucket/key" with optional versionId query parameter. Resource with id [FetchQueryResultSingleAuthenticatedLambda] is invalid. 'CodeUri' is not a valid S3 Uri of the form "s3://bucket/key" with optional versionId query parameter. Resource with id [SendTemplatedEmailLambda] is invalid. 'CodeUri' is not a valid S3 Uri of the form "s3://bucket/key" with optional versionId query parameter. Resource with id [SystemInformationLambda] is invalid. 'CodeUri' is not a valid S3 Uri of the form "s3://bucket/key" with optional versionId query parameter.

cfpack would need to somehow invoke the package command (or equivalent) during its build stage.

Edit: Looking into the source code of the aws-cli program, package is just a python script that uploads local artifacts to s3 and then modifies the input template. The only options appear to be invoking this command from cfpack or reimplementing it within cfpack. 😢
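
For reference, the aws-cli mechanism referred to above is:

aws cloudformation package \
    --template-file template.yaml \
    --s3-bucket my-bucket \
    --output-template-file packaged.yaml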
