This tool builds, tests, and deploys an AWS Lambda function behind an API Gateway that returns a specific response, in this case:
{
"message": "Automation for the People",
"timestamp": "<current_epoch>"
}
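A minimal sketch of what such a handler might look like in Python (the real function lives in this repo and may differ; the proxy-integration envelope shown here is an assumption):

```python
import json
import time

def handler(event, context):
    """Return the canned message plus a whole-second epoch timestamp."""
    body = {
        "message": "Automation for the People",
        "timestamp": str(int(time.time())),  # epoch seconds, no milliseconds
    }
    # Shape expected by API Gateway's Lambda proxy integration
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```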
This tool requires the following prerequisites:
- Linux, macOS, or a compatible Bash-enabled OS
- Docker
- Docker Compose
- Git CLI
- An AWS account with the credential chain initialized.
  - For more info, see: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
- An S3 bucket
  - It doesn't have to exist yet, but the name specified must be globally unique.
  - This tool will create the bucket for you if it doesn't already exist.
If you'd rather skip the steps below and do it the easy way, feel free to run this:
# EXAMPLE: ./up.sh my-bucket v0.0.5
./up.sh <my_s3_bucket_name> <my_version>
To destroy and cleanup your environment, simply run:
./down.sh
This tool uses docker-compose to simplify some of the standup orchestration.
To get started, clone this repo and run a compose build, like this:
mkdir special-responder && cd special-responder
git clone https://github.com/stelligent/miniproject-LUCAS-MICHAEL.git .
docker-compose build
Like my grandma used to say: If you're not unit testing, it's broken.
To run the full suite of unit tests:
docker-compose run test
This uses Python's built-in "unittest" library to run a few basic tests, including:
- Testing the instantiation of the class.
- Testing the logic that allows the injection of a custom message.
- Asserting that the timestamp is epoch to the second without milliseconds.
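For illustration, tests along those lines might look like this (the class name `Responder` and its interface are hypothetical stand-ins, not the repo's actual names):

```python
import time
import unittest

class Responder:
    """Hypothetical stand-in for the class under test."""
    def __init__(self, message="Automation for the People"):
        self.message = message

    def build(self):
        return {"message": self.message, "timestamp": str(int(time.time()))}

class TestResponder(unittest.TestCase):
    def test_instantiation(self):
        self.assertIsInstance(Responder(), Responder)

    def test_custom_message_injection(self):
        self.assertEqual(Responder("hi there").build()["message"], "hi there")

    def test_timestamp_is_whole_second_epoch(self):
        ts = Responder().build()["timestamp"]
        self.assertTrue(ts.isdigit())  # digits only: no "." and no milliseconds
```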
As stated in the prerequisites, an S3 bucket is needed to store the codebase. Let's copy env.sh.template to env.sh and make some changes.
cp env.sh.template env.sh
Put the name of the S3 bucket in "env.sh" at the line with the following:
export S3_BUCKET_NAME="<bucket_name>"
# EXAMPLE: export S3_BUCKET_NAME="my-bucket"
This will actually create the bucket if it doesn't exist, but it's better to make sure that the bucket exists first.
Then you can set the version number.
export TF_VAR_function_version="<version>"
# EXAMPLE: export TF_VAR_function_version="v0.0.1"
Once that is complete, source the env file to load your AWS credentials into the environment so Terraform can access them through docker-compose's environment-variables integration.
source env.sh
There is a nice little fixture to simplify uploading the Lambda code to S3.
docker-compose run upload
You can optionally supply a version number for this deployment as an argument. If you do, be sure the same version is set in "env.sh" or written statically into "deployment/main.tf".
docker-compose run upload v0.0.1
This tool uses Terraform to provision the AWS resources needed to run the stack, including:
- API Gateway API, resources, stages, etc.
- AWS Lambda function
- IAM roles and policies for execution and invocation
First, change directory to ./deployment/ and edit the main.tf there as needed. Follow the comments and directions in the file. This main.tf is the root file that terraform will reference in the next step.
docker-compose run deploy
Terraform should return an output for each "module" stanza in "deployment/main.tf". Each of these outputs, should the odds be ever in our favor, will be a functional endpoint returning the desired response.
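One quick way to smoke-test an endpoint from those outputs (the URL below is a placeholder; substitute whatever Terraform printed):

```python
import json
import urllib.request

def validate_response(raw):
    """Confirm the JSON payload matches the expected shape."""
    payload = json.loads(raw)
    assert payload["message"] == "Automation for the People"
    assert payload["timestamp"].isdigit()  # epoch seconds, no milliseconds
    return payload

def smoke_test(url):
    """Fetch the deployed endpoint and validate its response."""
    with urllib.request.urlopen(url) as resp:
        return validate_response(resp.read())

# EXAMPLE (placeholder URL -- use the one from Terraform's output):
# smoke_test("https://abc123.execute-api.us-east-1.amazonaws.com/prod")
```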
You can also optionally run ANY terraform command with docker-compose like so:
docker-compose run terraform destroy
I hope you had fun reviewing my fun little thingamajig. Feel free to drop any questions or improvements as issues on the repo!