This project is a response to a cosmic anomaly affecting the Spaceship Titanic, an interstellar passenger liner. Launched with over 13,000 passengers aboard, the vessel was en route to three newly habitable exoplanets near Alpha Centauri when it collided with a spacetime anomaly hidden within a dust cloud, causing almost half of the passengers to be transported to an alternate dimension.
The project addresses the aftermath of this cosmic event by predicting the whereabouts of the transported passengers. While the fate of some passengers is known, a significant portion remains unaccounted for. The application uses a machine learning model to predict the whereabouts of these missing individuals.
In the year 2912, the Spaceship Titanic met a fate similar to that of its namesake a millennium earlier. Though the ship remained intact, an unexpected collision with a spacetime anomaly led to passengers being transported to an alternate dimension. The families of the passengers were left in distress, seeking information about their loved ones aboard the ill-fated voyage.
However, the communication link was severed before the crew could relay details about all the transported individuals. To alleviate the families' distress, this project was initiated. It utilizes available data about confirmed passengers' whereabouts and employs a predictive machine learning model to deduce the locations of those unaccounted for.
This project includes the following components:
- `app/app.py` - Contains the Lambda function code, including the machine learning inference logic.
- `app/Dockerfile` - Dockerfile used to build the container image.
- `app/model` - TensorFlow model used to predict the whereabouts of passengers after the cosmic anomaly, trained on the available data.
- `app/requirements.txt` - Pip requirements to be installed during the container build.
- `events` - Sample invocation events to test the function.
- `template.yaml` - AWS SAM template defining the application's AWS resources.
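The handler in app/app.py is not shown here, but a minimal sketch of what such a Lambda inference handler typically looks like is below. Everything in it is an assumption: the real function loads the TensorFlow model from app/model, while here a stub stands in for the model, and the feature names are illustrative, not taken from the source.

```python
import json

# Illustrative feature list; the actual model inputs may differ.
FEATURES = ["HomePlanet", "CryoSleep", "Age", "RoomService", "Spa"]

def predict_stub(features):
    # Placeholder for the TensorFlow model's predict() call;
    # returns a fixed probability-like score for demonstration.
    return 0.5

def handler(event, context):
    # API Gateway proxy integration delivers the request body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    missing = [f for f in FEATURES if f not in body]
    if missing:
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing fields", "fields": missing})}
    score = predict_stub([body[f] for f in FEATURES])
    return {"statusCode": 200,
            "body": json.dumps({"transported": bool(score >= 0.5)})}
```

The real handler follows the same contract: parse the proxy event, run inference, and return a `statusCode`/`body` pair that API Gateway can relay to the caller.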
This application uses several AWS resources, including Lambda functions and an API Gateway API. These resources are defined in the `template.yaml` file. You can update this template to add or modify AWS resources using the same deployment process.
To deploy the application for the first time, you'll need the following tools:
- SAM CLI - Install the SAM CLI
- Docker - Install Docker community edition
- Python 3 (for local testing) - Install Python 3
Run the following commands in your terminal:
$ sam build
$ sam deploy --guided
The `sam build` command builds the Docker image and copies your application's source into it. The `sam deploy` command packages and deploys the application to AWS, prompting you for a stack name, AWS region, confirmation before deployment, IAM role creation, and whether to save the configuration arguments.
The API Gateway Endpoint URL will be displayed in the output values after deployment.
Use the SAM CLI to build and test your application locally:
$ sam build
To invoke a single function locally with a test event:
$ sam local invoke InferenceFunction --event events/event.json
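The event file passed to `sam local invoke` is an API Gateway proxy event. The snippet below sketches what such an event might contain; the passenger fields and values are assumptions for illustration, so prefer the actual samples shipped in the events directory.

```python
import json

# Illustrative API Gateway proxy event; field values are assumptions.
event = {
    "httpMethod": "POST",
    "path": "/passenger",
    "headers": {"Content-Type": "application/json"},
    "body": json.dumps({"HomePlanet": "Europa", "CryoSleep": True, "Age": 34}),
}

# Write it out so it can be passed via --event.
with open("event.json", "w") as f:
    json.dump(event, f, indent=2)
```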
To emulate your application's API locally:
$ sam local start-api
$ curl http://localhost:3000/passenger
The `sam local start-api` command runs the API locally on port 3000.
Use `sam logs` to fetch logs generated by your deployed Lambda function:
$ sam logs -n InferenceFunction --stack-name "cosmic-rescue" --tail
Refer to the SAM CLI Documentation for more log filtering options.
To delete the deployed application, use the SAM CLI:
$ sam delete --stack-name "cosmic-rescue"