- Create a project to contain resources used in this demo, or re-use one that you've used previously.
- Open GCP Cloud Shell with the SDK pointed at the demo project
- Run the following to ensure key services are enabled:

  ```
  export PROJECT_ID=$(gcloud config get-value project)
  gcloud services enable sqladmin.googleapis.com --quiet
  gcloud services enable pubsub.googleapis.com --quiet
  gcloud services enable compute.googleapis.com --quiet
  gcloud services enable cloudresourcemanager.googleapis.com --quiet
  sleep 60
  ```
  This will take 5-8 minutes to complete. The `sleep` at the end avoids timing problems where the APIs won't respond immediately after being enabled.
- Run the following to get the current version of the demo into your home directory:

  ```
  cd ~
  rm -rf ce-demo-lms
  git clone https://github.com/jwdavis/ce-demo-lms.git
  cd ~/ce-demo-lms/terraform
  ```
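- Optionally, instead of relying on the fixed `sleep 60` above, you can poll until a newly enabled API actually responds. A minimal sketch; the `wait_for` helper, the retry counts, and the `gcloud services list` probe are my assumptions, not part of the demo:

  ```shell
  # wait_for: retry a probe command until it succeeds or attempts run out.
  # Usage: wait_for <max_tries> <delay_seconds> <command...>
  wait_for() {
    max="$1"; delay="$2"; shift 2
    tries=0
    until "$@"; do
      tries=$((tries + 1))
      [ "$tries" -ge "$max" ] && return 1
      sleep "$delay"
    done
  }

  # e.g. wait up to ~5 minutes for the Cloud SQL Admin API to show as enabled:
  # wait_for 30 10 sh -c 'gcloud services list --enabled --format="value(name)" | grep -q sqladmin'
  ```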
- Run the following to populate some Terraform variables. Make sure to replace the placeholders (including the <>) with real values:

  ```
  export TF_VAR_SUP_PASS=<sup_pass>
  export TF_VAR_SQL_PASS=<sql_pass>
  export TF_VAR_SQL_SUFFIX=$(date +%Y%m%d%H%M%S)
  export TF_VAR_project=$PROJECT_ID
  ```
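- Before running the build, it can help to fail fast if any of those variables is unset or empty. A small sketch; the `check_tf_vars` helper name is mine, not part of the demo:

  ```shell
  # check_tf_vars: report any required TF_VAR_* that is unset or empty.
  check_tf_vars() {
    missing=0
    for v in TF_VAR_SUP_PASS TF_VAR_SQL_PASS TF_VAR_SQL_SUFFIX TF_VAR_project; do
      eval "val=\${$v}"
      if [ -z "$val" ]; then
        echo "missing: $v" >&2
        missing=1
      fi
    done
    return "$missing"
  }

  check_tf_vars || echo "set the variables above before running terraform apply"
  ```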
- Run the following to do the Terraform build:

  ```
  terraform init
  terraform apply -auto-approve
  ```
- Installation will take about 20 minutes to complete (Cloud SQL takes a long time to create a primary and 2 read replicas). Also, after Terraform shows it's done, it may still take 5+ minutes for the load balancer to settle down.
- Open a browser pointed at the load balancer IP (shown after setup completes) and validate the app is running
- Show home page
- Show modules
- Show a module with a video playing
- Show create module
- walk them through diagram
- Note that MIGs are autoscaling 1-10
- NGINX servers scale based on LB load
- Transcoding servers scale based on CPU load
- Primary Cloud SQL instance is HA
- There are Cloud SQL read replicas in other regions
- App is written to read from local replica, write to primary
- When video is uploaded, app sends pubsub message to topic
- Transcoding app reads messages about uploads and processes them
- Servers speak to Cloud SQL using the Cloud SQL Auth Proxy
- Optionally, you can call out additional details
- Custom subnet network is created for solution
- Firewall rules only allow HTTP traffic from Google load balancers and health checks
- Cloud SQL Admin API is enabled during setup
- What might you do differently?
- global ip
- backend service page (note cdn)
- backend bucket page (note cdn)
- url map page
- show the three web instance groups and the one transcode group
- show autoscaling setup
- SSH into test machines in us, europe, asia
- generate load from three regions (the command customized for your lb IP is shown in cloud shell)
- show them what's happening using the LB monitoring page.
- it takes a while for the page to update
- hopefully, it shows traffic from each source going to different backends
- Google Cloud Monitoring dashboard for load balancing can also be fun
- Google Cloud Monitoring dashboard for Cloud SQL can be fun
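- If the prepared command isn't handy, a generic concurrent-request loop can stand in for it. A sketch only; the real command (customized for your LB IP) is the one shown in Cloud Shell, and `curl`, the request count, and the `<LB_IP>` placeholder here are my assumptions:

  ```shell
  # run_load: fire N concurrent invocations of a command, then wait for all.
  run_load() {
    n="$1"; shift
    i=0
    while [ "$i" -lt "$n" ]; do
      "$@" &
      i=$((i + 1))
    done
    wait
  }

  # e.g. run_load 100 curl -s -o /dev/null "http://<LB_IP>/"
  ```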
- on each test VM, generate video-request load (the command customized for your LB IP is shown in Cloud Shell)
- show each vm having similar performance (though videos are in us)
- show the cdn monitoring page to see increase in cdn use
- it takes a while for the page to update
- show lms backend service has no traffic
- the videos are being served by bucket and CDN
- click on backend bucket in lb monitoring page to show requests there
- you may note that CDN only caches objects <10MB (the demo videos are under that limit)
- there's a beta for large object caching
- on each test VM, generate high rps load from each test machine (the command customized for your lb IP is shown in cloud shell)
- show instance groups changing size
- in LB monitoring page, click on backend service to show backend details
- watch the test machines to see if ab errors out
- The river chart will now be misleading, with all traffic shown as flowing through the /videos path, which is wrong. This is an error in the chart.
- show raw media bucket
- show transcoded media bucket
- create module with video
- the backend takes 30-60 seconds to kick off
- show new server spinning up
- show cpu utilization in instance group
- this takes a while to update
- The entire build is handled via Terraform
- Can be fun to start deployment, then show students some of how it works
- Run the following in Cloud Shell:

  ```
  # Workaround https://github.com/hashicorp/terraform-provider-google/issues/6782
  sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1 \
    net.ipv6.conf.default.disable_ipv6=1 \
    net.ipv6.conf.lo.disable_ipv6=1 > /dev/null
  export APIS="googleapis.com www.googleapis.com storage.googleapis.com iam.googleapis.com cloudresourcemanager.googleapis.com sqladmin.googleapis.com pubsub.googleapis.com compute.googleapis.com"
  for name in $APIS
  do
    ipv4=$(getent ahostsv4 "$name" | head -n 1 | awk '{ print $1 }')
    grep -q "$name" /etc/hosts || ([ -n "$ipv4" ] && sudo sh -c "echo '$ipv4 $name' >> /etc/hosts")
  done
  # Workaround end

  cd ~/ce-demo-lms/terraform
  terraform destroy -auto-approve && \
    cd ~ && \
    rm -rf ~/ce-demo-lms && \
    export PROJECT_ID=$(gcloud config get-value project)
  ```
- If you receive an error that looks similar to this:

  ```
  terraform dial tcp [2607:f8b0:400c:c15::80]:443: connect: cannot assign requested address
  ```

  just rerun the command; eventually it'll all work. This is a bug in TF and Cloud Shell.
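- Rather than rerunning by hand, the retry can be scripted. A sketch; the `retry` helper and its attempt/delay numbers are my choices, not part of the demo:

  ```shell
  # retry: rerun a command until it succeeds, up to a maximum number of attempts.
  # Usage: retry <max_tries> <delay_seconds> <command...>
  retry() {
    max="$1"; delay="$2"; shift 2
    n=0
    until "$@"; do
      n=$((n + 1))
      [ "$n" -ge "$max" ] && return 1
      echo "attempt $n failed; retrying in ${delay}s..." >&2
      sleep "$delay"
    done
  }

  # e.g. retry 10 15 terraform destroy -auto-approve
  ```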