Some proof-of-concepts that demonstrate how Cloudflare can work with GKE.
- Google Cloud account
- Google Cloud SDK with the `gcloud` and `kubectl` CLI tools
- Terraform with the `terraform` CLI tool
- Cloudflare account with Argo enabled
1.1 Initialize Terraform
terraform init
1.2 Plan Terraform and verify the plan
terraform plan
(Optional) Set the variables as environment variables if you don't want to enter them interactively every time you plan. For example:
# Read from the local user name
export TF_VAR_resource_prefix=$USER
# Read from gcloud default project id
export TF_VAR_gcp_project_id=$(gcloud config get-value project)
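Terraform reads any `TF_VAR_<name>` environment variable as the value of the input variable `<name>`. A sketch of the variable declarations these exports would map onto (the types and descriptions are assumptions, not taken from the actual `.tf` files):

```hcl
# Hypothetical declarations matching the TF_VAR_* names above.
variable "resource_prefix" {
  type        = string
  description = "Prefix for resource names (populated by TF_VAR_resource_prefix)"
}

variable "gcp_project_id" {
  type        = string
  description = "GCP project ID (populated by TF_VAR_gcp_project_id)"
}
```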
1.3 Apply the Terraform plan
terraform apply
It takes more than 10 minutes.
2.1 Connect to the cluster
Follow the instructions in GCP Console -> Kubernetes Engine -> Clusters -> Connect, or run
gcloud container clusters get-credentials <CLUSTER_NAME>
2.2 Run a kubectl command to make sure the local context is configured correctly.
kubectl config get-contexts
2.3 The foundation has been laid. The real fun starts from here...
cloudflared-sidecar.yaml
- Log in to Cloudflare Argo Tunnel
cloudflared tunnel login
- Load the cert to the K8s secret store
kubectl create secret generic cloudflared-cert --from-file="$HOME/.cloudflared/cert.pem"
- Apply the deployment
kubectl apply -f cloudflared-sidecar.yaml
- Check Cloudflare dashboard > Traffic > Argo Tunnel
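A minimal sketch of what `cloudflared-sidecar.yaml` might contain, assuming an example httpbin origin container and the `cloudflared-cert` secret created above. The names, hostname, image tags, and the classic-tunnel flags (`--hostname`, `--url`, `--origincert`) are assumptions; verify them against your zone and cloudflared version:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-tunnel              # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin-tunnel
  template:
    metadata:
      labels:
        app: httpbin-tunnel
    spec:
      containers:
      - name: httpbin               # example origin container
        image: kennethreitz/httpbin
        ports:
        - containerPort: 80
      - name: tunnel                # cloudflared sidecar reaching the origin over localhost
        image: cloudflare/cloudflared:latest
        args:
        - tunnel
        - --no-autoupdate
        - --hostname=tunnel.example.com       # placeholder hostname on your zone
        - --url=http://127.0.0.1:80
        - --origincert=/etc/cloudflared/cert.pem
        volumeMounts:
        - name: cloudflared-cert
          mountPath: /etc/cloudflared
          readOnly: true
      volumes:
      - name: cloudflared-cert
        secret:
          secretName: cloudflared-cert        # created by the kubectl create secret step above
```

The sidecar pattern keeps the tunnel's lifecycle tied to each pod: every replica gets its own cloudflared, so scaling the Deployment also scales the tunnel connections.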
My container is terminated. Solution: remove the resource limit.
gke-ingress.yaml
- Apply the deployment
kubectl apply -f gke-ingress.yaml
- Get the external IP address
kubectl get ingress
- Add it to Cloudflare DNS as an origin
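A minimal sketch of what `gke-ingress.yaml` might contain: a NodePort Service fronting the app plus an Ingress that provisions the GCP HTTP(S) load balancer. All names are placeholders, and the `networking.k8s.io/v1` API group is the current one; older clusters used `extensions/v1beta1`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: httpbin-service            # placeholder name
spec:
  type: NodePort                   # GKE Ingress requires a NodePort (or NEG-backed) Service
  selector:
    app: httpbin                   # must match the Deployment's pod labels
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin-ingress            # placeholder name
spec:
  defaultBackend:
    service:
      name: httpbin-service
      port:
        number: 80
```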
- Using Kubernetes on GKE and AWS with Cloudflare Load Balancer
- Setting up HTTP Load Balancing with Ingress
- HTTP(S) load balancing with Ingress
- Configuring load balancing through Ingress
- GKE: Exposing applications using services
- ClusterIP vs NodePort vs LoadBalancer etc.
- Making Load Balancer IP Static
Error during sync: error running load balancer syncing routine: loadbalancer default-cwang-httpbin-ingress--6029373544ea4799 does not exist: googleapi: Error 400: STANDARD network tier (the project's default network tier) is not supported: STANDARD network tier is not supported for global forwarding rule., badRequest
Solution: set the Network Service Tier to Premium. See: Using Network Service Tiers
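One way to apply that fix is to raise the project's default network tier. The sketch below only prints the assumed command rather than mutating the project; the `--default-network-tier` flag is taken from gcloud's Network Service Tiers docs, so verify it before running for real:

```shell
# Dry run: print the assumed remediation instead of executing it.
cmd="gcloud compute project-info update --default-network-tier=PREMIUM"
echo "Would run: $cmd"
```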
Deployment Mode 3: Cloudflare Argo Tunnel in "Trailer" mode without Cloudflare Load Balancer or GCP Forwarding Rule
cloudflared-trailer.yaml
- Log in to Cloudflare Argo Tunnel
cloudflared tunnel login
- Load the cert to the K8s secret store
kubectl create secret generic cloudflared-cert --from-file="$HOME/.cloudflared/cert.pem"
- Apply the deployment
kubectl apply -f cloudflared-trailer.yaml
- Check Cloudflare dashboard > Traffic > Argo Tunnel
The service is not working. Debug it from a throwaway pod inside the cluster:
kubectl run -it --rm --restart=Never alpine --image=alpine sh
If you don't see a command prompt, try pressing enter.
/ # wget -O- cwang-gke-int-lb-service
Connecting to cwang-gke-int-lb-service (10.112.9.183:80)
wget: can't connect to remote host (10.112.9.183): Connection refused
Maybe the Service is not associated with the correct Deployment? Inspect its endpoints:
kubectl describe endpoints cwang-gke-int-lb-service
Name: cwang-gke-int-lb-service
Namespace: default
Labels: app=cwang-gke-int-lb-app
Annotations: <none>
Subsets:
Events: <none>
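The empty Subsets above means the Service's selector matched no pods. The fix is to make `spec.selector` on the Service equal the pod template labels on the Deployment. A sketch of the corrected Service, using the names from the output above (the port numbers are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cwang-gke-int-lb-service
spec:
  selector:
    app: cwang-gke-int-lb-app    # must equal .spec.template.metadata.labels.app on the Deployment
  ports:
  - port: 80
    targetPort: 80
```

Note that the Labels on the Service itself are irrelevant here; only `spec.selector` determines which pods become endpoints.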
- Labels & Selectors
- GKE Network Overview
- Using environment variables inside of your config
- DNS for Services and Pods
- Labels, Selectors, and MatchingLabels
kubectl config current-context
kubectl create -f FILE.yaml
kubectl apply -f FILE.yaml
kubectl delete -f FILE.yaml
kubectl get namespace
kubectl config get-contexts
kubectl get po --output wide
kubectl describe pods
kubectl logs POD_NAME -c CONTAINER_NAME
kubectl top node
kubectl get ingress INGRESS_NAME --output yaml
kubectl exec POD_NAME -- printenv | grep SERVICE
kubectl exec -it POD_NAME -- /bin/bash
kubectl run -it --rm --restart=Never alpine --image=alpine sh
kubectl get endpoints
kubectl scale deploy tunnel --replicas=2
terraform show