kops toolbox template
@kris-nova, I wanted to clarify why I said kops toolbox was useful during the episode, since I couldn't express myself 200 characters at a time. I was specifically referring to the template function of the toolbox.
Creating clusters with the command line is fine to start with, but if you want to deploy the same cluster topology into different environments (for example: dev, stage, or production), the templating function is really handy.
For my clusters I created a cluster_template.yaml similar to the one below (it's a bastion topology, so nodes are not exposed to the internet):
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: {{ .myClusterName }}.{{ .dnsZone }}
spec:
  # sts:AssumeRole is needed for kube2iam
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["sts:AssumeRole"],
          "Resource": ["*"]
        }
      ]
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: {{ .kopsStateStore }}/{{ .myClusterName }}.{{ .dnsZone }}
  encryptionConfig: false # until we figure out how to make `kops create encryptionconfig -f config.yaml` work (config file format unknown)
  etcdClusters:
  - etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-{{ .awsRegion }}a
      name: a
    - encryptedVolume: true
      instanceGroup: master-{{ .awsRegion }}b
      name: b
    - encryptedVolume: true
      instanceGroup: master-{{ .awsRegion }}c
      name: c
    enableEtcdTLS: {{ .etcdTLS }}
    name: main
    version: {{ .etcdVersion }}
  - etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-{{ .awsRegion }}a
      name: a
    - encryptedVolume: true
      instanceGroup: master-{{ .awsRegion }}b
      name: b
    - encryptedVolume: true
      instanceGroup: master-{{ .awsRegion }}c
      name: c
    enableEtcdTLS: {{ .etcdTLS }}
    name: events
    version: {{ .etcdVersion }}
  iam:
    allowContainerRegistry: true
    legacy: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: {{ .kubernetesVersion }}
  masterInternalName: api.internal.{{ .myClusterName }}.{{ .dnsZone }}
  masterPublicName: api.{{ .myClusterName }}.{{ .dnsZone }}
  networkCIDR: {{ .myNetworkCIDR }}
  networking:
    calico: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: {{ .myNetworkPrefix }}.32.0/19
    name: {{ .awsRegion }}a
    type: Private
    zone: {{ .awsRegion }}a
  - cidr: {{ .myNetworkPrefix }}.64.0/19
    name: {{ .awsRegion }}b
    type: Private
    zone: {{ .awsRegion }}b
  - cidr: {{ .myNetworkPrefix }}.96.0/19
    name: {{ .awsRegion }}c
    type: Private
    zone: {{ .awsRegion }}c
  - cidr: {{ .myNetworkPrefix }}.0.0/22
    name: utility-{{ .awsRegion }}a
    type: Utility
    zone: {{ .awsRegion }}a
  - cidr: {{ .myNetworkPrefix }}.4.0/22
    name: utility-{{ .awsRegion }}b
    type: Utility
    zone: {{ .awsRegion }}b
  - cidr: {{ .myNetworkPrefix }}.8.0/22
    name: utility-{{ .awsRegion }}c
    type: Utility
    zone: {{ .awsRegion }}c
  topology:
    bastion:
      bastionPublicName: bastion.{{ .myClusterName }}.{{ .dnsZone }}
    dns:
      type: Public
    masters: private
    nodes: private
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: {{ .myClusterName }}.{{ .dnsZone }}
  name: bastions
spec:
  image: {{ .bastionAwsAmiId }}
  machineType: {{ .bastionSize }}
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: bastions
  role: Bastion
  subnets:
  - utility-{{ .awsRegion }}a
  - utility-{{ .awsRegion }}b
  - utility-{{ .awsRegion }}c
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: {{ .myClusterName }}.{{ .dnsZone }}
  name: master-{{ .awsRegion }}a
spec:
  image: {{ .baseAwsAmiId }}
  machineType: {{ .masterSize }}
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-{{ .awsRegion }}a
  role: Master
  subnets:
  - {{ .awsRegion }}a
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: {{ .myClusterName }}.{{ .dnsZone }}
  name: master-{{ .awsRegion }}b
spec:
  image: {{ .baseAwsAmiId }}
  machineType: {{ .masterSize }}
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-{{ .awsRegion }}b
  role: Master
  subnets:
  - {{ .awsRegion }}b
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: {{ .myClusterName }}.{{ .dnsZone }}
  name: master-{{ .awsRegion }}c
spec:
  image: {{ .baseAwsAmiId }}
  machineType: {{ .masterSize }}
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-{{ .awsRegion }}c
  role: Master
  subnets:
  - {{ .awsRegion }}c
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: {{ .myClusterName }}.{{ .dnsZone }}
  name: nodes
spec:
  image: {{ .baseAwsAmiId }}
  machineType: {{ .nodeSize }}
  maxSize: {{ .nodesMaxCount }}
  minSize: {{ .nodesMinCount }}
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  subnets:
  - {{ .awsRegion }}a
  - {{ .awsRegion }}b
  - {{ .awsRegion }}c
(NOTE: we have to use custom AMIs because our base image is CentOS 7, and the bastion additionally has 2FA; this might not be necessary for everyone.)
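Since kops toolbox template renders the file with Go's text/template syntax (that's what the `{{ .var }}` placeholders are), you can go beyond plain substitution and use conditionals too. A small sketch, assuming you add hypothetical `environment` and `officeCIDR` keys to each values file:

```yaml
# Hypothetical fragment: open SSH only in dev, restrict it elsewhere.
# `environment` and `officeCIDR` are assumed additions to <env>_vars.yaml.
sshAccess:
{{- if eq .environment "dev" }}
- 0.0.0.0/0
{{- else }}
- {{ .officeCIDR }}
{{- end }}
```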
Then you need values files to fill in the template. For example, dev_vars.yaml:
baseAwsAmiId: ami-XXXXX
bastionAwsAmiId: ami-YYYYY
awsRegion: eu-west-1
bastionSize: t2.micro
myClusterName: k8sdev
dnsZone: mydomain.tld
etcdTLS: false
etcdVersion: 3.1.11
kopsStateStore: s3://mycompany-kops-state-store
kubernetesVersion: 1.9.6
masterSize: t2.medium
myNetworkCIDR: 172.21.0.0/16
myNetworkPrefix: 172.21
nodeSize: t2.medium
nodesMinCount: 3
nodesMaxCount: 5
stage_vars.yaml:
baseAwsAmiId: ami-ZZZZZ
bastionAwsAmiId: ami-WWWWW
awsRegion: us-east-2
bastionSize: t2.micro
myClusterName: k8sstage
dnsZone: mydomain.tld
etcdTLS: true
etcdVersion: 3.1.11
kopsStateStore: s3://mycompany-kops-state-store
kubernetesVersion: 1.9.8
masterSize: m4.large
myNetworkCIDR: 172.22.0.0/16
myNetworkPrefix: 172.22
nodeSize: m4.large
nodesMinCount: 2
nodesMaxCount: 5
Then to generate the final cluster spec file:
kops toolbox template --values <environment>_vars.yaml --template cluster_template.yaml --output new_cluster.yaml
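Before applying, it can be worth a quick sanity check that every placeholder was actually substituted. A minimal sketch (the `check_rendered_spec` helper is my own, not part of kops):

```shell
# Hypothetical helper: verify a rendered cluster spec before applying it.
check_rendered_spec() {
  local spec="$1" expected_name="$2"
  # No unrendered Go-template placeholders should remain:
  if grep -q '{{' "$spec"; then
    echo "unrendered placeholders left in $spec" >&2
    return 1
  fi
  # The cluster name should be the fully substituted myClusterName.dnsZone:
  grep -q "name: ${expected_name}" "$spec"
}
```

Usage: `check_rendered_spec new_cluster.yaml k8sdev.mydomain.tld`.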
And apply the changes (with $NAME set to the full cluster name, e.g. k8sdev.mydomain.tld):
kops replace -f new_cluster.yaml
kops update cluster $NAME # review and re-run the command with --yes
kops rolling-update cluster $NAME # review and re-run with --yes. This will replace nodes one at a time
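If you do this regularly, the render-and-apply sequence can be wrapped in a small script. A sketch, assuming `<env>_vars.yaml` and cluster_template.yaml sit in the current directory (`cluster_name` and `deploy_cluster` are hypothetical helpers, not kops commands):

```shell
# Derive the cluster name the template will render (myClusterName "." dnsZone)
# straight from the values file, so it never drifts from the spec.
cluster_name() {
  awk '/^myClusterName:/ {n=$2} /^dnsZone:/ {d=$2} END {print n "." d}' "$1"
}

# Render the template for one environment, then push the result to kops.
deploy_cluster() {
  local env="$1"
  local vars="${env}_vars.yaml"
  local name
  name="$(cluster_name "$vars")"

  kops toolbox template \
    --values "$vars" \
    --template cluster_template.yaml \
    --output "${env}_cluster.yaml"

  kops replace -f "${env}_cluster.yaml"
  kops update cluster "$name"          # review, then re-run with --yes
  kops rolling-update cluster "$name"  # review, then re-run with --yes
}
```

Usage: `deploy_cluster dev` or `deploy_cluster stage`.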