Versions used in this nested environment:
Base System ESXi 6.7u3
Base vCenter 7.0.3
Nested ESXi 7.0.3d
Nested vCenter 8.0.0 Build 20183400 (a pre-release alpha/beta build; sorry, currently internal-only on the build web)
AVI 22.1.1-9052 (https://portal.avipulse.vmware.com/software/vantage)
The nested test environment is built via William Lam's (https://williamlam.com/nested-virtualization) PowerShell script
Deploys one vCenter
Deploys 9 ESXi servers into 3 vCenter clusters with vSAN and networks
Deploy AVI by hand
AVI set up
Done by hand
WCP enablement
Done by hand
Workload cluster
Done by hand
DNS = "10.197.96.7"
NTP = "10.128.152.81"
AVI Management Network "192.168.1.1/24" "192.168.1.60-192.168.1.70"
AVI Frontend Network "192.168.4.1/24" "192.168.4.70-192.168.4.100"
AVI Route workload to frontend network 192.168.5.0/24 -> 192.168.4.1
WCP Management Network 192.168.1.80 <--- Start
WCP Workload Network "192.168.5.1/24" "192.168.5.120-192.168.5.140"
vCenter = 192.168.1.50
AVI = 192.168.1.40
ESXi = See William Lams script
The script was modified for 3 vCenter clusters and 9 ESXi hosts:
https://github.com/ogelbric/vSphere8_Avi_AZ/blob/main/PSAutomationScript
Script kick off:
Result:
Make sure there is a (new, empty) content library for the AVI SEs; AVI will push the SE image to the library during the cloud setup
Route from 192.168.5.0/24 (workload network) to the frontend network -> 192.168.4.1
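This route lives on the lab's Linux router (centosrouter). A minimal sketch of the iproute2 command it needs; printed rather than executed here so it can be reviewed before running as root on the router (making it persistent is distro-specific and left out):

```shell
# Sketch: build the iproute2 command that sends workload-network traffic
# (192.168.5.0/24) via the AVI frontend gateway (192.168.4.1).
WORKLOAD_NET="192.168.5.0/24"
FRONTEND_GW="192.168.4.1"
echo "ip route add ${WORKLOAD_NET} via ${FRONTEND_GW}"
```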
Set up AVI - Default Cloud is green (FYI, the AKO pod in TKGs/u is looking for the Default-Cloud in AVI!)
The 3 Supervisor VMs have 4-4-5 IPs - that is correct
Logging onto vCenter to jump to the Supervisor to check on AKO
ssh [email protected] # my vcenter
shell
/usr/lib/vmware-wcp/decryptK8Pwd.py
ssh [email protected] # and use password from above command
k get pods -A # looks like AKO is in some crash loop
k logs -n vmware-system-ako vmware-system-ako-ako-controller-manager-58fbd65b89-2hzpk # and the log tells me my IPAM has issues on my cloud...
2022-08-11T17:41:36.360Z INFO cache/controller_obj_cache.go:2633 Skipping the check for SE group labels
2022-08-11T17:41:36.360Z ERROR k8s/ako_init.go:274 Error while validating input: Cloud does not have a ipam_provider_ref configured
2022-08-11T17:41:36.360Z ERROR ako-main/main.go:228 Handle configmap error during reboot, shutting down AKO.
Looking at the AVI Default Cloud: sure enough, it "forgot" my frontend network IPAM
k get pods -A |grep -i ako # get the pods again
k delete -n vmware-system-ako pod vmware-system-ako-ako-controller-manager-58fbd65b89-2hzpk # delete the POD
k get pods -A |grep -i ako # get the new pod name; it is running now
k logs -n vmware-system-ako vmware-system-ako-ako-controller-manager-58fbd65b89-nc8px -f # follow the log; looks like it is working now
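An alternative to deleting the pod by name is a rollout restart. This is a sketch under one assumption: AKO runs as a Deployment whose name is the pod name minus its replica-set and pod hashes (the `58fbd65b89` hash in the pod name suggests this, but I haven't confirmed the Deployment name). Printed for review before running on the Supervisor:

```shell
# Assumed Deployment name, inferred from the AKO pod name above.
NS="vmware-system-ako"
DEPLOY="vmware-system-ako-ako-controller-manager"
echo "kubectl -n ${NS} rollout restart deployment ${DEPLOY}"
```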
/usr/local/bin/kubectl-vsphere login --vsphere-username [email protected] --server=https://192.168.4.70 --insecure-skip-tls-verify
kubectl config use-context namespace1000
Create workload cluster across 3 zones (regular old TKC cluster (CAPW)) (note: the content library does not have the special image)
https://github.com/ogelbric/vSphere8_Avi_AZ_without_Arcas/blob/main/tkrcluster.yaml
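For orientation, a zoned TKC manifest has roughly the shape below. This is a hedged sketch, not the linked file: the v1alpha3 API is assumed, and the vmClass, storageClass, and zone names are placeholders - check the tkrcluster.yaml link for the actual values used in this lab.

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tkr-zoned-cluster01
  namespace: namespace1000
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: best-effort-small        # placeholder
      storageClass: tkg-storage-policy  # placeholder
      tkr:
        reference:
          name: v1.22.9---vmware.1-tkg.1.cc71bc8
    nodePools:
      - name: workerpool-1
        replicas: 1
        failureDomain: zone-1           # one node pool per vSphere zone
        vmClass: best-effort-small      # placeholder
        storageClass: tkg-storage-policy
```

The per-node-pool `failureDomain` is what spreads the workers across the 3 zones; the three worker pools in the node list further down follow that pattern.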
k apply -f ./tkrcluster.yaml
[root@centosrouter 8u0]# k get tkc
NAME CONTROL PLANE WORKER TKR NAME AGE READY TKR COMPATIBLE UPDATES AVAILABLE
tkr-zoned-cluster01 3 3 v1.22.9---vmware.1-tkg.1.cc71bc8 5m9s False True
[root@centosrouter 8u0]# k get tkr
NAME VERSION READY COMPATIBLE CREATED
v1.16.12---vmware.1-tkg.1.da7afe7 v1.16.12+vmware.1-tkg.1.da7afe7 False False 92m
v1.16.14---vmware.1-tkg.1.ada4837 v1.16.14+vmware.1-tkg.1.ada4837 False False 96m
v1.16.8---vmware.1-tkg.3.60d2ffd v1.16.8+vmware.1-tkg.3.60d2ffd False False 95m
v1.17.11---vmware.1-tkg.1.15f1e18 v1.17.11+vmware.1-tkg.1.15f1e18 False False 98m
v1.17.11---vmware.1-tkg.2.ad3d374 v1.17.11+vmware.1-tkg.2.ad3d374 False False 95m
v1.17.13---vmware.1-tkg.2.2c133ed v1.17.13+vmware.1-tkg.2.2c133ed False False 98m
v1.17.17---vmware.1-tkg.1.d44d45a v1.17.17+vmware.1-tkg.1.d44d45a False False 98m
v1.17.7---vmware.1-tkg.1.154236c v1.17.7+vmware.1-tkg.1.154236c False False 94m
v1.17.8---vmware.1-tkg.1.5417466 v1.17.8+vmware.1-tkg.1.5417466 False False 92m
v1.18.10---vmware.1-tkg.1.3a6cd48 v1.18.10+vmware.1-tkg.1.3a6cd48 False False 96m
v1.18.15---vmware.1-tkg.1.600e412 v1.18.15+vmware.1-tkg.1.600e412 False False 97m
v1.18.15---vmware.1-tkg.2.ebf6117 v1.18.15+vmware.1-tkg.2.ebf6117 False False 98m
v1.18.19---vmware.1-tkg.1.17af790 v1.18.19+vmware.1-tkg.1.17af790 False False 91m
v1.18.5---vmware.1-tkg.1.c40d30d v1.18.5+vmware.1-tkg.1.c40d30d False False 96m
v1.19.11---vmware.1-tkg.1.9d9b236 v1.19.11+vmware.1-tkg.1.9d9b236 False False 98m
v1.19.14---vmware.1-tkg.1.8753786 v1.19.14+vmware.1-tkg.1.8753786 False False 94m
v1.19.16---vmware.1-tkg.1.df910e2 v1.19.16+vmware.1-tkg.1.df910e2 False False 98m
v1.19.7---vmware.1-tkg.1.fc82c41 v1.19.7+vmware.1-tkg.1.fc82c41 False False 94m
v1.19.7---vmware.1-tkg.2.f52f85a v1.19.7+vmware.1-tkg.2.f52f85a False False 95m
v1.20.12---vmware.1-tkg.1.b9a42f3 v1.20.12+vmware.1-tkg.1.b9a42f3 True True 94m
v1.20.2---vmware.1-tkg.1.1d4f79a v1.20.2+vmware.1-tkg.1.1d4f79a True True 92m
v1.20.2---vmware.1-tkg.2.3e10706 v1.20.2+vmware.1-tkg.2.3e10706 True True 97m
v1.20.7---vmware.1-tkg.1.7fb9067 v1.20.7+vmware.1-tkg.1.7fb9067 True True 95m
v1.20.8---vmware.1-tkg.2 v1.20.8+vmware.1-tkg.2 True True 96m
v1.20.9---vmware.1-tkg.1.a4cee5b v1.20.9+vmware.1-tkg.1.a4cee5b True True 94m
v1.21.2---vmware.1-tkg.1.ee25d55 v1.21.2+vmware.1-tkg.1.ee25d55 True True 98m
v1.21.6---vmware.1-tkg.1 v1.21.6+vmware.1-tkg.1 True True 98m
v1.21.6---vmware.1-tkg.1.b3d708a v1.21.6+vmware.1-tkg.1.b3d708a True True 97m
v1.22.9---vmware.1-tkg.1.cc71bc8 v1.22.9+vmware.1-tkg.1.cc71bc8 True True 94m
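Only the TKRs showing READY and COMPATIBLE are usable for the cluster. A quick filter over the `k get tkr` output pulls those out; a two-row sample from the table above stands in here for the live command output:

```shell
# Filter `k get tkr` output down to TKRs that are both READY and COMPATIBLE.
sample='NAME                               VERSION                          READY   COMPATIBLE   CREATED
v1.19.7---vmware.1-tkg.1.fc82c41   v1.19.7+vmware.1-tkg.1.fc82c41   False   False        94m
v1.22.9---vmware.1-tkg.1.cc71bc8   v1.22.9+vmware.1-tkg.1.cc71bc8   True    True         94m'
echo "$sample" | awk 'NR > 1 && $3 == "True" && $4 == "True" { print $1 }'
# -> v1.22.9---vmware.1-tkg.1.cc71bc8
```

With a live cluster, pipe `k get tkr` straight into the same awk filter.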
Log onto workload cluster
kubectl vsphere login --server 192.168.4.70 --vsphere-username [email protected] --managed-cluster-namespace namespace1000 --managed-cluster-name tkr-zoned-cluster01 --insecure-skip-tls-verify
kubectl config use-context tkr-zoned-cluster01
k get nodes
NAME STATUS ROLES AGE VERSION
tkr-zoned-cluster01-workerpool-1-ddn8z-d58f8694f-5bjct Ready <none> 173m v1.22.9+vmware.1
tkr-zoned-cluster01-workerpool-2-4r728-68dd78b9-bszlt Ready <none> 167m v1.22.9+vmware.1
tkr-zoned-cluster01-workerpool-3-89dcs-7576c89bb5-5xfhd Ready <none> 167m v1.22.9+vmware.1
tkr-zoned-cluster01-xfvvq-gwgnj Ready control-plane,master 163m v1.22.9+vmware.1
tkr-zoned-cluster01-xfvvq-w78mh Ready control-plane,master 148m v1.22.9+vmware.1
tkr-zoned-cluster01-xfvvq-wscsf Ready control-plane,master 3h4m v1.22.9+vmware.1
https://github.com/ogelbric/vSphere8_Avi_AZ_without_Arcas/blob/main/google-nginx-lbsvcGA.yaml
k get pods
NAME READY STATUS RESTARTS AGE
nginx-55f8c45555-bf4gs 1/1 Running 0 19m
nginx-55f8c45555-pdxk4 1/1 Running 0 19m
nginx-55f8c45555-xpcjf 1/1 Running 0 19m
[root@centosrouter 8u0]# k get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-55f8c45555-bf4gs 1/1 Running 0 19m 192.168.130.2 tkr-zoned-cluster01-workerpool-2-4r728-68dd78b9-bszlt <none> <none>
nginx-55f8c45555-pdxk4 1/1 Running 0 19m 192.168.131.2 tkr-zoned-cluster01-workerpool-3-89dcs-7576c89bb5-5xfhd <none> <none>
nginx-55f8c45555-xpcjf 1/1 Running 0 19m 192.168.129.2 tkr-zoned-cluster01-workerpool-1-ddn8z-d58f8694f-5bjct <none> <none>
k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 192.168.192.1 <none> 443/TCP 3d20h
nginx LoadBalancer 192.168.224.238 192.168.4.75 80:31205/TCP 28m
supervisor ClusterIP None <none> 6443/TCP 3d20h
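For scripting the curl check below, the EXTERNAL-IP that AVI handed out can be extracted from the service listing. Sample rows stand in for the live `k get svc` output; against a live cluster, `kubectl get svc nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` does the same job more directly:

```shell
# Grab the LoadBalancer EXTERNAL-IP for the nginx service from `k get svc` output.
svc='NAME    TYPE           CLUSTER-IP        EXTERNAL-IP    PORT(S)        AGE
nginx   LoadBalancer   192.168.224.238   192.168.4.75   80:31205/TCP   28m'
LB_IP=$(echo "$svc" | awk '$1 == "nginx" { print $4 }')
echo "$LB_IP"
# -> 192.168.4.75
```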
curl -v 192.168.4.75
* Rebuilt URL to: 192.168.4.75/
* Trying 192.168.4.75...
* TCP_NODELAY set
* Connected to 192.168.4.75 (192.168.4.75) port 80 (#0)
> GET / HTTP/1.1
> Host: 192.168.4.75
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.19.4
Check on ingress
Create workload cluster across 3 zones (new classy cluster (CAPV)) (note: the content library has the special image)
!!! Note: the last line has the special image
!!! Also be aware of the content library name vs. the outcome in NAME/VERSION
!!! The name was (yes, difficult) derived as follows:
!!! k get tkr -o yaml | grep " name: photon"
!!! photon-3-amd64-vmi-k8s-v1.23.8---vmware.2-tkg.1-zshippable
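The content-library image name above embeds the TKR NAME after an OS/arch prefix. A small transform recovers the TKR NAME from the image name; the `photon-3-amd64-vmi-k8s-` prefix pattern is assumed from this single example, so verify it against your own library:

```shell
# Strip the assumed OS/arch prefix from the content-library image name
# to recover the TKR NAME used in cluster manifests.
img='photon-3-amd64-vmi-k8s-v1.23.8---vmware.2-tkg.1-zshippable'
tkr_name=${img#photon-3-amd64-vmi-k8s-}
echo "$tkr_name"
# -> v1.23.8---vmware.2-tkg.1-zshippable
```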
k get tkr
NAME VERSION READY COMPATIBLE CREATED
v1.16.12---vmware.1-tkg.1.da7afe7 v1.16.12+vmware.1-tkg.1.da7afe7 False False 16h
v1.16.14---vmware.1-tkg.1.ada4837 v1.16.14+vmware.1-tkg.1.ada4837 False False 16h
v1.16.8---vmware.1-tkg.3.60d2ffd v1.16.8+vmware.1-tkg.3.60d2ffd False False 16h
v1.17.11---vmware.1-tkg.1.15f1e18 v1.17.11+vmware.1-tkg.1.15f1e18 False False 16h
v1.17.11---vmware.1-tkg.2.ad3d374 v1.17.11+vmware.1-tkg.2.ad3d374 False False 16h
v1.17.13---vmware.1-tkg.2.2c133ed v1.17.13+vmware.1-tkg.2.2c133ed False False 16h
v1.17.17---vmware.1-tkg.1.d44d45a v1.17.17+vmware.1-tkg.1.d44d45a False False 16h
v1.17.7---vmware.1-tkg.1.154236c v1.17.7+vmware.1-tkg.1.154236c False False 16h
v1.17.8---vmware.1-tkg.1.5417466 v1.17.8+vmware.1-tkg.1.5417466 False False 16h
v1.18.10---vmware.1-tkg.1.3a6cd48 v1.18.10+vmware.1-tkg.1.3a6cd48 False False 16h
v1.18.15---vmware.1-tkg.1.600e412 v1.18.15+vmware.1-tkg.1.600e412 False False 16h
v1.18.15---vmware.1-tkg.2.ebf6117 v1.18.15+vmware.1-tkg.2.ebf6117 False False 16h
v1.18.19---vmware.1-tkg.1.17af790 v1.18.19+vmware.1-tkg.1.17af790 False False 16h
v1.18.5---vmware.1-tkg.1.c40d30d v1.18.5+vmware.1-tkg.1.c40d30d False False 16h
v1.19.11---vmware.1-tkg.1.9d9b236 v1.19.11+vmware.1-tkg.1.9d9b236 False False 16h
v1.19.14---vmware.1-tkg.1.8753786 v1.19.14+vmware.1-tkg.1.8753786 False False 16h
v1.19.16---vmware.1-tkg.1.df910e2 v1.19.16+vmware.1-tkg.1.df910e2 False False 16h
v1.19.7---vmware.1-tkg.1.fc82c41 v1.19.7+vmware.1-tkg.1.fc82c41 False False 16h
v1.19.7---vmware.1-tkg.2.f52f85a v1.19.7+vmware.1-tkg.2.f52f85a False False 16h
v1.20.12---vmware.1-tkg.1.b9a42f3 v1.20.12+vmware.1-tkg.1.b9a42f3 True True 16h
v1.20.2---vmware.1-tkg.1.1d4f79a v1.20.2+vmware.1-tkg.1.1d4f79a True True 16h
v1.20.2---vmware.1-tkg.2.3e10706 v1.20.2+vmware.1-tkg.2.3e10706 True True 16h
v1.20.7---vmware.1-tkg.1.7fb9067 v1.20.7+vmware.1-tkg.1.7fb9067 True True 16h
v1.20.8---vmware.1-tkg.2 v1.20.8+vmware.1-tkg.2 True True 16h
v1.20.9---vmware.1-tkg.1.a4cee5b v1.20.9+vmware.1-tkg.1.a4cee5b True True 16h
v1.21.2---vmware.1-tkg.1.ee25d55 v1.21.2+vmware.1-tkg.1.ee25d55 True True 16h
v1.21.6---vmware.1-tkg.1 v1.21.6+vmware.1-tkg.1 True True 16h
v1.21.6---vmware.1-tkg.1.b3d708a v1.21.6+vmware.1-tkg.1.b3d708a True True 16h
v1.22.9---vmware.1-tkg.1.cc71bc8 v1.22.9+vmware.1-tkg.1.cc71bc8 True True 16h
v1.23.8---vmware.2-tkg.1-zshippable v1.23.8+vmware.2-tkg.1-zshippable True True 16h
Example of a classy cluster yaml file:
https://github.com/ogelbric/vSphere8_Avi_AZ_without_Arcas/blob/main/classycluster.yaml
[root@centosrouter 8u0]# k apply -f ./classycluster.yaml
cluster.cluster.x-k8s.io/classy2-zoned-photon created
[root@centosrouter 8u0]# k get clusters
NAME PHASE AGE VERSION
classy2-zoned-photon 5s v1.23.8+vmware.2
[root@centosrouter 8u0]#
Troubleshooting
kubectl get tkc,virtualmachine,cluster,virtualmachineservice,vspherecluster,vspheremachine,machine,kcp,kubeadmconfig
k get tkr
....