tigera / operator
Kubernetes operator for installing Calico and Calico Enterprise
License: Apache License 2.0
I want to provision a reasonable number of small test EKS clusters that don't necessarily require a large number of nodes but do need to be fully functional, i.e. calico, cluster-autoscaler, etc. The typha deployment seems to be set to 3 replicas which means each cluster has a minimum of 3 nodes regardless of how utilised they are.
Because the typha deployment is set to 3 replicas, as soon as it's deployed the autoscaler kicks in and increases the node count to 3 to satisfy the deployment, and it never scales down again. I can edit the deployment once deployed, but the change is overwritten again within a few minutes. Do I need 3 replicas for consensus reasons, or can I get away with fewer?
I've made use of the registry and imagePullSecrets fields on the Installation resource as my EKS clusters are entirely private (thanks for those!). Would adding another field here be a potential solution?
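For reference, a minimal sketch of an Installation using those two existing fields (the registry URL and secret name are placeholders):

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Pull all Calico images from a private registry (placeholder URL)
  registry: my-private-registry.example.com/
  # Pull secrets for those images (placeholder secret name)
  imagePullSecrets:
    - name: my-registry-creds
```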
On larger OpenShift clusters, restarts of typha cause really large memory spikes that result in the API server getting out-of-memory events, which affects downstream components. In my cluster I currently have
13274 pods
225 nodes
3314 services
kubectl get pods --all-namespaces -o wide --no-headers | wc -l
13274
kubectl get nodes --no-headers | wc -l
225
kubectl get services --all-namespaces --no-headers | wc -l
3314
Note we have a total of 3 typha pods in the cluster
kubectl get pods -n calico-system -l k8s-app=calico-typha
NAME READY STATUS RESTARTS AGE
calico-typha-b66bc54df-44zk7 1/1 Running 0 28m
calico-typha-b66bc54df-shh4f 1/1 Running 0 28m
calico-typha-b66bc54df-vl7jp 1/1 Running 0 28m
Posting the associated graphs that show the behavior when restarting typha (I can replicate by simply doing kubectl delete pods -n calico-system -l k8s-app=calico-typha) and waiting some time.
At the time of the restart, the kube-apiserver memory doubles after the restart of the pods (goes up by 20+ GB).
The maximum size of the backing database is 4 GB. It seems odd that starting typha would result in over 20 GB of additional utilization when it's meant to be a single "pool" of connections to the datastore for all the calico-node pods.
Memory usage of the OpenShift apiserver should not double on a restart of typha. Typha should be able to download the data it needs in a controlled fashion that doesn't drastically change the APIServer's memory footprint.
At startup of the typha pods, memory usage of the kube-apiserver hosting the cluster drastically increases (doubles).
Potentially try to rate-limit the data download for larger clusters? Maybe look at watch optimizations?
This issue affects the availability of the kube-apiserver. It can lead to slower processing of other workloads, drastically slower kubectl request processing (to the point where requests time out), and failure of other control plane components: scheduling of pods, failover of pods, etc. This is because calico-typha is consuming all the resources of the kube-apiserver.
Cluster: Openshift 4.7.19 (4.7.19_1526_openshift IBM Cloud Openshift cluster)
Details about the scale of the cluster are listed at the top of the issue
Image v1.20.4 (the default in the quickstart manifests) seems to be missing from quay.io:
podman pull quay.io/tigera/operator:v1.20.4
Trying to pull quay.io/tigera/operator:v1.20.4...
Error: Error initializing source docker://quay.io/tigera/operator:v1.20.4: Error reading manifest v1.20.4 in quay.io/tigera/operator: manifest unknown: manifest unknown
I'm currently moving over from the legacy EKS Helm charts Calico implementation in favour of using the operator, and I'm interested in why the operator needs to be on the host network as well as have access to /var/lib/calico on the host.
See for example the tigera-operator chart over at the CNCF ArtifactHub.
In the Calico Community meeting on Dec 8 we discussed having the operator set FelixConfiguration fields instead of using environment variables. One of the issues is knowing whether the operator set a field or the user explicitly set it.
There is the managedFields metadata on resources, which we should be able to use to know whether a field is being managed by the operator or by a user (or something other than the operator). Adding the machinery to manage FelixConfiguration fields, and to know whether they have been written/overwritten by the user, would allow us to use FelixConfiguration instead of setting environment variables.
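For illustration, server-side apply records per-manager field ownership in managedFields; a sketch of what that metadata might look like on a FelixConfiguration (manager names and field paths are illustrative):

```yaml
metadata:
  name: default
  managedFields:
    - manager: operator             # illustrative: fields last written by the operator
      operation: Update
      apiVersion: crd.projectcalico.org/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:logSeverityScreen: {}
    - manager: kubectl              # illustrative: a field the user set explicitly
      operation: Update
      apiVersion: crd.projectcalico.org/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:prometheusMetricsEnabled: {}
```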
calico-kube-controllers is missing RBAC in the default setup to allow the controller to access IPAMBlocks, and it keeps returning these logs over and over:
2021-05-07 13:37:38.999 [INFO][1] watchercache.go 243: Failed to create watcher ListRoot="/calico/ipam/v2/assignment/" error=connection is unauthorized: unknown (get IPAMBlocks.crd.projectcalico.org) performFullResync=true
2021-05-07 13:37:38.999 [INFO][1] watchercache.go 174: Full resync is required ListRoot="/calico/ipam/v2/assignment/"
Manually adding a new ClusterRole and ClusterRoleBinding for the service account that runs the controller works around this. Example patch:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: calico-kube-controllers-patch
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
      - endpoints
      - services
    verbs:
      - watch
      - list
      - get
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - crd.projectcalico.org
    resources:
      - ippools
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - crd.projectcalico.org
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
      - networksets
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - delete
  - apiGroups:
      - crd.projectcalico.org
    resources:
      - clusterinformations
    verbs:
      - get
      - create
      - update
  - apiGroups:
      - crd.projectcalico.org
    resources:
      - hostendpoints
    verbs:
      - get
      - list
      - create
      - update
      - delete
  - apiGroups:
      - crd.projectcalico.org
    resources:
      - kubecontrollersconfigurations
    verbs:
      - get
      - create
      - update
      - watch
  - apiGroups:
      - policy
    resourceNames:
      - calico-kube-controllers
    resources:
      - podsecuritypolicies
    verbs:
      - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-kube-controllers-patch
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers-patch
subjects:
  - kind: ServiceAccount
    name: calico-kube-controllers
    namespace: calico-system
Installed using the AWS CNI YAML https://github.com/aws/amazon-vpc-cni-k8s/blob/master/config/v1.7/calico.yaml on a 1.18 k8s cluster.
AWS EKS 1.18
It would be cool to be able to deploy a specific Calico version, or just update the minor Calico version.
I'd expect something like "put a var here, and I get Calico of the given version deployed".
There's nothing like that today.
Add an environment variable for that?
Openshift 4.x
We're expecting the calico-node pod not to restart, but the pod restarts after the liveness probe fails.
We would like to open a PR adding an initialDelaySeconds attribute to the liveness probe. What do you think?
This issue affects us as it is polluting our alert feed.
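A sketch of the proposed change on the calico-node container (the exact probe shape varies across Calico versions, and the 60-second delay is an illustrative value, not from this issue):

```yaml
livenessProbe:
  exec:
    command:
      - /bin/calico-node
      - -felix-live
  initialDelaySeconds: 60   # proposed new attribute; value illustrative
  periodSeconds: 10
```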
EKS
This is an improvement: add a nodeSelector (or node affinity) to the calico-node DaemonSet so it does "not" schedule on Fargate nodes. Similar to https://github.com/aws/amazon-vpc-cni-k8s/blob/master/config/master/aws-k8s-cni-cn.yaml#L100
Please see this - aws/amazon-vpc-cni-k8s#1429
aws/amazon-vpc-cni-k8s#1429 (comment)
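Following the pattern in the linked aws-node manifest, the anti-Fargate constraint could be sketched as a node affinity on the calico-node DaemonSet spec:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            # Fargate nodes carry this label; keep calico-node off them
            - key: eks.amazonaws.com/compute-type
              operator: NotIn
              values:
                - fargate
```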
After setting up the operator and configuring Felix to supply Prometheus metrics via the FelixConfiguration CR (which is all working fine), I went on a hunt for enabling the same for Typha. This can be enabled if you go the non-operator route and just use manifests to deploy Calico, but I'm trying to stick with operators where possible. Currently this looks non-configurable via the operator, as the environment variables supplied to Typha are all hard-coded (unless I've completely missed something), as per the link below.
Line 418 in b8c3549
As Typha uses host networking, this obviously raises a possible issue on the security / port-already-in-use front. While I could just do a custom build of the operator with these hard-coded to enabled and set to the appropriate port for my environment, I wanted to see if there were any plans to add a CRD for configuring Typha in the same manner as Felix?
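For comparison, enabling the equivalent for Felix via the CR (which this issue reports working) looks roughly like this; the port is an example value:

```yaml
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  prometheusMetricsEnabled: true
  prometheusMetricsPort: 9091   # example port
```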
We deployed an AKS cluster using Calico & kubenet.
We used the same pipeline to deploy many AKS clusters.
We need the tigera-operator to be in a running state and not crashing.
The tigera-operator pod is in CrashLoopBackOff with the logs below:
2021/07/05 10:53:40 [INFO] Version: v1.17.1
2021/07/05 10:53:40 [INFO] Go Version: go1.15.2
2021/07/05 10:53:40 [INFO] Go OS/Arch: linux/amd64
{"level":"error","ts":1625482450.1487107,"logger":"controller-runtime.manager","msg":"Failed to get API Group-Resources","error":"Get "https://aks-mirai-acc-dns-d9c58f46.hcp.westeurope.azmk8s.io:443/api?timeout=32s\": dial tcp: i/o timeout","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/manager.New\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/manager.go:317\nmain.main\n\t/go/src/github.com/tigera/operator/main.go:157\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:204"}
{"level":"error","ts":1625482450.1493962,"logger":"setup","msg":"unable to start manager","error":"Get "https://aks-mirai-acc-dns-d9c58f46.hcp.westeurope.azmk8s.io:443/api?timeout=32s\": dial tcp: i/o timeout","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nmain.main\n\t/go/src/github.com/tigera/operator/main.go:175\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:204"}
* We tested traffic from the node hosting the tigera-operator toward the API server FQDN above, and it's working fine!
* If it failed to connect to the API server we would expect all nodes to be NotReady, but all are in Ready state.
* We have a firewall, but it allows traffic to the API server FQDN, and all Calico pods are running except tigera-operator.
Below are the describe output and logs for tigera-operator:
C:\WINDOWS\system32>kubectl describe pod tigera-operator-64bd78b58-99lmc -n tigera-operator
Name: tigera-operator-64bd78b58-99lmc
Namespace: tigera-operator
Priority: 0
Node: aks-systempool-14727861-vmss000000/10.248.56.4
Start Time: Fri, 02 Jul 2021 18:03:40 +0200
Labels: k8s-app=tigera-operator
name=tigera-operator
pod-template-hash=64bd78b58
Annotations:
Status: Running
IP: 10.248.56.4
IPs:
IP: 10.248.56.4
Controlled By: ReplicaSet/tigera-operator-64bd78b58
Containers:
tigera-operator:
Container ID: containerd://15091f8c3c039accf9d559695ff97bbdf28a7ce95ce9823e30e79b487b9dfa7a
Image: mcr.microsoft.com/oss/tigera/operator:v1.17.1
Image ID: sha256:bcba4d5a252ae36cbf5909e31e3fed19ec6efb1ef62afa58b74f2687fea87b5b
Port: <none>
Host Port: <none>
Command:
operator
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 05 Jul 2021 12:53:40 +0200
Finished: Mon, 05 Jul 2021 12:54:10 +0200
Ready: False
Restart Count: 718
Environment Variables from:
kubernetes-services-endpoint ConfigMap Optional: true
Environment:
WATCH_NAMESPACE:
POD_NAME: tigera-operator-64bd78b58-99lmc (v1:metadata.name)
OPERATOR_NAME: tigera-operator
TIGERA_OPERATOR_INIT_IMAGE_VERSION: v1.17.1
KUBERNETES_PORT_443_TCP_ADDR: aks-mirai-acc-dns-d9c58f46.hcp.westeurope.azmk8s.io
KUBERNETES_PORT: tcp://aks-mirai-acc-dns-d9c58f46.hcp.westeurope.azmk8s.io:443
KUBERNETES_PORT_443_TCP: tcp://aks-mirai-acc-dns-d9c58f46.hcp.westeurope.azmk8s.io:443
KUBERNETES_SERVICE_HOST: aks-mirai-acc-dns-d9c58f46.hcp.westeurope.azmk8s.io
Mounts:
/var/lib/calico from var-lib-calico (ro)
/var/run/secrets/kubernetes.io/serviceaccount from tigera-operator-token-4sv2d (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
var-lib-calico:
Type: HostPath (bare host directory volume)
Path: /var/lib/calico
HostPathType:
tigera-operator-token-4sv2d:
Type: Secret (a volume populated by a Secret)
SecretName: tigera-operator-token-4sv2d
Optional: false
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: :NoExecute op=Exists
:NoSchedule op=Exists
CriticalAddonsOnly op=Exists
Events:
Type Reason Age From Message
Normal Pulled 46m (x711 over 2d18h) kubelet Container image "mcr.microsoft.com/oss/tigera/operator:v1.17.1" already present on machine
Warning BackOff 66s (x16784 over 2d18h) kubelet Back-off restarting failed container
tigera-operator pod failing
AKS cluster 1.20.7
The ports exposed by the containers are not named, nor even explicitly defined, in the Kubernetes resources created by the operator.
When the operator creates a resource (e.g. the felix/calico-node DaemonSet) and it is set to have metrics active (e.g. via a FelixConfiguration CR), the resource should also correctly name and expose the port:
ports:
  - containerPort: 9091
    name: metrics
This is necessary so the prometheus-operator can correctly create the prometheus jobs to monitor the resources (they rely on the ports being available in the metadata).
Metrics are available from the correct port, however the port is not named/defined in the k8s resource.
I could possibly directly patch the DaemonSet, but that'd require me to disable the reconciliation which isn't a solution for production at all.
Another solution would be to not use the Prometheus-Operator and write the monitoring jobs for prometheus manually but that isn't really a solution that should be necessary.
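For context, a prometheus-operator PodMonitor selects its scrape target by the named container port, so without the name there is nothing to reference; a sketch (resource name and labels are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: calico-node-metrics   # illustrative name
  namespace: calico-system
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  podMetricsEndpoints:
    - port: metrics   # refers to the named containerPort requested above
```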
Each operator release should also contain a complete set of manifests required for deployment on supported platforms (like OpenShift 4.x).
Currently, the documentation says that for deployment on OpenShift 4.x I should download a set of files from "https://docs.projectcalico.org/manifests/...", but it is not clearly stated which version will be used, or how to use a different version than the one hardcoded in those manifests (currently: 1.5.0).
I could try to modify the image version after downloading those manifests, but then I'm not sure whether some CRDs have changed (or been added) in the release I'm interested in, or whether I also have to change the image version for the init container (and which one should be used).
It's not possible to configure the path to the CNI bin directory for the Calico deployment. Calico puts the files in /opt/cni/bin for Kubernetes deployments.
If the cluster does not use this path, the files end up in the wrong location and as a result the nodes never become ready.
The CNI bin directory can be specified on the operator deployment or Installation spec.
The CNI bin directory can only ever be /opt/cni/bin, and if a cluster uses anything else, it will fail.
Update the operator to expose configuration options to set the CNI bin directory path (and the config directory wouldn't be a bad idea either, tbh).
a. Manually copy over the files to the correct location on each node in the cluster.
b. Patch Calico Node DaemonSet after the operator has deployed it to change the hostPath.
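A sketch of workaround (b) as a strategic-merge patch (the cni-bin-dir volume name and the example path are assumptions about the rendered DaemonSet, and the operator will likely revert this on its next reconcile):

```yaml
# Apply with (illustrative):
#   kubectl patch daemonset calico-node -n calico-system --patch-file cni-path.yaml
spec:
  template:
    spec:
      volumes:
        - name: cni-bin-dir          # assumed volume name in the rendered DaemonSet
          hostPath:
            path: /usr/libexec/cni   # example non-default CNI bin path
```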
We have a chart that pulls the operator chart as a dependency.
apiVersion: v2
appVersion: "1.0"
description: A Helm chart for tigera-operator
name: tigera-operator
version: "0.0.1"
dependencies:
  - name: tigera-operator
    version: "v3.20.2"
    repository: https://docs.projectcalico.org/charts
It used to work fine, but now when attempting to deploy it we run into:
❯ helm3 repo add project-calico https://docs.projectcalico.org/charts
❯ helm3 dependency build
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "localstack-repo" chart repository
...Successfully got an update from the "hashicorp" chart repository
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "strimzi" chart repository
...Successfully got an update from the "project-calico" chart repository
...Successfully got an update from the "gympass" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Save error occurred: could not find : no matching version
Error: could not find : no matching version
Even the docs mention version 3.20.2 as the latest minor for 3.20:
helm show values projectcalico/tigera-operator --version v3.20.2
That we have access to all versions on Helm.
We only have access up to 3.20.0:
❯ helm3 search repo project-calico -l
NAME CHART VERSION APP VERSION DESCRIPTION
project-calico/tigera-operator v3.20.0 v3.20.0 Installs the Tigera operator for Calico
project-calico/tigera-operator v3.19.2 v3.19.2 Installs the Tigera operator for Calico
project-calico/tigera-operator v3.19.1 v3.19.1 Installs the Tigera operator for Calico
project-calico/tigera-operator v3.19.0 v3.19.0 Installs the Tigera operator for Calico
project-calico/tigera-operator v3.18.4 v3.18.4 Installs the Tigera operator for Calico
project-calico/tigera-operator v3.18.3 v3.18.3 Installs the Tigera operator for Calico
project-calico/tigera-operator v3.18.2 v3.18.2 Installs the Tigera operator for Calico
project-calico/tigera-operator v3.18.1 v3.18.1 Installs the Tigera operator for Calico
project-calico/tigera-operator v3.18.0 v3.18.0 Installs the Tigera operator for Calico
We can't move forward with a new deployment changing configuration in our production environment. For a quick fix we plan on downgrading to 3.20, which is not ideal.
For completeness' sake, here's the output prior to running helm repo update, showing the 3.20.2 version:
❯ helm search project-calico -l
NAME CHART VERSION APP VERSION DESCRIPTION
project-calico/tigera-operator v3.20.2 v3.20.2 Installs the Tigera operator for Calico
I'm using EKS v1.17 and calico v1.13, and tried to upgrade calico to the latest one in aws-eks-vpc-cni repo (Tigera Operator v1.13.2, and calico v1.17).
Since Calico changed to being installed into the calico-system namespace via Tigera Operator, I expected that the old Calico resources in the kube-system namespace would be cleaned up by Tigera Operator.
https://docs.projectcalico.org/maintenance/operator-migration
But after the upgrade, some of the resources still remained in kube-system.
While calico-node seems to be working well, migrating the namespace causes some errors in the operator, so I wonder about the effect of that.
The easiest way is to remove the remaining resources manually, but I don't know if I should do that, since the documentation says we should not touch kube-system resources.
Is the namespace migration not completed? If it's done, can I remove the remaining resources manually?
# Apply calico-operator
$ kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.9/calico-operator.yaml
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
namespace/tigera-operator created
podsecuritypolicy.policy/tigera-operator created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
# Apply calico-crs
$ kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.9/calico-crs.yaml
installation.operator.tigera.io/default created
calico-node in calico-system seems ready
$ kubectl get daemonset calico-node --namespace calico-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
calico-node 3 3 3 3 3 kubernetes.io/os=linux 2m36s
# pod in calico-system
$ kubectl get pod -n calico-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-7f959f6886-bnhv2 1/1 Running 0 3m24s
calico-node-xvjxs 1/1 Running 0 108s
calico-node-xzjr5 1/1 Running 0 2m9s
calico-node-zg5d9 1/1 Running 0 119s
calico-typha-67bbd4b6c8-44zrj 1/1 Running 0 106s
calico-typha-67bbd4b6c8-qzpbv 1/1 Running 0 2m9s
calico-typha-67bbd4b6c8-tgxkl 1/1 Running 0 2m9s
Resources in kube-system:
# Before upgrade
$ kubectl get all -n kube-system | grep calico
pod/calico-node-k7pz5 1/1 Running 0 40s
pod/calico-node-s2zmq 1/1 Running 0 40s
pod/calico-node-xhgw7 1/1 Running 0 40s
pod/calico-typha-7c5b5df5d7-p9xw4 1/1 Running 0 40s
pod/calico-typha-horizontal-autoscaler-869dbcdddb-kjf29 1/1 Running 0 39s
service/calico-typha ClusterIP 172.20.145.26 <none> 5473/TCP 39s
daemonset.apps/calico-node 3 3 3 3 3 beta.kubernetes.io/os=linux 42s
deployment.apps/calico-typha 1/1 1 1 40s
deployment.apps/calico-typha-horizontal-autoscaler 1/1 1 1 40s
replicaset.apps/calico-typha-7c5b5df5d7 1 1 1 40s
replicaset.apps/calico-typha-horizontal-autoscaler-869dbcdddb 1 1 1 40s
# After upgrade
$ kubectl get all -n kube-system | grep calico
pod/calico-typha-horizontal-autoscaler-869dbcdddb-kjf29 1/1 Running 0 28m
service/calico-typha ClusterIP 172.20.145.26 <none> 5473/TCP 28m
deployment.apps/calico-typha-horizontal-autoscaler 1/1 1 1 28m
replicaset.apps/calico-typha-horizontal-autoscaler-869dbcdddb 1 1 1 28m
$ kubectl logs tigera-operator-6db99fb878-pgpps -n tigera-operator
2021/10/15 08:35:55 [INFO] Version: v1.13.2
2021/10/15 08:35:55 [INFO] Go Version: go1.14.4
2021/10/15 08:35:55 [INFO] Go OS/Arch: linux/amd64
{"level":"info","ts":1634286955.861958,"logger":"setup","msg":"Checking type of cluster","provider":"EKS"}
{"level":"info","ts":1634286955.8648527,"logger":"setup","msg":"Checking if TSEE controllers are required","required":false}
{"level":"info","ts":1634286955.9689445,"logger":"setup","msg":"starting manager"}
I1015 08:35:55.969173 1 leaderelection.go:243] attempting to acquire leader lease tigera-operator/operator-lock...
{"level":"error","ts":1634286955.9694507,"logger":"typha_autoscaler","msg":"Failed to autoscale typha","error":"could not get number of nodes: the cache is not started, can not read objects","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*typhaAutoscaler).start.func1\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/typha_autoscaler.go:122"}
I1015 08:35:55.986564 1 leaderelection.go:253] successfully acquired lease tigera-operator/operator-lock
{"level":"info","ts":1634286956.0697503,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1634286956.1701388,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1634286956.270546,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /V1, Kind=ConfigMap"}
{"level":"info","ts":1634286956.3710735,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1634286956.4715445,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1634286956.5719342,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1634286956.6724467,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1634286956.7729394,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1634286956.873332,"logger":"controller","msg":"Starting EventSource","controller":"tigera-installation-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1634286956.973727,"logger":"controller","msg":"Starting Controller","controller":"tigera-installation-controller"}
{"level":"info","ts":1634286956.9738533,"logger":"controller","msg":"Starting workers","controller":"tigera-installation-controller","worker count":1}
{"level":"info","ts":1634286956.9739733,"logger":"controller_installation","msg":"Installation config not found","Request.Namespace":"tigera-operator","Request.Name":"default-token-jt62m"}
{"level":"info","ts":1634286956.974015,"logger":"controller_installation","msg":"Installation config not found","Request.Namespace":"tigera-operator","Request.Name":"tigera-operator-token-2khgm"}
{"level":"info","ts":1634287131.4939146,"logger":"migration_convert","msg":"did not detect kube-controllers"}
{"level":"info","ts":1634287131.4939778,"logger":"migration_convert","msg":"did not detect kube-controllers"}
{"level":"info","ts":1634287132.1048486,"logger":"render","msg":"Creating certificate secret","secret":"node-certs"}
{"level":"info","ts":1634287132.2683523,"logger":"render","msg":"Creating certificate secret","secret":"typha-certs"}
{"level":"error","ts":1634287133.0472322,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287133.2847915,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287133.5205033,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287133.7560074,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287133.9838965,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287134.2245877,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287134.4569678,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287134.6893094,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287134.9266615,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287135.1565535,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287135.3856297,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287135.6329298,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287135.8653526,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287136.2036061,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287136.440355,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287136.6671417,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287136.8996701,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287137.133558,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287137.3731642,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287137.6077807,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287137.8459415,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287138.0817683,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287138.3175223,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287138.5825086,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287138.9529684,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287139.186407,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287139.4511397,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287140.4645057,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287140.7104788,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287140.9744563,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287143.2503622,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287143.4994097,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287143.766432,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"failed to wait for operator typha deployment to be ready: waiting for typha to have 1 replicas, currently at 0","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
2021/10/15 08:39:08 [INFO] Patch NodeSelector with: [{"op":"add","path":"/spec/template/spec/nodeSelector/projectcalico.org~1operator-node-migration","value":"pre-operator"}]
{"level":"error","ts":1634287148.6741517,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"the kube-system node DaemonSet is not ready with the updated nodeSelector: not all pods are ready yet: 3/3","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287148.9497187,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"the kube-system node DaemonSet is not ready with the updated nodeSelector: not all pods are ready yet: 2/3","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287149.223919,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"the kube-system node DaemonSet is not ready with the updated nodeSelector: not all pods are ready yet: 2/3","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287159.190339,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"","Request.Name":"default","error":"the kube-system node DaemonSet is not ready with the updated nodeSelector: not all pods are ready yet: 2/3","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287159.4546425,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"typha-certs","error":"the kube-system node DaemonSet is not ready with the updated nodeSelector: not all pods are ready yet: 2/3","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1634287159.7325585,"logger":"controller_installation","msg":"error migrating resources to calico-system","Request.Namespace":"tigera-operator","Request.Name":"node-certs","error":"the kube-system node DaemonSet is not ready with the updated nodeSelector: not all pods are ready yet: 2/3","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:979\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:866\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"info","ts":1634287195.9691308,"logger":"typha_autoscaler","msg":"Updating typha replicas from 1 to 3"}
We have to use apiregistration.k8s.io/v1 instead of apiregistration.k8s.io/v1beta1.
The fix is already included in the release-v1.18 branch.
I am not familiar with tigera-operator's versioning policy.
However, according to https://kubernetes.io/docs/reference/using-api/deprecation-guide/#apiservice-v122,
we have to release tigera/operator v1.18 with or before Kubernetes v1.22.
There should be no side effects, because apiregistration.k8s.io/v1 has been available since Kubernetes v1.10.
The Tigera operator should work on Kubernetes v1.22.
The Tigera operator fails on Kubernetes v1.22.0-alpha.3.
Release tigera/operator v1.18
Looks like it's currently impossible to create the calico-system namespace with a custom label.
The Tigera operator should merge the labels of an existing namespace (or any other resource) with the desired labels.
Instead, it ignores the labels on the existing namespace and overrides them with the desired ones.
Annotations are already merged in the mergeState util function (operator/pkg/controller/utils/component.go, lines 214 to 217 in 756210b); we could easily do the same for labels.
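A minimal sketch of the proposed behavior (the helper name is hypothetical, not the actual mergeState code): overlay the operator's desired labels on top of the existing ones, so unmanaged keys survive while the operator still wins on conflicts, mirroring how annotations are already merged.

```go
package main

import "fmt"

// mergeLabels overlays the desired labels on top of the existing ones.
// Existing keys not managed by the operator are preserved; desired keys
// win on conflict.
func mergeLabels(existing, desired map[string]string) map[string]string {
	merged := make(map[string]string, len(existing)+len(desired))
	for k, v := range existing {
		merged[k] = v
	}
	for k, v := range desired {
		merged[k] = v
	}
	return merged
}

func main() {
	// A user-supplied label on the pre-created namespace survives the merge.
	existing := map[string]string{"openpolicyagent.org/webhook": "ignore"}
	desired := map[string]string{"name": "calico-system"}
	fmt.Println(mergeLabels(existing, desired))
}
```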
The calico-system namespace should keep its openpolicyagent.org/webhook: ignore label; instead, openpolicyagent.org/webhook is gone.
We're trying to install Calico using the Tigera operator. However, the operator has issues creating deployments in the calico-system namespace, because we have OPA running and configured to block deployment creation without the app.kubernetes.io/name label.
We were expecting this and created the calico-system namespace beforehand with the label openpolicyagent.org/webhook: ignore (namespaces carrying that label bypass the OPA admission webhook).
However, that label is removed by the tigera-operator.
RBAC lacks the patch verb.
We are managing a pipeline that provides on-demand deployments of preconfigured EKS clusters. The created EKS clusters are initially deployed with a small footprint, which in a two-AZ deployment is only two nodes (one managed node group per AZ with one node each). From there, users can scale up or use cluster-autoscaling according to their needs. To follow the recommended deployment approach for Calico (as also recommended by AWS: https://docs.aws.amazon.com/eks/latest/userguide/calico.html), we switched from the helm-based deployment to the operator-based installation method.
When a new EKS version is released, we want to upgrade these two-node clusters to the new version, which includes an upgrade of the control plane and the managed worker nodes. The expectation is that this also works with tigera-operator installed in the cluster.
When we upgrade the managed worker nodes of a two-node EKS cluster (e.g. from version 1.20 to 1.21), the worker node upgrade fails in Terraform with a PodEvictionFailure after almost 30 minutes.
Error: error waiting for EKS Node Group (...) version update (...): EKS Node Group (...) update (...) status (Failed) not successful: Errors:
Error 1: Code: PodEvictionFailure / Message: Reached max retries while trying to evict pods from nodes in node group xyz
The assumed cause here is that for a two-node cluster the typha-autoscaler is "incompatible" with the EKS managed worker node upgrade behavior which is described here: https://docs.aws.amazon.com/eks/latest/userguide/managed-node-update-behavior.html
It says:
...
- Checks the nodes in the node group for the eks.amazonaws.com/nodegroup-image label, and applies a eks.amazonaws.com/nodegroup=unschedulable:NoSchedule taint on all of the nodes in the node group that aren't labeled with the latest AMI ID. This prevents nodes that have already been updated from a previous failed update from being tainted.
- Randomly selects up to max nodes to upgrade in parallel.
- Cordons the node after all of the pods are evicted. This is done so that the service controller doesn't send any new requests to this node and removes this node from its list of healthy, active nodes.
- ...
Only when we manually kubectl cordon the nodes carrying the eks.amazonaws.com/nodegroup=unschedulable:NoSchedule taint while the worker node upgrade is performed does the upgrade succeed.
If the typha-autoscaler (https://github.com/tigera/operator/blob/master/pkg/controller/installation/typha_autoscaler.go#L243) excluded tainted nodes from the node count used to size typha, like it already excludes unschedulable nodes, the upgrade could work without manual intervention.
We also noticed a similar issue with cluster-autoscaler, as described in #1295. Excluding tainted nodes from the typha-autoscaler calculation should solve that case as well, because the cluster-autoscaler documentation says:
What happens when a non-empty node is terminated? As mentioned above, all pods should be migrated elsewhere. Cluster Autoscaler does this by evicting them and tainting the node, so they aren't scheduled there again.
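The proposed change can be sketched as follows; the Node and Taint types and the ignoredTaints set are simplified stand-ins for illustration, not the operator's actual code:

```go
package main

// Sketch of a taint-aware node count for the typha autoscaler: skip nodes
// that are unschedulable OR carry a drain-style NoSchedule taint (such as
// the one EKS applies during managed node-group upgrades).

type Taint struct {
	Key    string
	Effect string // "NoSchedule", "NoExecute", ...
}

type Node struct {
	Unschedulable bool
	Taints        []Taint
}

// ignoredTaints lists taint keys that signal a node is being drained and
// should not count toward the typha replica calculation (example keys).
var ignoredTaints = map[string]bool{
	"eks.amazonaws.com/nodegroup":    true, // EKS managed node-group upgrade
	"ToBeDeletedByClusterAutoscaler": true, // cluster-autoscaler scale-down
}

func countsForTypha(n Node) bool {
	if n.Unschedulable {
		return false
	}
	for _, t := range n.Taints {
		if t.Effect == "NoSchedule" && ignoredTaints[t.Key] {
			return false
		}
	}
	return true
}

// schedulableNodeCount returns the number of nodes the autoscaler should
// size typha against.
func schedulableNodeCount(nodes []Node) int {
	count := 0
	for _, n := range nodes {
		if countsForTypha(n) {
			count++
		}
	}
	return count
}
```

With this, a two-node cluster mid-upgrade would count only the untainted node, so the autoscaler would not fight the eviction.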
Based on the documentation, controlPlaneNodeSelector applies to all components that aren't DaemonSets, so it should apply to the Typha deployment.
The controlPlaneNodeSelector doesn't apply to the Typha deployment. I suspect this might be because of the typhaAffinity field, but affinity and node selectors can be used in parallel.
Use controlPlaneNodeSelector with the Typha deployment.
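As an illustration, an Installation like the following (the selector value here is just an example) would be expected to place Typha on control-plane nodes, but today the selector is not applied to the Typha deployment:

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  controlPlaneNodeSelector:
    node-role.kubernetes.io/control-plane: ""
```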
n/a
n/a
n/a
Tigera-operator behaving normally and running on arm64 clusters as intended.
Cannot use tigera-operator on arm64 cluster because of incompatibility with arm64.
The solution would be to deploy a tigera-operator arm64 capable Docker image to the https://quay.io/repository/tigera/operator registry.
I recently started a Raspberry Pi cluster at home to play with k3s and bare-metal Kubernetes. For my CNI, I chose Calico because I heard great things about it, and then tigera-operator failed to run on my cluster. Calico still works and I'm not sure if I need the tigera operator at all, but the guide I followed (https://docs.projectcalico.org/getting-started/kubernetes/k3s/quickstart) featured it.
Hi, I work on the Red Hat Operator Enablement team. Upon review of the Tigera operator, we found the documentation to be severely lacking the information required for successful use of the operator; please see the attached screenshots.
I provided an example of an Operator providing some good documentation as a reference.
Also there is a bug open for this and if required we can provide reasonable assistance where necessary to get this resolved: https://bugzilla.redhat.com/show_bug.cgi?id=1853962
The Installation CR deployed by tigera-operator is missing "conditions" as part of its status. This makes it difficult to verify the applied resources; as an example, the cli-utils status poller (through the applier) fails to fetch the status of this resource because the condition is not satisfied.
I understand that the status of the calico installation can be tracked through the tigerastatus CR, but is it possible to have "conditions" added to installations for consistency?
kubectl get installations -o json | jq -r ".items[0].status"
{
"mtu": 1440,
"variant": "Calico"
}
"conditions" should be available as part of the status
Status has just mtu and variant and missing "conditions"
add "conditions" to status in addition to mtu and variant
This makes it difficult to verify the applied resources.
APIService
The apiregistration.k8s.io/v1beta1 API version of APIService will no longer be served in v1.22.
Migrate manifests and API clients to use the apiregistration.k8s.io/v1 API version, available since v1.10.
All existing persisted objects are accessible via the new API
No notable changes
It seems like the tigera operator isn't happy on 1.22...
Is that because of a feature gate I need to enable?
2021/07/08 14:36:49 [INFO] Version: v1.17.4
2021/07/08 14:36:49 [INFO] Go Version: go1.15.2
2021/07/08 14:36:49 [INFO] Go OS/Arch: linux/amd64
{"level":"info","ts":1625755009.7462676,"logger":"setup","msg":"Checking type of cluster","provider":""}
{"level":"info","ts":1625755009.7475083,"logger":"setup","msg":"Checking if TSEE controllers are required","required":false}
{"level":"info","ts":1625755009.856925,"logger":"setup","msg":"starting manager"}
{"level":"info","ts":1625755009.8569715,"logger":"typha_autoscaler","msg":"Starting typha autoscaler","syncPeriod":10}
I0708 14:36:49.858270 1 leaderelection.go:243] attempting to acquire leader lease tigera-operator/operator-lock...
I0708 14:37:06.950468 1 leaderelection.go:253] successfully acquired lease tigera-operator/operator-lock
{"level":"info","ts":1625755026.9514308,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1625755027.0525558,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1625755027.154822,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=ConfigMap"}
{"level":"info","ts":1625755027.256324,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=ConfigMap"}
{"level":"info","ts":1625755027.2566109,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1625755027.3575156,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1625755027.4588928,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1625755027.559794,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1625755027.6607502,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1625755027.7658336,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"error","ts":1625755027.868342,"logger":"controller-runtime.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"APIService.apiregistration.k8s.io","error":"no matches for kind \"APIService\" in version \"apiregistration.k8s.io/v1beta1\"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/source/source.go:117\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:159\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:205\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:691"}
{"level":"error","ts":1625755027.8686714,"logger":"setup","msg":"problem running manager","error":"no matches for kind \"APIService\" in version \"apiregistration.k8s.io/v1beta1\"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nmain.main\n\t/go/src/github.com/tigera/operator/main.go:228\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:204"}
As long as nothing too outlandish is being done, the migration should detect the existing options and migrate them to the calico operator.
While migrating I hit a number of areas where it wouldn't let me proceed while doing what seem to be very reasonable things:
I should be able to get around it, but the last one is a particular issue because not all of my nodes have identical hardware, so autodetecting by interface name isn't a good option.
It doesn't look like it's possible for me to migrate without losing at least ipv6 support until after the migration; the IP_AUTODETECTION_METHOD issue makes migration iffy at best, since if I make a mistake the networking between my pods could get messed up; I use zerotier on my nodes to allow me to communicate with them from anywhere and sometimes calico tries to use that as the interface.
Ubuntu 20.04, kubeadm cluster 1.22.0
As discussed in the Calico documentation, typha is only recommended in large installations: https://github.com/projectcalico/typha#when-should-i-use-typha. My use case has much smaller installations with far less than 50 nodes.
Have a way to disable the typha component in the InstallationSpec.
Add an API field that allows enable/disable for each component.
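One possible shape for such a field, purely as a sketch (the `components` block and `enabled` flag below are hypothetical and do not exist in the current InstallationSpec):

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  components:        # hypothetical field
    typha:
      enabled: false # hypothetical: skip rendering the typha deployment
```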
I am attempting to install calico with typha disabled using the operator.
The default installation of calico creates a new namespace called calico-system. Some organizations require calico to be installed in an existing namespace, but the current Installation CRD does not support this.
The installation CRD should probably have a field for installing calico in an existing namespace
The calico-system namespace is always created by the operator.
We do not want to manage another namespace only for calico operations, so we cannot deploy the tigera-operator until the Installation spec provides the flexibility to install into an existing namespace.
EKS 1.19
The calico-node daemonset should always be scheduled onto a node.
The calico-node pods aren't always scheduled, because the calico-priority priority class is not higher than system-cluster-critical.
Use the system-node-critical priority class for the calico-node daemonset. Or, even better, allow the priority classes to be configured so we can use system-cluster-critical for the other Calico components.
Deploy resources onto the cluster with system-cluster-critical priority to saturate a node. This happens when the resources are in place before the calico-node pod, but descheduling the pod should also reproduce it.
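A minimal way to reproduce the saturation is a deployment like the following, with requests sized to fill the node's allocatable CPU (the replica count and request values are placeholders to adjust per node size):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: saturate
spec:
  replicas: 4          # placeholder: enough replicas to fill the node
  selector:
    matchLabels:
      app: saturate
  template:
    metadata:
      labels:
        app: saturate
    spec:
      priorityClassName: system-cluster-critical
      containers:
        - name: pause
          image: k8s.gcr.io/pause:3.5
          resources:
            requests:
              cpu: "1" # placeholder: sized to saturate allocatable CPU
```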
This is currently blocking our adoption of the Tigera operator.
AWS EKS v1.21, v1.20, v1.19 & v1.18
Hi,
I am deploying the tigera operator, but I see that resources can be set only for a predefined list of components in the Installation CRD (Node, Typha, KubeControllers).
Most of the other components don't have any resource settings. My issue is especially with the fluentd-node daemonset deployed in tigera-fluentd, which can be quite hungry. From
operator/pkg/render/fluentd.go
Line 357 in bf52ad0
The current workaround is to change the daemonset resources and annotate the daemonset so the operator doesn't revert it, but this is not very clean.
Thanks
When using the v3.21 operator to install calico on a K3D cluster, the pod network is failing to start. This bug is a result of investigations done with the k3d team at rancher. k3d-io/k3d#898
The pod network should be up and running successfully in all namespaces. All pods are in the running state.
The calico-nodes are able to run without issue but other containers are stuck in the ContainerCreating state (coredns, metrics, calico-kube-controller)
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tigera-operator tigera-operator-7dc6bc5777-jqgj6 1/1 Running 0 6m36s 172.29.0.3 k3d-test-cluster-3-21-server-0 <none> <none>
calico-system calico-typha-786fc79b-hm5sr 1/1 Running 0 6m17s 172.29.0.3 k3d-test-cluster-3-21-server-0 <none> <none>
calico-system calico-kube-controllers-78cc777977-trgbz 0/1 ContainerCreating 0 6m17s <none> k3d-test-cluster-3-21-server-0 <none> <none>
kube-system metrics-server-86cbb8457f-59s6k 0/1 ContainerCreating 0 6m36s <none> k3d-test-cluster-3-21-server-0 <none> <none>
kube-system local-path-provisioner-5ff76fc89d-w7bf6 0/1 ContainerCreating 0 6m36s <none> k3d-test-cluster-3-21-server-0 <none> <none>
kube-system coredns-7448499f4d-7rwx9 0/1 ContainerCreating 0 6m36s <none> k3d-test-cluster-3-21-server-0 <none> <none>
calico-system calico-node-99jc6 1/1 Running 0 6m17s 172.29.0.3 k3d-test-cluster-3-21-server-0 <none> <none>
When describing the stuck pods, I see this in its events:
$ kubectl describe pod/coredns-7448499f4d-7rwx9 -n calico-system
Warning FailedCreatePodSandBox 6s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "51947047f29820ea93c486fe4c18f5a31e9c9c9418e859e320b8d3b2c43bd383": netplugin failed with no error message: fork/exec /opt/cni/bin/calico: no such file or directory
Based on the error above, I went to check /opt/cni/bin/calico to see if the calico binary existed in the container, which it does:
glen@glen-tigera: $ docker exec -ti k3d-test-cluster-3-21-server-0 /bin/sh
/ # ls
bin dev etc k3d lib opt output proc run sbin sys tmp usr var
/ # cd /opt/cni/bin/
/opt/cni/bin # ls -a
. .. bandwidth **calico** calico-ipam flannel host-local install loopback portmap tags.txt tuning
CNI Config Yaml:
kubectl get cm cni-config -n calico-system -o yaml
apiVersion: v1
data:
config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"datastore_type": "kubernetes",
"mtu": 0,
"nodename_file_optional": false,
"log_level": "Info",
"log_file_path": "/var/log/calico/cni/cni.log",
"ipam": { "type": "calico-ipam", "assign_ipv4" : "true", "assign_ipv6" : "false"},
"container_settings": {
"allow_ip_forwarding": true
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"k8s_api_root":"https://10.43.0.1:443",
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "bandwidth",
"capabilities": {"bandwidth": true}
},
{"type": "portmap", "snat": true, "capabilities": {"portMappings": true}}
]
}
kind: ConfigMap
metadata:
creationTimestamp: "2021-12-22T16:00:02Z"
name: cni-config
namespace: calico-system
ownerReferences:
- apiVersion: operator.tigera.io/v1
blockOwnerDeletion: true
controller: true
kind: Installation
name: default
uid: 90769081-24a2-440d-9666-a9c3b94ebd34
resourceVersion: "635"
uid: 609157c5-c43b-42d9-bb5b-7053a8673a49
This is only occurring in v3.21 of the operator. I tested prior versions of the operator and they set up the pod network successfully. See k3d-calico-operator-install-findings.txt. The issue likely lies in changes between v3.20 and v3.21 of the operator.
k3d cluster create "test-cluster-3-21" --k3s-arg "--flannel-backend=none@server:*" --k3s-arg "--no-deploy=traefik@server:*"
kubectl apply -f https://docs.projectcalico.org/archive/v3.21/manifests/tigera-operator.yaml
curl -L https://docs.projectcalico.org/archive/v3.21/manifests/custom-resources.yaml > k3d-custom-res.yaml
yq e '.spec.calicoNetwork.containerIPForwarding="Enabled"' -i k3d-custom-res.yaml
kubectl apply -f k3d-custom-res.yaml
This should try to install calico through the operator on your k3d cluster with IP forwarding enabled.
kubectl get pods -A
Delivery engineering wants to be able to support k3d provisioning and install in Banzai to expand our E2E coverage of supported provisioners. This would help our engineering team for testing their features on a local k3d cluster as it is much faster to setup.
OS: GNU/Linux
Kernel Version: 20.04.2-Ubuntu SMP
Kernel Release: 5.11.0-40-generic
Processor/HW Platform/Machine Architecture: x86_64
The Tigera operator running in AKS when selecting Calico causes a lot of 404s when trying to delete the following resource /apis/operator.tigera.io/v1/tigerastatuses/apiserver
We are getting over 17,500 404s in 24h from the operator trying to delete that exact same resource; those shouldn't be present.
0 404s in 24h
17k 404s in 24h
Issue running locally due to documentation issues
Users should be able to successfully deploy calico via the tigera operator by following the steps in the README for "Running it locally".
Executing the steps mentioned in the README leads to error due to incorrect path
Issues:
KUBECONFIG=./kubeconfig.yaml go run ./cmd/manager => incorrect path
kubectl create -f ./deploy/crds/operator_v1_installation_cr.yaml => incorrect path
Fix the documentation with correct path and validate the deployment locally
Follow the steps mentioned under "Running it Locally"
We make use of pod labels and annotations to tag metrics and configure checks for pods with DataDog. However, because the operator manages all of the underlying Kubernetes resources, there isn't a good way to automatically add the labels and configuration during the installation. We would like to see support for passing additional labels and annotations to each of the underlying components that get deployed by this operator, in particular for the Node, Typha, and KubeControllers components.
The operator should allow passing user specified custom labels and annotations to the pods
We are currently not able to propagate additional labels or annotations to the operator-managed pods.
The Installation CRD should have a field like:
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
name: default
namespace: management
spec:
componentLabels:
- componentName: Node
podLabels:
foo: bar
componentAnnotations:
- componentName: Typha
podAnnotations:
cool: awesome
N/A
We would like to add additional labels and annotations to the pods for monitoring and running checks through DataDog
What use cases are supported by the Calico Operator, and how are they achieved?
Some details/steps for these would be very helpful, like version upgrades or changing to Calico Enterprise.
Calling kubectl -n tigera-operator get lease should return a lease named for the operator, not a generically named one.
NAME HOLDER AGE
tigera-operator-lock tigera-operator-5dcfb9df8c-bfkvv_31aa92d3-9894-4720-88b0-9414067296b3 89d
The LeaderElectionID value hasn't been customized so we get a generic lock name that could collide with another operator and is ambiguous.
NAME HOLDER AGE
operator-lock tigera-operator-5dcfb9df8c-bfkvv_31aa92d3-9894-4720-88b0-9414067296b3 89d
I'd be happy to open a PR to rename the LeaderElectionID value to tigera-operator-lock.
n/a
n/a
n/a
Calico-node containers should show as ready, especially when healthy; we see this in the logs:
2021-05-25 23:19:36.079 [INFO][66] felix/health.go 133: Health of component changed lastReport=health.HealthReport{Live:true, Ready:false} name="int_dataplane" newReport=&health.HealthReport{Live:true, Ready:true}
## Current Behavior
On some busy machines we get unready with no data from the readiness probe. This appears to be a timeout, because the problem goes away if we squelch the operator and manually change the probe timeout to 5. The current probe is:
readinessProbe:
exec:
command:
- /bin/calico-node
- -felix-ready
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 2m20s (x6668 over 19h) kubelet, aks-kub8-10616369-vmss000003 Readiness probe failed:
## Possible Solution
Just bump timeoutSeconds to 3-5 seconds. The period is 10, so that should be fine.
Alternatively, use an http probe instead of /bin/calico-node -felix-ready.
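A sketch of the suggested probe, identical to the rendered one except for timeoutSeconds (5 here is just the example value from above):

```yaml
readinessProbe:
  exec:
    command:
      - /bin/calico-node
      - -felix-ready
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5   # bumped from the default of 1
```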
## Steps to Reproduce (for bugs)
I don't have anything great. Maybe get some machines and stress them.
## Context
The nodes actually seem fine, but customers complain when they see a not-ready daemonset, and it could affect rollouts in the future.
## Your Environment
Random azure kubernetes service customer.
When installing calico from a private registry by digest as described in the imageset documentation, the tigera Installation object claims that it requires calico/windows-upgrade to be present in the imageSet.
The windows-related image should not be required when installing calico on Linux.
When calico/windows-upgrade is not provided, the tigera installation degrades.
Add a calico/windows-upgrade entry with a fake digest to my imageSet.
Follow the instructions from https://docs.projectcalico.org/maintenance/image-options/imageset
I'd like to install calico from a private registry.
kubernetes 1.21
calico 3.21
If I set natOutgoing in the IPv6 pool of the installation, it should work: the CALICO_IPV6POOL_NAT_OUTGOING environment variable of the calico-node pods should be set to true.
Example installation:
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
name: default
spec:
calicoNetwork:
ipPools:
- blockSize: 26
cidr: 100.64.0.0/16
encapsulation: VXLANCrossSubnet
natOutgoing: Enabled
nodeSelector: all()
- blockSize: 122
cidr: fc00::/48
natOutgoing: Enabled
nodeSelector: all()
The calico-node pods don't have CALICO_IPV6POOL_NAT_OUTGOING set, even though natOutgoing of the IPv6 pool in the installation is Enabled.
As a result, IPv6 NAT doesn't work. I have to manually edit the IPPool configuration to enable it.
By default, the NAT Outgoing setting for the IPv6 pool created at startup is false (see the manual). However, the operator only sets CALICO_IPV6POOL_NAT_OUTGOING to false, and only when NATOutgoing in an IPv6 pool is disabled. Thus CALICO_IPV6POOL_NAT_OUTGOING is either absent or false, so NAT for IPv6 pools is never enabled.
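A minimal sketch of the intended behavior, with a simplified stand-in type (not the operator's actual render code):

```go
package main

// Sketch: derive CALICO_IPV6POOL_NAT_OUTGOING from the IPv6 pool's
// NATOutgoing setting in both directions, rather than only emitting
// "false" when NAT is disabled. IPPool here is a simplified stand-in.

type IPPool struct {
	CIDR        string
	NATOutgoing string // "Enabled" or "Disabled"
}

// ipv6NATOutgoingEnv returns the env var value for the calico-node
// container, or "" when there is no IPv6 pool configured.
func ipv6NATOutgoingEnv(pool *IPPool) string {
	if pool == nil {
		return ""
	}
	if pool.NATOutgoing == "Enabled" {
		return "true"
	}
	return "false"
}
```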
I've created a PR to fix this (#1038), but the build failed on Semaphore, while the tests all passed on my machine.
Apply an IPv6 installation with natOutgoing
enabled.
The container image does not support the armhf architecture. I am unable to run it on my Raspberry Pi 3 b+
The container image only supports the amd64 architecture. I am unable to run it on my Raspberry Pi or AWS Graviton instances.
Ideally it would be great to have a multi-architecture "manifest list" image which allows seamless running on multiple CPU architectures (including arm64): https://docs.docker.com/registry/spec/manifest-v2-2/
It looks like binaries for multiple architectures are getting built but only a single image. It might be as easy as using Docker Buildx.
I expect to be able to start the operator in my cluster.
I'm trying to run kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml and getting an image pull backoff. Inspecting the manifest, it looks like the image it's trying to use is quay.io/tigera/operator:v1.10.8. Unfortunately the "v1.10.8" tag doesn't exist in Quay, although "1.10.8" does. Did someone miss a "v" somewhere in the build process?
Tag the image correctly :)
Container image - quay.io/tigera/operator does not support arm64
quay.io/tigera/operator to work on arm64 architecture
Getting the error
standard_init_linux.go:219: exec user process caused: exec format error
Create a container image using arm64 architecture
Create a cluster on AWS using graviton nodes
I have a hybrid cluster and I got it to work by adding a nodeSelector for the operator to run on X86_64, but I was planning to only have ARM nodes in the future.
AWS X86_64 and Graviton arm64 nodes.
First, let me clarify that I'm aware that my cluster is misconfigured =] Unfortunately it seems there is no way to fix that without completely rebuilding the cluster, which I don't have time to do right now.
I did not realize that calico's IPPool
resources still needed to be within the cluster pod CIDR, so I'm using pools that are outside of it -- e.g. 172.25.64.0/20
when my cluster CIDR is 10.172.0.0/16
. I didn't realize it mattered, and since I'm using ToR bgp peering 99% of everything still works fine. However, after migrating to the tigera operator (during which I attempted to change my cluster CIDR and discovered that it's either not possible or at least more difficult than expected) the operator won't update anything because it's constantly in an "error" state:
Could not resolve CalicoNetwork IPPool and kubeadm configuration: IPPool 172.25.64.0/20 is not within the platform's configured pod network CIDR(s) [10.172.0.0/16 2607:fa18:1000:21::10:0/108]
I'm using VPNs and linking multiple clusters together, so unfortunately moving my IPPools isn't an option.
A warning should be thrown, but there should be a way to tell it "yeah, I know this is wrong, but that's how everything is set up so please ignore it"
I absolutely 100% agree that there should be warnings to tell uneducated folks like me that they are Doing Something Stupid, but since it can actually work it should let you do it if you really want.
The Tigera operator is completely nonfunctional due to the error state.
I think I've clarified this above, but it seems like it should be an easy fix.
Kubeadm bare metal cluster, 7 servers; opnsense ToR routers with bgp peering. Dual IPv4/IPv6 stack.
The metrics port in tigera-operator should be configurable, and the default value should be different from 8383, which is used by nmstate-handler pods from CNV (Container Native Virtualization) in Red Hat OpenShift 4.x. Instead it is hardcoded to 8383 here: https://github.com/tigera/operator/blob/master/pkg/daemon/daemon.go#L35
Currently it is not possible to properly deploy CNV on Red Hat OpenShift 4.x with Calico and tigera-operator. Both tigera-operator and one of the nmstate-handler pods (part of CNV) run with host networking and try to bind to port 8383. Since tigera-operator is deployed first, nmstate-handler keeps crashing and the CNV deployment cannot finish properly.
The operator should be able to be installed into any namespace.
When installing the operator into the kube-system namespace the operator errors because it can't find the tigera-operator namespace.
{"level":"error","ts":1628066134.605089,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Reconciler error","name":"default","namespace":"","error":"namespaces "tigera-operator" not found","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:267\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:198\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99"}
Remove the hardcoded namespace.
Change the namespace in the deployment manifests.
I'd like more control over how the operator runs.
AWS EKS v1.21, v1.20, v1.19 & v1.18
When installing the Helm chart, the user expects the namespace to be configurable via the --namespace flag. However, the current version hard-codes the namespace and doesn't respect the convention.
https://helm.sh/docs/faq/changes_since_helm2/#release-names-are-now-scoped-to-the-namespace
If you render the current chart version, you'll see that many resources are created with the hard-coded namespace tigera-operator.
helm template calico projectcalico/tigera-operator --namespace dummy
Update the Helm chart to respect the namespace provided by the user via --namespace, and recommend the usage of tigera-operator in the docs. Also, drop the creation of the namespace and recommend the --create-namespace flag instead.
The way the chart is written today is breaking the GitOps workflow using FluxCD.
When installing the Tigera operator on a fresh EKS cluster with AWS VPC CNI networking, the BPF data plane enabled, and kube-proxy disabled, the operator should be able to connect to the Kubernetes API server using a domain name and run successfully.
The operator gets stuck in a crash loop due to DNS lookup timeouts. The DNS lookup is attempting to use CoreDNS, which is not yet running as the AWS VPC CNI pod is not running. The AWS VPC CNI pod is not running because it cannot connect to the API server without kube-proxy or calico running. The operator should be able to connect to the API server using a domain name without CoreDNS as per the docs - https://projectcalico.docs.tigera.io/maintenance/ebpf/enabling-bpf#configure-calico-to-talk-directly-to-the-api-server
Create an EKS cluster on Kubernetes 1.21 with the default AWS VPC CNI networking enabled. Do not scale out any nodes.
Apply the Tigera Operator to the cluster, including CRDs so the Installation resource will get created successfully:
helm template calico projectcalico/tigera-operator --include-crds --values=helm-values.yml > all.yml
kubectl apply -f all.yml
My helm-values.yml file looks like:
installation:
kubernetesProvider: "EKS"
calicoNetwork:
linuxDataplane: "BPF"
hostPorts: null
Disable kube-proxy as per the instructions in https://projectcalico.docs.tigera.io/maintenance/ebpf/enabling-bpf. Set up the kubernetes-services-endpoint configmap with the EKS API server endpoint; this can be pulled using kubectl get configmap -n kube-system kube-proxy -o yaml | grep server.
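For reference, the configmap from that step looks roughly like this; the host value is a placeholder for your cluster's own EKS endpoint:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubernetes-services-endpoint
  namespace: tigera-operator
data:
  KUBERNETES_SERVICE_HOST: "ABCDEF0123456789.gr7.us-west-2.eks.amazonaws.com"  # placeholder
  KUBERNETES_SERVICE_PORT: "443"
```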
Scale up one node for the operator to run on.
When the operator comes up, it attempts to connect to the EKS API server endpoint, which requires DNS. The operator runs with the ClusterFirstWithHostNet DNS policy, so it tries to resolve the name using cluster DNS. Because cluster DNS isn't running, this fails, and the operator degrades into a crash loop.
EKS using Kubernetes 1.21 with AWS VPC CNI networking