Comments (6)
I've manually deleted all the calico-related resources that remained in the kube-system
namespace, and it completed successfully. Calico managed by tigera-operator is working in the calico-system
namespace. This is a workaround for now.
kubectl delete deploy calico-typha-horizontal-autoscaler -n kube-system
kubectl delete cm calico-typha-horizontal-autoscaler -n kube-system
kubectl delete role typha-cpha -n kube-system
kubectl delete rolebinding typha-cpha -n kube-system
kubectl delete sa typha-cpha -n kube-system
kubectl delete pdb calico-typha -n kube-system
kubectl delete svc calico-typha -n kube-system
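For completeness, the same cleanup as a single re-runnable loop (just a sketch assuming the exact resource names above; --ignore-not-found makes it safe to repeat):
# Delete the EKS-added typha autoscaler pieces and the leftover typha service:
for r in deploy/calico-typha-horizontal-autoscaler cm/calico-typha-horizontal-autoscaler \
         role/typha-cpha rolebinding/typha-cpha sa/typha-cpha \
         pdb/calico-typha svc/calico-typha; do
  kubectl delete "$r" -n kube-system --ignore-not-found
done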
Do not delete cluster-scoped resources, otherwise Calico in the calico-system
namespace stops working.
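To see which resources that warning covers, one way (mirroring the namespaced listing later in this thread) is to list everything cluster-scoped that mentions calico; these must all stay:
# Cluster-scoped calico resources (CRDs, ClusterRoles, ...) -- leave these alone:
kubectl api-resources --verbs=list --namespaced=false -o name \
| xargs -n 1 kubectl get --show-kind --ignore-not-found | grep -i calico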
The standard Calico manifest-based install does not install an autoscaler, so the operator migration does not try to find and clean those up. Since the EKS install adds the autoscaler, you will need to clean it up manually.
As for the calico-typha service in the kube-system namespace, we should have the operator clean that up, because it is a standard part of a Calico install and is not needed after the migration.
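If you are unsure whether your cluster has the EKS-added autoscaler at all, a quick check (no output means there is nothing extra to clean up):
# Present only when the EKS manifest added it; absent on a standard install:
kubectl get deploy calico-typha-horizontal-autoscaler -n kube-system --ignore-not-found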
Other than that, a ConfigMap, Secret, ServiceAccount, PodDisruptionBudget, etc. still remain. Maybe we should delete them all with kubectl delete -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.6/calico.yaml ?
Here is everything calico-related still listed in kube-system:
kubectl api-resources --verbs=list --namespaced -o name \
| xargs -n 1 kubectl get --show-kind --ignore-not-found -n kube-system | grep calico
configmap/calico-typha-horizontal-autoscaler 1 3h36m
endpoints/calico-typha <none> 3h36m
5m28s Normal Scheduled pod/calico-node-27hdt Successfully assigned kube-system/calico-node-27hdt to ip-10-0-11-90.ap-northeast-1.compute.internal
5m28s Normal Pulled pod/calico-node-27hdt Container image "quay.io/calico/node:v3.13.4" already present on machine
5m28s Normal Created pod/calico-node-27hdt Created container calico-node
5m28s Normal Started pod/calico-node-27hdt Started container calico-node
4m53s Normal Killing pod/calico-node-27hdt Stopping container calico-node
5m43s Normal Scheduled pod/calico-node-2fqqp Successfully assigned kube-system/calico-node-2fqqp to ip-10-0-65-67.ap-northeast-1.compute.internal
5m42s Normal Pulled pod/calico-node-2fqqp Container image "quay.io/calico/node:v3.13.4" already present on machine
5m42s Normal Created pod/calico-node-2fqqp Created container calico-node
5m42s Normal Started pod/calico-node-2fqqp Started container calico-node
5m11s Normal Killing pod/calico-node-2fqqp Stopping container calico-node
5m34s Normal Killing pod/calico-node-7sh7v Stopping container calico-node
5m28s Normal Killing pod/calico-node-9pkg4 Stopping container calico-node
5m34s Normal Scheduled pod/calico-node-kmcx8 Successfully assigned kube-system/calico-node-kmcx8 to ip-10-0-41-159.ap-northeast-1.compute.internal
5m33s Normal Pulled pod/calico-node-kmcx8 Container image "quay.io/calico/node:v3.13.4" already present on machine
5m33s Normal Created pod/calico-node-kmcx8 Created container calico-node
5m33s Normal Started pod/calico-node-kmcx8 Started container calico-node
4m35s Normal Killing pod/calico-node-kmcx8 Stopping container calico-node
5m43s Normal Killing pod/calico-node-mpxvt Stopping container calico-node
5m43s Normal SuccessfulDelete daemonset/calico-node Deleted pod: calico-node-mpxvt
5m43s Normal SuccessfulCreate daemonset/calico-node Created pod: calico-node-2fqqp
5m34s Normal SuccessfulDelete daemonset/calico-node Deleted pod: calico-node-7sh7v
5m34s Normal SuccessfulCreate daemonset/calico-node Created pod: calico-node-kmcx8
5m28s Normal SuccessfulDelete daemonset/calico-node Deleted pod: calico-node-9pkg4
5m28s Normal SuccessfulCreate daemonset/calico-node Created pod: calico-node-27hdt
5m11s Normal SuccessfulDelete daemonset/calico-node Deleted pod: calico-node-2fqqp
4m53s Normal SuccessfulDelete daemonset/calico-node Deleted pod: calico-node-27hdt
4m35s Normal SuccessfulDelete daemonset/calico-node Deleted pod: calico-node-kmcx8
4m34s Normal Killing pod/calico-typha-7c5b5df5d7-ghp5k Stopping container calico-typha
6s Normal NoPods poddisruptionbudget/calico-typha No matching pods found
4m27s Warning NoControllers poddisruptionbudget/calico-typha found no controllers for pod "calico-typha-7c5b5df5d7-ghp5k"
4m27s Warning CalculateExpectedPodCountFailed poddisruptionbudget/calico-typha Failed to calculate the number of expected pods: found no controllers for pod "calico-typha-7c5b5df5d7-ghp5k"
pod/calico-typha-horizontal-autoscaler-869dbcdddb-mxl24 1/1 Running 0 3h36m
secret/calico-node-token-bg6sp kubernetes.io/service-account-token 3 3h36m
serviceaccount/calico-node 1 3h36m
service/calico-typha ClusterIP 172.20.17.99 <none> 5473/TCP 3h36m
deployment.apps/calico-typha-horizontal-autoscaler 1/1 1 1 3h36m
replicaset.apps/calico-typha-horizontal-autoscaler-869dbcdddb 1 1 1 3h36m
(the same calico-node and calico-typha events repeated via the events.events.k8s.io API group)
poddisruptionbudget.policy/calico-typha N/A 1 0 3h36m
Maybe we should delete them all by kubectl delete -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.6/calico.yaml ?
I found this is not possible, because it also deletes cluster-scoped resources (e.g. the clusterinformations.crd.projectcalico.org
CustomResourceDefinition), which breaks Calico in the calico-system
namespace.
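A non-destructive way to confirm this, assuming a kubectl recent enough for --dry-run=client (1.18+): preview the delete without touching anything, and note the CRDs and other cluster-scoped objects in the output.
# Preview only -- nothing is actually deleted:
kubectl delete -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.6/calico.yaml --dry-run=client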
Removing calico.yaml would definitely be a bad thing to do, because it also contains the CRDs that an operator install depends on. Removing the CRDs would also result in removing important configuration information.
The resources you mentioned (ConfigMap, Secret, ServiceAccount, PodDisruptionBudget) should be removable; anything that is namespaced and calico-related can be removed from kube-system.
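Following that rule, a sweep along these lines could clear everything namespaced at once. This is only a sketch (the calico|typha name filter is an assumption); replace the final kubectl delete with kubectl get first to verify what it would match.
# Delete every remaining namespaced kube-system object whose name mentions
# calico or typha, skipping transient event records:
kubectl api-resources --verbs=delete --namespaced -o name \
| grep -v '^events' \
| xargs -n 1 kubectl get -n kube-system -o name --ignore-not-found \
| grep -E 'calico|typha' \
| xargs -r kubectl delete -n kube-system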
I think the fix here is to make the operator migration logic aware of the autoscaler resources - they don't exist on every cluster (or even most), but they do on some. It's not a bug per se to leave them around, but it would be nice to be a bit more thorough in our cleanup.
I'd happily review a PR to add this if someone is interested in giving it a whirl!