What steps did you take and what happened:
The caaph-controller pod crashes with an "invalid memory address or nil pointer dereference" panic and enters CrashLoopBackOff.
```
$ kgpwide -n caaph-system
NAME                                        READY   STATUS             RESTARTS         AGE   IP            NODE                      NOMINATED NODE   READINESS GATES
caaph-controller-manager-7747fbcb95-vb2sp   1/2     CrashLoopBackOff   13 (4m51s ago)   61m   10.244.0.13   capi-test-control-plane   <none>           <none>
```
Relevant log lines:
```
[manager] I0206 09:57:30.284401 14 controller.go:117] "Observed a panic in reconciler: runtime error: invalid memory address or nil pointer dereference" controller="helmreleaseproxy" controllerGroup="addons.cluster.x-k8s.io" controllerKind="HelmReleaseProxy" HelmReleaseProxy="default/metallb-config-gkhatri-wkcl1-jmqbf" namespace="default" name="metallb-config-gkhatri-wkcl1-jmqbf" reconcileID=d0609e86-5599-4236-be98-9056247b6425
[manager] panic: runtime error: invalid memory address or nil pointer dereference [recovered]
[manager] panic: runtime error: invalid memory address or nil pointer dereference
[manager] [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1e6b622]
```
Full log:
```
[manager] I0206 09:57:12.888703 14 request.go:601] Waited for 1.035578641s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/cluster.x-k8s.io/v1alpha4?timeout=32s
[manager] I0206 09:57:13.042136 14 listener.go:44] "controller-runtime/metrics: Metrics server is starting to listen" addr="127.0.0.1:8080"
[manager] I0206 09:57:13.043339 14 webhook.go:124] "controller-runtime/builder: Registering a mutating webhook" GVK="addons.cluster.x-k8s.io/v1alpha1, Kind=HelmChartProxy" path="/mutate-addons-cluster-x-k8s-io-v1alpha1-helmchartproxy"
[manager] I0206 09:57:13.043496 14 server.go:148] "controller-runtime/webhook: Registering webhook" path="/mutate-addons-cluster-x-k8s-io-v1alpha1-helmchartproxy"
[manager] I0206 09:57:13.043606 14 webhook.go:153] "controller-runtime/builder: Registering a validating webhook" GVK="addons.cluster.x-k8s.io/v1alpha1, Kind=HelmChartProxy" path="/validate-addons-cluster-x-k8s-io-v1alpha1-helmchartproxy"
[manager] I0206 09:57:13.043681 14 server.go:148] "controller-runtime/webhook: Registering webhook" path="/validate-addons-cluster-x-k8s-io-v1alpha1-helmchartproxy"
[manager] I0206 09:57:13.043851 14 webhook.go:124] "controller-runtime/builder: Registering a mutating webhook" GVK="addons.cluster.x-k8s.io/v1alpha1, Kind=HelmReleaseProxy" path="/mutate-addons-cluster-x-k8s-io-v1alpha1-helmreleaseproxy"
[manager] I0206 09:57:13.043931 14 server.go:148] "controller-runtime/webhook: Registering webhook" path="/mutate-addons-cluster-x-k8s-io-v1alpha1-helmreleaseproxy"
[manager] I0206 09:57:13.044037 14 webhook.go:153] "controller-runtime/builder: Registering a validating webhook" GVK="addons.cluster.x-k8s.io/v1alpha1, Kind=HelmReleaseProxy" path="/validate-addons-cluster-x-k8s-io-v1alpha1-helmreleaseproxy"
[manager] I0206 09:57:13.044107 14 server.go:148] "controller-runtime/webhook: Registering webhook" path="/validate-addons-cluster-x-k8s-io-v1alpha1-helmreleaseproxy"
[manager] I0206 09:57:13.044202 14 main.go:133] "setup: starting manager"
[manager] I0206 09:57:13.044270 14 server.go:216] "controller-runtime/webhook/webhooks: Starting webhook server"
[manager] I0206 09:57:13.044320 14 internal.go:366] "Starting server" path="/metrics" kind="metrics" addr="127.0.0.1:8080"
[manager] I0206 09:57:13.044327 14 internal.go:366] "Starting server" kind="health probe" addr="[::]:8081"
[manager] I0206 09:57:13.044543 14 leaderelection.go:248] attempting to acquire leader lease caaph-system/5a2dee3e.cluster.x-k8s.io...
[manager] I0206 09:57:13.044692 14 certwatcher.go:131] "controller-runtime/certwatcher: Updated current TLS certificate"
[manager] I0206 09:57:13.044866 14 server.go:270] "controller-runtime/webhook: Serving webhook server" host="" port=9443
[manager] I0206 09:57:13.044897 14 certwatcher.go:85] "controller-runtime/certwatcher: Starting certificate watcher"
[manager] I0206 09:57:30.039582 14 leaderelection.go:258] successfully acquired lease caaph-system/5a2dee3e.cluster.x-k8s.io
[manager] I0206 09:57:30.039822 14 controller.go:185] "Starting EventSource" controller="helmchartproxy" controllerGroup="addons.cluster.x-k8s.io" controllerKind="HelmChartProxy" source="kind source: *v1alpha1.HelmChartProxy"
[manager] I0206 09:57:30.039887 14 controller.go:185] "Starting EventSource" controller="helmreleaseproxy" controllerGroup="addons.cluster.x-k8s.io" controllerKind="HelmReleaseProxy" source="kind source: *v1alpha1.HelmReleaseProxy"
[manager] I0206 09:57:30.039897 14 controller.go:185] "Starting EventSource" controller="helmchartproxy" controllerGroup="addons.cluster.x-k8s.io" controllerKind="HelmChartProxy" source="kind source: *v1beta1.Cluster"
[manager] I0206 09:57:30.039926 14 controller.go:185] "Starting EventSource" controller="helmchartproxy" controllerGroup="addons.cluster.x-k8s.io" controllerKind="HelmChartProxy" source="kind source: *v1alpha1.HelmReleaseProxy"
[manager] I0206 09:57:30.039949 14 controller.go:193] "Starting Controller" controller="helmchartproxy" controllerGroup="addons.cluster.x-k8s.io" controllerKind="HelmChartProxy"
[manager] I0206 09:57:30.039928 14 controller.go:193] "Starting Controller" controller="helmreleaseproxy" controllerGroup="addons.cluster.x-k8s.io" controllerKind="HelmReleaseProxy"
[manager] I0206 09:57:30.141318 14 controller.go:227] "Starting workers" controller="helmreleaseproxy" controllerGroup="addons.cluster.x-k8s.io" controllerKind="HelmReleaseProxy" worker count=10
[manager] I0206 09:57:30.141369 14 controller.go:227] "Starting workers" controller="helmchartproxy" controllerGroup="addons.cluster.x-k8s.io" controllerKind="HelmChartProxy" worker count=10
[manager] I0206 09:57:30.284401 14 controller.go:117] "Observed a panic in reconciler: runtime error: invalid memory address or nil pointer dereference" controller="helmreleaseproxy" controllerGroup="addons.cluster.x-k8s.io" controllerKind="HelmReleaseProxy" HelmReleaseProxy="default/metallb-config-gkhatri-wkcl1-jmqbf" namespace="default" name="metallb-config-gkhatri-wkcl1-jmqbf" reconcileID=d0609e86-5599-4236-be98-9056247b6425
[manager] panic: runtime error: invalid memory address or nil pointer dereference [recovered]
[manager] panic: runtime error: invalid memory address or nil pointer dereference
[manager] [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1e6b622]
[manager]
[manager] goroutine 490 [running]:
[manager] sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
[manager]         /home/administrator/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:118 +0x1f4
[manager] panic({0x210c220, 0x3ba7850})
[manager]         /home/administrator/sdk/go1.19.4/src/runtime/panic.go:884 +0x212
[manager] cluster-api-addon-provider-helm/internal.UpgradeHelmReleaseIfChanged({0x2839e08, 0xc000f2e900}, {0xc000865800, 0x15da}, {{{0xc000c17c90, 0x7}, {0xc000c17c97, 0x7}, {0xc000c17ca0, 0xd}, ...}, ...}, ...)
[manager]         /home/administrator/dev/cluster-api-addon-provider-helm/internal/helm_operations.go:240 +0x802
[manager] cluster-api-addon-provider-helm/internal.InstallOrUpgradeHelmRelease({0x2839e08, 0xc000f2e900}, {0xc000865800, 0x15da}, {{{0xc000c17c90, 0x7}, {0xc000c17c97, 0x7}, {0xc000c17ca0, 0xd}, ...}, ...})
[manager]         /home/administrator/dev/cluster-api-addon-provider-helm/internal/helm_operations.go:121 +0x13b
[manager] cluster-api-addon-provider-helm/controllers/helmreleaseproxy.(*HelmReleaseProxyReconciler).reconcileNormal(0x28408e0?, {0x2839e08, 0xc000f2e900}, 0xc00110e240, {0xc000865800, 0x15da})
[manager]         /home/administrator/dev/cluster-api-addon-provider-helm/controllers/helmreleaseproxy/helmreleaseproxy_controller.go:212 +0x398
[manager] cluster-api-addon-provider-helm/controllers/helmreleaseproxy.(*HelmReleaseProxyReconciler).Reconcile(0xc0008a3f80, {0x2839e08, 0xc000f2e900}, {{{0xc000c17c50, 0x7}, {0xc0008885d0, 0x22}}})
[manager]         /home/administrator/dev/cluster-api-addon-provider-helm/controllers/helmreleaseproxy/helmreleaseproxy_controller.go:193 +0x109a
[manager] sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x2839d60?, {0x2839e08?, 0xc000f2e900?}, {{{0xc000c17c50?, 0x234fde0?}, {0xc0008885d0?, 0x404554?}}})
[manager]         /home/administrator/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:121 +0xc8
[manager] sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0001e9e00, {0x2839d60, 0xc000d19e80}, {0x21d5100?, 0xc000718180?})
[manager]         /home/administrator/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:320 +0x33c
[manager] sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0001e9e00, {0x2839d60, 0xc000d19e80})
[manager]         /home/administrator/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:273 +0x1d9
[manager] sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
[manager]         /home/administrator/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:234 +0x85
[manager] created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
[manager]         /home/administrator/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:230 +0x333
[manager] Exiting with code 2
[event: pod caaph-system/caaph-controller-manager-7747fbcb95-vb2sp] Back-off restarting failed container
Detected container restart. Pod: caaph-controller-manager-7747fbcb95-vb2sp. Container: manager.
```
What did you expect to happen:
Expected the caaph-controller pod to be in the Running state.
Anything else you would like to add:
The issue happens immediately after deploying caaph-controller.
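From the stack trace, the panic originates in `internal/helm_operations.go:240` inside `UpgradeHelmReleaseIfChanged`, which suggests some pointer is dereferenced there without a nil check, e.g. a release lookup that can return nil without an error. The exact code path is not confirmed; the following is only a minimal Go sketch of that failure mode with hypothetical types and function names, not the provider's actual code:

```go
package main

import "fmt"

// Release stands in for a Helm release object (the real type lives in
// helm.sh/helm/v3/pkg/release); the fields here are hypothetical.
type Release struct {
	Name    string
	Version int
}

// getExisting models a lookup that can return (nil, nil) when no release
// is found -- a common source of nil dereferences in reconcilers.
func getExisting(found bool) (*Release, error) {
	if !found {
		return nil, nil // no release and no error
	}
	return &Release{Name: "metallb", Version: 1}, nil
}

// upgradeIfChanged guards against the nil case before dereferencing.
func upgradeIfChanged(found bool) (string, error) {
	existing, err := getExisting(found)
	if err != nil {
		return "", err
	}
	if existing == nil {
		// Without this check, reading existing.Version below would panic
		// with "invalid memory address or nil pointer dereference".
		return "install", nil
	}
	return fmt.Sprintf("upgrade from v%d", existing.Version), nil
}

func main() {
	action, _ := upgradeIfChanged(false)
	fmt.Println(action)
	action, _ = upgradeIfChanged(true)
	fmt.Println(action)
}
```

If this guess is right, a guard like the one above at the dereference site (or treating a nil release as "not installed yet") would avoid the crash.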
Environment:
- Cluster API version: 1.3.1, 1.3.3
- Cluster API Add-on Provider for Helm version:
- Kubernetes version (use `kubectl version`): 1.24.1
- OS (e.g. from `/etc/os-release`): Ubuntu 20.04
/kind bug