digitalocean / clusterlint
A best practices checker for Kubernetes clusters. 🤖
License: Apache License 2.0
For example, kube-proxy is a static pod and should not be flagged by the "bare-pods" check.
Is there any way to exclude it?
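One way a check could exclude kubelet-managed pods (a sketch, not clusterlint's actual implementation): static and mirror pods carry kubelet-set annotations such as kubernetes.io/config.source and kubernetes.io/config.mirror, which a bare-pods check could consult before warning:

```go
package main

import "fmt"

// isStaticPod reports whether a pod's annotations suggest it is a
// kubelet-managed static (mirror) pod rather than a bare pod created
// directly through the API server. The annotation keys below are the
// ones the kubelet sets on mirror pods; clusterlint's real check may
// use different logic.
func isStaticPod(annotations map[string]string) bool {
	if _, ok := annotations["kubernetes.io/config.mirror"]; ok {
		return true
	}
	src, ok := annotations["kubernetes.io/config.source"]
	return ok && src != "api"
}

func main() {
	kubeProxy := map[string]string{"kubernetes.io/config.source": "file"}
	barePod := map[string]string{}
	fmt.Println(isStaticPod(kubeProxy), isStaticPod(barePod))
}
```

A bare-pods check could then skip any pod for which this returns true.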
Right now, clusterlint analyzes workloads after they have been deployed on a managed or self-hosted platform, which has the advantage of reflecting the actual state of the cluster.
Adding a feature to lint manifests before attempting to deploy the workloads on a cluster would help prevent bad configs. This would be particularly useful where CI/CD is in place to deploy workloads automatically once all the configs pass validation; it could act as a sanity check before the config is merged into an SCM repository.
I have a cluster with two helm charts on it (almost default values):
The cluster linter from the dashboard is pointing out some problems for future upgrades (I know there are duplicates, but this is the exact output that I'm getting):
validating webhook configuration: nginx-ingress-ingress-nginx-admission
validating webhook configuration: nginx-ingress-ingress-nginx-admission
mutating webhook configuration: cert-manager-webhook
validating webhook configuration: nginx-ingress-ingress-nginx-admission
mutating webhook configuration: cert-manager-webhook
All links point to missing anchors inside the target page, and I can't find much online about the given messages (I even checked the TimeoutSeconds values of my resources, but they seem to be set to 1).
Do you have any suggestion?
Thank you for your time.
We just found that one of our users used the "nightly" tag in his manifest, so I think the check should flag any non-fixed image versions, not just latest.
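A hedged sketch of what such a check might look like, using a hand-rolled tag parser. The list of mutable tags here is an assumption; the real implementation would parse the reference with a proper image-reference library:

```go
package main

import (
	"fmt"
	"strings"
)

// mutableTags lists tags that do not pin an image to a fixed version.
// The exact list is an assumption for illustration.
var mutableTags = map[string]bool{"latest": true, "nightly": true, "master": true, "main": true}

// hasMutableTag reports whether an image reference uses a non-fixed
// tag: either no tag at all (which implies latest) or a known mutable tag.
func hasMutableTag(image string) bool {
	// Digest references are immutable by construction.
	if strings.Contains(image, "@sha256:") {
		return false
	}
	i := strings.LastIndex(image, ":")
	if i < 0 || strings.Contains(image[i:], "/") {
		return true // no tag present: defaults to latest
	}
	return mutableTags[image[i+1:]]
}

func main() {
	for _, img := range []string{"nginx", "nginx:latest", "foo/bar:nightly", "nginx:1.21.3"} {
		fmt.Println(img, hasMutableTag(img))
	}
}
```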
Add a check of the deployment strategy for pods that reference DOBS volumes. This can block worker node upgrades when draining a node.
We currently have a variety of webhook checks in the doks group, since various webhook configurations can be problematic for DOKS upgrades. However, there are also some generic best practices around admission control webhooks, documented in the upstream docs. For example, it's a generic best practice to set timeouts to small values (definitely less than 30 seconds, since that's the default apiserver request timeout).
We should build some generic webhook best-practice checks that can be included in the basic group as well as the doks group.
Add list groups command to offer a description of what each group means, what checks belong to the each group.
This can be used from DOKS when we provide a list of groups for users to choose while running clusterlint.
Using the latest version, run clusterlint --version.
It should return the actual version number.
Instead, it returns clusterlint version 0.0.0.
Anyone using cert manager currently will get this error when upgrading their cluster:
There are issues that will cause your pods to stop working. We recommend you fix them before upgrading this cluster.
Validating webhook is configured in such a way that it may be problematic during upgrades.
Mutating webhook is configured in such a way that it may be problematic during upgrades.
Should these be marked as errors, since API group rules are specified?
https://github.com/jetstack/cert-manager/blob/87989dbfe35bed99a9e031c71ad3a7d49030a8bf/deploy/charts/cert-manager/templates/webhook-mutating-webhook.yaml#L26-L28
https://github.com/jetstack/cert-manager/blob/87989dbfe35bed99a9e031c71ad3a7d49030a8bf/deploy/charts/cert-manager/templates/webhook-validating-webhook.yaml#L36-L38
Pods referencing DOBS volumes should always use a StatefulSet. Add this check to the doks group.
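A sketch of how the ownership side of this check could work, using a pared-down owner-reference type; the real check would inspect the pod's ObjectMeta.OwnerReferences from the Kubernetes API:

```go
package main

import "fmt"

// ownerRef is a pared-down stand-in for metav1.OwnerReference.
type ownerRef struct {
	Kind string
	Name string
}

// dobsPodOwnedByStatefulSet reports whether a pod that references a
// DOBS volume has a StatefulSet among its owners.
func dobsPodOwnedByStatefulSet(owners []ownerRef) bool {
	for _, o := range owners {
		if o.Kind == "StatefulSet" {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(dobsPodOwnedByStatefulSet([]ownerRef{{Kind: "StatefulSet", Name: "postgres"}}))
	fmt.Println(dobsPodOwnedByStatefulSet([]ownerRef{{Kind: "ReplicaSet", Name: "web-abc"}}))
}
```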
I'm seeing the following fatal error when building clusterlint on Go 1.17:
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0xb01dfacedebac1e pc=0x7fff203e0c9e]
runtime stack:
runtime: unexpected return pc for runtime.sigpanic called from 0x7fff203e0c9e
stack: frame={sp:0x7ffeefbfeb08, fp:0x7ffeefbfeb58} stack=[0x7ffeefb7fba8,0x7ffeefbfec10)
0x00007ffeefbfea08: 0x01007ffeefbfea28 0x0000000000000004
0x00007ffeefbfea18: 0x000000000000001f 0x00007fff203e0c9e
0x00007ffeefbfea28: 0x0b01dfacedebac1e 0x0000000000000001
0x00007ffeefbfea38: 0x0000000004035a91 <runtime.throw+0x0000000000000071> 0x00007ffeefbfead8
0x00007ffeefbfea48: 0x00000000052148a1 0x00007ffeefbfea90
0x00007ffeefbfea58: 0x0000000004035d48 <runtime.fatalthrow.func1+0x0000000000000048> 0x0000000005f15220
0x00007ffeefbfea68: 0x0000000000000001 0x0000000000000001
0x00007ffeefbfea78: 0x00007ffeefbfead8 0x0000000004035a91 <runtime.throw+0x0000000000000071>
0x00007ffeefbfea88: 0x0000000005f15220 0x00007ffeefbfeac8
0x00007ffeefbfea98: 0x0000000004035cd0 <runtime.fatalthrow+0x0000000000000050> 0x00007ffeefbfeaa8
0x00007ffeefbfeaa8: 0x0000000004035d00 <runtime.fatalthrow.func1+0x0000000000000000> 0x0000000005f15220
0x00007ffeefbfeab8: 0x0000000004035a91 <runtime.throw+0x0000000000000071> 0x00007ffeefbfead8
0x00007ffeefbfeac8: 0x00007ffeefbfeaf8 0x0000000004035a91 <runtime.throw+0x0000000000000071>
0x00007ffeefbfead8: 0x00007ffeefbfeae0 0x0000000004035ac0 <runtime.throw.func1+0x0000000000000000>
0x00007ffeefbfeae8: 0x00000000052241aa 0x000000000000002a
0x00007ffeefbfeaf8: 0x00007ffeefbfeb48 0x000000000404b4f6 <runtime.sigpanic+0x0000000000000396>
0x00007ffeefbfeb08: <0x00000000052241aa 0x0000000005e6a248
0x00007ffeefbfeb18: 0x00007ffeefbfeb88 0x00000000040289a6 <runtime.(*mheap).allocSpan+0x0000000000000546>
0x00007ffeefbfeb28: 0x000000c000494000 0x0000000000002000
0x00007ffeefbfeb38: 0x000000c000000008 0x0000000000000000
0x00007ffeefbfeb48: 0x00007ffeefbfeb90 !0x00007fff203e0c9e
0x00007ffeefbfeb58: >0x00007ffeefbfeb90 0x0000000005e85000
0x00007ffeefbfeb68: 0x000000000000047c 0x0000000004521545 <golang.org/x/sys/unix.libc_ioctl_trampoline+0x0000000000000005>
0x00007ffeefbfeb78: 0x000000000406835f <runtime.syscall+0x000000000000001f> 0x000000c0001cdbf0
0x00007ffeefbfeb88: 0x00007ffeefbfebd0 0x000000c0001cdbc0
0x00007ffeefbfeb98: 0x00000000040661d0 <runtime.asmcgocall+0x0000000000000070> 0x0000000000000001
0x00007ffeefbfeba8: 0x000000c000410a00 0x0a00000000000001
0x00007ffeefbfebb8: 0x0000000000000000 0x0000000005f49b78
0x00007ffeefbfebc8: 0x0000000000000468 0x000000c0000001a0
0x00007ffeefbfebd8: 0x00000000040642e9 <runtime.systemstack+0x0000000000000049> 0x0000000000000004
0x00007ffeefbfebe8: 0x00000000053adb30 0x0000000005f15220
0x00007ffeefbfebf8: 0x00007ffeefbfec40 0x00000000040641e5 <runtime.mstart+0x0000000000000005>
0x00007ffeefbfec08: 0x000000000406419d <runtime.rt0_go+0x000000000000013d>
runtime.throw({0x52241aa, 0x5e6a248})
/usr/local/Cellar/go/1.17/libexec/src/runtime/panic.go:1198 +0x71
runtime: unexpected return pc for runtime.sigpanic called from 0x7fff203e0c9e
stack: frame={sp:0x7ffeefbfeb08, fp:0x7ffeefbfeb58} stack=[0x7ffeefb7fba8,0x7ffeefbfec10)
0x00007ffeefbfea08: 0x01007ffeefbfea28 0x0000000000000004
0x00007ffeefbfea18: 0x000000000000001f 0x00007fff203e0c9e
0x00007ffeefbfea28: 0x0b01dfacedebac1e 0x0000000000000001
0x00007ffeefbfea38: 0x0000000004035a91 <runtime.throw+0x0000000000000071> 0x00007ffeefbfead8
0x00007ffeefbfea48: 0x00000000052148a1 0x00007ffeefbfea90
0x00007ffeefbfea58: 0x0000000004035d48 <runtime.fatalthrow.func1+0x0000000000000048> 0x0000000005f15220
0x00007ffeefbfea68: 0x0000000000000001 0x0000000000000001
0x00007ffeefbfea78: 0x00007ffeefbfead8 0x0000000004035a91 <runtime.throw+0x0000000000000071>
0x00007ffeefbfea88: 0x0000000005f15220 0x00007ffeefbfeac8
0x00007ffeefbfea98: 0x0000000004035cd0 <runtime.fatalthrow+0x0000000000000050> 0x00007ffeefbfeaa8
0x00007ffeefbfeaa8: 0x0000000004035d00 <runtime.fatalthrow.func1+0x0000000000000000> 0x0000000005f15220
0x00007ffeefbfeab8: 0x0000000004035a91 <runtime.throw+0x0000000000000071> 0x00007ffeefbfead8
0x00007ffeefbfeac8: 0x00007ffeefbfeaf8 0x0000000004035a91 <runtime.throw+0x0000000000000071>
0x00007ffeefbfead8: 0x00007ffeefbfeae0 0x0000000004035ac0 <runtime.throw.func1+0x0000000000000000>
0x00007ffeefbfeae8: 0x00000000052241aa 0x000000000000002a
0x00007ffeefbfeaf8: 0x00007ffeefbfeb48 0x000000000404b4f6 <runtime.sigpanic+0x0000000000000396>
0x00007ffeefbfeb08: <0x00000000052241aa 0x0000000005e6a248
0x00007ffeefbfeb18: 0x00007ffeefbfeb88 0x00000000040289a6 <runtime.(*mheap).allocSpan+0x0000000000000546>
0x00007ffeefbfeb28: 0x000000c000494000 0x0000000000002000
0x00007ffeefbfeb38: 0x000000c000000008 0x0000000000000000
0x00007ffeefbfeb48: 0x00007ffeefbfeb90 !0x00007fff203e0c9e
0x00007ffeefbfeb58: >0x00007ffeefbfeb90 0x0000000005e85000
0x00007ffeefbfeb68: 0x000000000000047c 0x0000000004521545 <golang.org/x/sys/unix.libc_ioctl_trampoline+0x0000000000000005>
0x00007ffeefbfeb78: 0x000000000406835f <runtime.syscall+0x000000000000001f> 0x000000c0001cdbf0
0x00007ffeefbfeb88: 0x00007ffeefbfebd0 0x000000c0001cdbc0
0x00007ffeefbfeb98: 0x00000000040661d0 <runtime.asmcgocall+0x0000000000000070> 0x0000000000000001
0x00007ffeefbfeba8: 0x000000c000410a00 0x0a00000000000001
0x00007ffeefbfebb8: 0x0000000000000000 0x0000000005f49b78
0x00007ffeefbfebc8: 0x0000000000000468 0x000000c0000001a0
0x00007ffeefbfebd8: 0x00000000040642e9 <runtime.systemstack+0x0000000000000049> 0x0000000000000004
0x00007ffeefbfebe8: 0x00000000053adb30 0x0000000005f15220
0x00007ffeefbfebf8: 0x00007ffeefbfec40 0x00000000040641e5 <runtime.mstart+0x0000000000000005>
0x00007ffeefbfec08: 0x000000000406419d <runtime.rt0_go+0x000000000000013d>
runtime.sigpanic()
/usr/local/Cellar/go/1.17/libexec/src/runtime/signal_unix.go:719 +0x396
goroutine 1 [syscall, locked to thread]:
syscall.syscall(0x4521540, 0x1, 0x40487413, 0xc0001cdc80)
/usr/local/Cellar/go/1.17/libexec/src/runtime/sys_darwin.go:22 +0x3b fp=0xc0001cdbf0 sp=0xc0001cdbd0 pc=0x4062d9b
syscall.syscall(0x4072686, 0x10, 0xc0001cdca8, 0x40725b8)
<autogenerated>:1 +0x26 fp=0xc0001cdc38 sp=0xc0001cdbf0 pc=0x4068b26
golang.org/x/sys/unix.ioctl(0x51fa97a, 0x4, 0x100c00008aca0)
/Users/treimann/go/pkg/mod/golang.org/x/[email protected]/unix/zsyscall_darwin_amd64.go:690 +0x39 fp=0xc0001cdc68 sp=0xc0001cdc38 pc=0x4521099
golang.org/x/sys/unix.IoctlGetTermios(...)
/Users/treimann/go/pkg/mod/golang.org/x/[email protected]/unix/ioctl.go:73
github.com/mattn/go-isatty.IsTerminal(0x51fa97a)
/Users/treimann/go/pkg/mod/github.com/mattn/[email protected]/isatty_bsd.go:10 +0x50 fp=0xc0001cdcd8 sp=0xc0001cdc68 pc=0x4ebe490
github.com/fatih/color.init()
/Users/treimann/go/pkg/mod/github.com/fatih/[email protected]/color.go:21 +0x7a fp=0xc0001cdd10 sp=0xc0001cdcd8 pc=0x4ebe9fa
runtime.doInit(0x5e8d8e0)
/usr/local/Cellar/go/1.17/libexec/src/runtime/proc.go:6498 +0x123 fp=0xc0001cde48 sp=0xc0001cdd10 pc=0x4045403
runtime.doInit(0x5e90620)
/usr/local/Cellar/go/1.17/libexec/src/runtime/proc.go:6475 +0x71 fp=0xc0001cdf80 sp=0xc0001cde48 pc=0x4045351
runtime.main()
/usr/local/Cellar/go/1.17/libexec/src/runtime/proc.go:238 +0x1e6 fp=0xc0001cdfe0 sp=0xc0001cdf80 pc=0x40383a6
runtime.goexit()
/usr/local/Cellar/go/1.17/libexec/src/runtime/asm_amd64.s:1581 +0x1 fp=0xc0001cdfe8 sp=0xc0001cdfe0 pc=0x40664c1
goroutine 35 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x0)
/Users/treimann/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1164 +0x6a
created by k8s.io/klog/v2.init.0
/Users/treimann/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:418 +0xfb
On Go 1.16, it does not occur.
Currently the lint check in CI downloads and builds golint on every run. This is slow and error-prone in case of GitHub or other outages, as seen in this recent failure. We should build an image with golint installed and use it for this CI task.
The resource-requirements check is in the basic group right now. However, many DOKS users face resource contention issues because they may not have followed the best practice of setting resource limits on their pods. We currently run all the checks in the doks group before an upgrade, so adding this check to the doks group as well would let us show this warning to users.
Hi, each time DigitalOcean tries to upgrade my cluster automatically, they send me an alert about an upgrade check failure:
Mutating webhook with a TimeoutSeconds value greater than 29 seconds will block upgrades
The issue is that I can find no webhook with a TimeoutSeconds value greater than 29 seconds.
My cluster is running version 1.20.7-do.0 and DO is trying to upgrade to 1.20.8-do.0.
Let me know if you need more details.
Upgraded my DOKS cluster from 1.19 to 1.20 yesterday and found out the hard way that images from docker.pkg.github.com can't be pulled by containerd.
A cronjob configured with a concurrencyPolicy of Allow has the potential to overload a cluster if a cronjob-managed pod cannot manage to finish in time. The impact we occasionally see is that job pods created from subsequent cron intervals pile up over time, sometimes leading to hundreds or thousands of pods that stay pending.
We should implement a check that triggers a warning for Allow cronjobs and suggests using one of the safer alternative modes, Forbid or Replace.
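A minimal sketch of such a check, assuming the policy is read from the cronjob spec as a plain string; the real check would use the batch/v1 API types, where an unset concurrencyPolicy defaults to Allow:

```go
package main

import "fmt"

// checkConcurrencyPolicy returns a warning for cronjobs using the
// Allow policy, which is also the Kubernetes default when the field
// is unset. Sketch only; the real check would read batchv1.CronJobSpec.
func checkConcurrencyPolicy(name, policy string) (string, bool) {
	if policy == "" || policy == "Allow" {
		return fmt.Sprintf("cronjob %q: concurrencyPolicy Allow can pile up pods; consider Forbid or Replace", name), true
	}
	return "", false
}

func main() {
	for _, p := range []string{"", "Allow", "Forbid", "Replace"} {
		msg, warn := checkConcurrencyPolicy("backup", p)
		fmt.Println(p, warn, msg)
	}
}
```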
Dear DO,
I upgraded my cluster on DO, and the ingress/load balancer failed after the upgrade.
The incompatibility was not detected by clusterlint; I expect other customers will also run into the same issue.
See https://stackoverflow.com/questions/70908774/nginx-ingress-controller-fails-to-start-after-aks-upgrade-to-v1-22/70974010 for background; my ingress controller was also 0.34.1.
I also updated cert-manager as it had some problems too, but I do not have many details on it; the error was error registering secret controller: no matches for kind "MutatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"
This check just reports that the webhook is configured incorrectly and can cause problems with DOKS cluster upgrades. Adding more detail on how the webhook is misconfigured, or what config changes would fix it, would help users more.
Context: https://kubernetes.slack.com/archives/CCPETNUCA/p1598549516051000
This check could also use more test cases that cause it to fail; these would double as documentation for anyone trying to understand how their webhook config can make the check fail.
I'm trying to use clusterlint on a fairly new cluster, and I don't understand why it is complaining about these things in my setup. (Also, the issues found differ between runs with -n namespace and | grep namespace/, which is odd...)
% clusterlint run | grep staging/
[warning] staging/pod/postgres-0: Pod referencing DOBS volumes must be owned by StatefulSet
[warning] staging/pod/postgres-0: Use fully qualified image for container 'postgres'
But it is owned by a StatefulSet, and it does use the fully qualified image name:
% kubectl describe pod -n staging postgres-0 | egrep 'Controlled|Image:'
Controlled By: StatefulSet/postgres
Image: docker.io/postgres:13.2
However,
% clusterlint run -n staging
[warning] staging/pod/postgres-0: Use fully qualified image for container 'postgres'
(no complaint about the DOBS volume).
I'm confused.
Though it is deprecated upstream, Kubernetes still supports supplying the storageClassName of a PVC using an annotation like volume.beta.kubernetes.io/storage-class. clusterlint should be able to detect storageClassNames supplied in this way and provide correct warnings for the dobs-volume checks.
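A sketch of how the effective storage class could be resolved, using plain strings and maps in place of the corev1 PVC type. The precedence assumed here (spec.storageClassName wins when both are set) is an assumption for illustration:

```go
package main

import "fmt"

// betaStorageClassAnnotation is the deprecated-but-still-honored way
// of selecting a storage class on a PVC.
const betaStorageClassAnnotation = "volume.beta.kubernetes.io/storage-class"

// storageClassName resolves the effective storage class of a PVC,
// preferring spec.storageClassName and falling back to the beta
// annotation. (Precedence is an assumption; the real check would use
// corev1.PersistentVolumeClaim and match Kubernetes' actual behavior.)
func storageClassName(specClass string, annotations map[string]string) string {
	if specClass != "" {
		return specClass
	}
	return annotations[betaStorageClassAnnotation]
}

func main() {
	ann := map[string]string{betaStorageClassAnnotation: "do-block-storage"}
	fmt.Println(storageClassName("", ann))         // falls back to the annotation
	fmt.Println(storageClassName("standard", ann)) // spec field wins
}
```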
Add a global flag to lint only one namespace (or a namespace-exclude option).
This would be useful if you want to suppress warnings from, for example, the kube-system namespace.
We should provide binaries for releases so users don't need to build clusterlint themselves. For bonus points, it would be nice to build binaries automatically via actions or CircleCI when we tag a release.
Hi, it's been a while since you added a Dockerfile to this repo, but there still isn't an official channel for that Docker image. I thought it was planned, wasn't it? I'd like to use the image from an official channel rather than building it myself. Thanks.
Hi, I like the way clusterlint helps me fix issues in my cluster, and I'd like to make it a continuous process. Since I don't manage my cluster locally but with Terraform, I was wondering whether it would be possible to run clusterlint in-cluster and retrieve scan results via some metrics endpoint (JSON, Prometheus, ...) or a web UI, for example. This could be useful for collaboration as well. I'd be happy to work on that; is it relevant? Is it an anti-pattern to run the tool in-cluster?
I got the following error:
$ clusterlint run
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x11862b5]
goroutine 193 [running]:
github.com/digitalocean/clusterlint/vendor/github.com/docker/distribution/reference.WithTag(0x0, 0x0, 0x13f96c7, 0x6, 0x0, 0x1340600, 0x1, 0xc0010ae000)
/home/joanne/go/src/github.com/digitalocean/clusterlint/vendor/github.com/docker/distribution/reference/reference.go:280 +0x3f5
github.com/digitalocean/clusterlint/vendor/github.com/docker/distribution/reference.TagNameOnly(0x0, 0x0, 0x0, 0x0)
/home/joanne/go/src/github.com/digitalocean/clusterlint/vendor/github.com/docker/distribution/reference/normalize.go:130 +0xa5
github.com/digitalocean/clusterlint/checks/basic.(*latestTagCheck).checkTags(0x2285958, 0xc000bf5e40, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, 0xc000ca83e0, 0x18, ...)
/home/joanne/go/src/github.com/digitalocean/clusterlint/checks/basic/latest_tag.go:70 +0x164
github.com/digitalocean/clusterlint/checks/basic.(*latestTagCheck).Run(0x2285958, 0xc000517280, 0x2268080, 0x1010000015c58c0, 0x226d180, 0xc0000e86e8, 0x44a86b)
/home/joanne/go/src/github.com/digitalocean/clusterlint/checks/basic/latest_tag.go:57 +0x14f
github.com/digitalocean/clusterlint/checks.Run.func1(0x8, 0x148ace0)
/home/joanne/go/src/github.com/digitalocean/clusterlint/checks/run_checks.go:51 +0xc3
github.com/digitalocean/clusterlint/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1(0xc0004c9c20, 0xc000519180)
/home/joanne/go/src/github.com/digitalocean/clusterlint/vendor/golang.org/x/sync/errgroup/errgroup.go:57 +0x57
created by github.com/digitalocean/clusterlint/vendor/golang.org/x/sync/errgroup.(*Group).Go
/home/joanne/go/src/github.com/digitalocean/clusterlint/vendor/golang.org/x/sync/errgroup/errgroup.go:54 +0x66
It looks like clusterlint errored on a latest tag, because clusterlint run ignore-checks latest-tag ran successfully.
The problem looks like it occurs because of a pod on my cluster that refers to a latest tag:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2019-10-24T08:47:41Z"
  generateName: jaeger-698f8b8cf4-
  labels:
    app: jaeger
    app.kubernetes.io/component: all-in-one
    app.kubernetes.io/name: jaeger
    pod-template-hash: 698f8b8cf4
  name: jaeger-698f8b8cf4-nmjcg
  ...
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-10-24T08:47:41Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2019-10-24T08:47:59Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2019-10-24T08:47:59Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2019-10-24T08:47:41Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://1e012a3f85a056a0674877ce93fdb4ad54bc6a6151e58611f7058739f270cab0
    image: jaegertracing/all-in-one:latest
    imageID: docker-pullable://jaegertracing/all-in-one@sha256:4cb2598b80d4f37b1d66fbe35b2f7488fa04f4d269e301919e8c45526f2d73c3
    lastState: {}
    name: jaeger
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: "2019-10-24T08:47:45Z"
  hostIP: 192.168.176.31
  phase: Running
  podIP: 192.168.181.143
  qosClass: Burstable
  startTime: "2019-10-24T08:47:41Z"
See status.containerStatuses[*].image for where the problem is.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.8-eks-b7174d", GitCommit:"b7174db5ee0e30c94a0b9899c20ac980c0850fc8", GitTreeState:"clean", BuildDate:"2019-10-18T17:56:01Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Hi,
I have a few warnings such as Pod referencing dobs volumes must be owned by statefulset, which link to: https://www.digitalocean.com/docs/kubernetes/resources/clusterlint-errors/#dobs-pod-owner
However, the link doesn't seem correct, and there doesn't seem to be documentation about this issue.
Looking at the function that checks for env vars in a pod that correspond to a ConfigMap, it considers ConfigMapRef but not configMapKeyRef. The latter, per the docs, is also a valid usage of a ConfigMap.
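A minimal sketch of collecting configMapKeyRef references, using pared-down local types in place of the corev1 API types:

```go
package main

import "fmt"

// Pared-down env-var types mirroring the shape of corev1.EnvVar.
// The real check operates on the Kubernetes API types.
type configMapKeyRef struct{ Name, Key string }
type envVarSource struct{ ConfigMapKeyRef *configMapKeyRef }
type envVar struct {
	Name      string
	ValueFrom *envVarSource
}

// referencedConfigMaps collects configmap names referenced via
// valueFrom.configMapKeyRef, the case the current check misses.
func referencedConfigMaps(envs []envVar) []string {
	var names []string
	for _, e := range envs {
		if e.ValueFrom != nil && e.ValueFrom.ConfigMapKeyRef != nil {
			names = append(names, e.ValueFrom.ConfigMapKeyRef.Name)
		}
	}
	return names
}

func main() {
	envs := []envVar{
		{Name: "PLAIN"},
		{Name: "DB_HOST", ValueFrom: &envVarSource{ConfigMapKeyRef: &configMapKeyRef{Name: "db-config", Key: "host"}}},
	}
	fmt.Println(referencedConfigMaps(envs))
}
```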
The check should also flag configMapKeyRef references when the YAML is reviewed.
The clusterlint API should take context arguments where appropriate to allow for cancellation. In particular, we should try to avoid hanging on stuck requests to the k8s API.
As a suggestion, add a verbose mode behind a flag.
For example, in the fully_qualified_image check, without verbose mode the output is: Use fully qualified image for container 'prometheus-node-exporter'
With verbose mode, it could also print the fully qualified image name the check derived: $correct_image_name
Since the check already calls the ParseAnyReference function in the script, why not share that function's output?
Currently, checks can be disabled via object metadata.
However, this method does not allow you to disable checks on a per-container basis.
Is there a way to solve this?
The admission control webhook check in the doks group will currently throw an error for webhooks that apply only to CRDs, but such webhooks would never actually cause a problem for DOKS upgrades since they won't prevent pods from starting. The admission control webhook check should ignore any webhook configuration that doesn't apply to resources in the v1 or apps/v1 apiGroups.
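A sketch of the proposed filter. Note that in API terms the core group is the empty string and apps/v1 resources live in the apps group; the types below are pared-down stand-ins for the real admissionregistration webhook rules:

```go
package main

import "fmt"

// rule is a pared-down admission webhook rule; the real check would
// inspect admissionregistration.k8s.io/v1 webhook Rules.
type rule struct{ APIGroups []string }

// appliesToCoreWorkloads reports whether any rule targets the core
// ("") or apps API groups, i.e. the resources that must keep working
// during a DOKS upgrade. Webhooks scoped only to CRD groups could be
// skipped by the check.
func appliesToCoreWorkloads(rules []rule) bool {
	for _, r := range rules {
		for _, g := range r.APIGroups {
			if g == "" || g == "apps" || g == "*" {
				return true
			}
		}
	}
	return false
}

func main() {
	crdOnly := []rule{{APIGroups: []string{"cert-manager.io"}}}
	core := []rule{{APIGroups: []string{"apps"}}}
	fmt.Println(appliesToCoreWorkloads(crdOnly), appliesToCoreWorkloads(core))
}
```

The wildcard group ("*") is treated as applying to core workloads, since such a webhook also intercepts pods and deployments.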
I'm trying to upgrade from dok8s v1.28 to v1.29, but clusterlint is telling me that cert-manager's webhook, which uses a timeoutSeconds of 30, will block the upgrade. Would someone mind filling me in on why that's suddenly the case? I've been using cert-manager for years, across several k8s versions, without issue. I see in the upstream docs where it says:
The timeout value must be between 1 and 30 seconds.
But it's unclear whether that's inclusive or not. Clusterlint has seemingly decided that it's 1-inclusive and 30-exclusive. Is that actually based in reality? Is there another document I've missed? 29 seems so random, haha!
For now I will customize my deployment and set this to 29 to make clusterlint happy, but I'd like to get to the bottom of this. I've logged an issue against cert-manager as well. If clusterlint is correct, cert-manager should probably make the default value 29. On the other hand, if 30 is indeed a valid value, clusterlint probably shouldn't complain about it.