
charts's Introduction

Cowboy Sysop Charts

License Conventional Commits Semantic Versioning

Renovate Release Charts Downloads

Charts

Name - Description
dolibarr - A modern software package to manage your company or foundation's activity
flowise - Drag & drop UI to build your customized LLM flow
kroki - Creates diagrams from textual descriptions
kubebox - Terminal and Web console for Kubernetes
kubeview - Kubernetes cluster visualiser and graphical explorer
lighthouse-ci - Enables running a server to display Lighthouse CI results
local-ai - A drop-in replacement REST API compatible with OpenAI API specifications for local inferencing
mongo-express - Web-based MongoDB admin interface, written with Node.js and express
ollama - Get up and running with large language models, locally
qdrant - Vector similarity search engine and vector database
quickchart - Chart image and QR code web API
vertical-pod-autoscaler - Set of components that automatically adjust the amount of CPU and memory requested by pods running in the Kubernetes cluster
whoami - Tiny Go webserver that prints os information and HTTP request to output

Quality

All these charts are tested with ct on multiple Kubernetes versions, from v1.24 to v1.27, with the help of kind.

Contributing

Because this is a personal project, because I want to keep these charts consistent with each other, and because I don't have enough time to write down the best practices I follow, I'm not accepting any pull requests.

However, I'll be happy to add some new features to these charts, so don't hesitate to open issues to submit your needs.

Repository Settings

To make the Renovate GitHub Action work, add a secret named RENOVATE_TOKEN containing a Personal Access Token with the repo and workflow scopes.
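If you use the GitHub CLI, one possible way to create that secret is (a sketch; the token value is of course a placeholder):

gh secret set RENOVATE_TOKEN --body "<personal-access-token>"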

Contributors

agrevtsev, artificial-aidan, gacko, mikebryant, nervo, renovate-bot, renovate[bot], robinelfrink, sebastien-prudhomme, starlightromero, ymrsmns


charts's Issues

Finer labelling on mongo-express?

Hello,

Some use cases need finer-grained label handling (labels specific to Services or Ingresses, or even shared by all the objects).

Here are some examples from the Bitnami charts:
https://github.com/bitnami/charts/blob/master/bitnami/redis/values.yaml#L35
https://github.com/bitnami/charts/blob/master/bitnami/memcached/values.yaml#L44
bitnami/charts#6815

Would it be possible to add this feature to the mongo-express chart?
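A values layout along the lines of the Bitnami examples above could look like this (key names are only a sketch, not existing values of the mongo-express chart):

commonLabels:
  team: platform
service:
  labels:
    exposure: internal
ingress:
  labels:
    traffic: external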

Here is the link to a pull request:

#73

You can reuse my fork as a starting point.

Thanks in advance

Consume k8s secrets

Hello.

I would like to deploy MongoDB and mongo-express (along with my app) with Skaffold in development mode. To avoid scripts that create secrets manually, I would really like to parametrize mongo-express so it can directly use the secret generated by MongoDB.

A proposal is made in #89, which could be extended to any other chart too.
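As a rough sketch of what the proposal could look like on the values side (key names are hypothetical, not current chart values):

mongodb:
  enabled: false                                             # MongoDB is deployed separately
existingSecret: my-release-mongodb                           # Secret created by the MongoDB chart
existingSecretKeyMongodbAdminPassword: mongodb-root-password # key holding the admin password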

vpa-updater throws an error about a non-existing container

Hello!
We are using the latest VPA chart:

---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: vertical-pod-autoscaler
  namespace: kube-system
spec:
  interval: 14m
  url: "https://cowboysysop.github.io/charts/"
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: vertical-pod-autoscaler
  namespace: kube-system
spec:
  install:
    createNamespace: true
    crds: CreateReplace
  upgrade:
    crds: CreateReplace
  releaseName: vertical-pod-autoscaler
  interval: 9m
  chart:
    spec:
      # renovate: registryUrl=https://cowboysysop.github.io/charts/
      chart: vertical-pod-autoscaler
      version: 7.2.0
      sourceRef:
        kind: HelmRepository
        name: vertical-pod-autoscaler
        namespace: kube-system
      interval: 14m
  values:
    admissionController:
      tolerations:
      - key: "arch"
        operator: "Equal"
        value: "arm64"
        effect: "NoSchedule"
      resources:
        limits:
          memory: 50Mi
        requests:
          cpu: 10m
          memory: 40Mi
    recommender:
      extraArgs:
        pod-recommendation-min-memory-mb: 30
      tolerations:
      - key: "arch"
        operator: "Equal"
        value: "arm64"
        effect: "NoSchedule"
      resources:
        limits:
          memory: 250Mi
        requests:
          cpu: 10m
          memory: 150Mi
    updater:
      tolerations:
      - key: "arch"
        operator: "Equal"
        value: "arm64"
        effect: "NoSchedule"
      resources:
        limits:
          memory: 50Mi
        requests:
          cpu: 10m
          memory: 50Mi

However, the vpa-updater pod keeps spamming errors about a cert-manager container:

vertical-pod-autoscaler-updater-7747d6547-qbt96 I1020 10:14:53.194314       1 capping.go:79] no matching Container found for recommendation cert-manager
vertical-pod-autoscaler-updater-7747d6547-qbt96 I1020 10:14:53.194621       1 capping.go:79] no matching Container found for recommendation cert-manager
vertical-pod-autoscaler-updater-7747d6547-qbt96 I1020 10:15:53.202479       1 capping.go:79] no matching Container found for recommendation cert-manager
vertical-pod-autoscaler-updater-7747d6547-qbt96 I1020 10:15:53.232326       1 capping.go:79] no matching Container found for recommendation cert-manager
vertical-pod-autoscaler-updater-7747d6547-qbt96 I1020 10:16:53.193146       1 capping.go:79] no matching Container found for recommendation cert-manager

Here is the cert-manager deployment and its VPA:

---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  interval: 14m
  url: "https://charts.jetstack.io/"
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  install:
    createNamespace: true
    crds: CreateReplace
  upgrade:
    crds: CreateReplace
  interval: 9m
  chart:
    spec:
      # renovate: registryUrl=https://charts.jetstack.io/
      chart: cert-manager
      version: v1.13.1
      sourceRef:
        kind: HelmRepository
        name: cert-manager
        namespace: cert-manager
      interval: 14m
  values:
    installCRDs: true
    serviceAccount:
      create: false
      name: certmanager-oidc
    global:
      priorityClassName: above-average
    prometheus:
      enabled: true
      servicemonitor:
        enabled: true
        prometheusInstance: prometheus-kube-prometheus-prometheus
    ingressShim:
      defaultIssuerName: letsencrypt-prod
      defaultIssuerKind: ClusterIssuer
    webhook:
      tolerations:
      - key: "arch"
        operator: "Equal"
        value: "arm64"
        effect: "NoSchedule"
      resources:
        limits:
          memory: 64Mi
        requests:
          memory: 32Mi
          cpu: 10m
    cainjector:
      tolerations:
      - key: "arch"
        operator: "Equal"
        value: "arm64"
        effect: "NoSchedule"
      extraArgs:
      - "--leader-elect=false"
      resources:
        limits:
          memory: 512Mi
        requests:
          memory: 128Mi
          cpu: 10m
    resources:
      limits:
        memory: 384Mi
      requests:
        memory: 160Mi
        cpu: 10m
    tolerations:
    - key: "arch"
      operator: "Equal"
      value: "arm64"
      effect: "NoSchedule"
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cert-manager
  updatePolicy:
    updateMode: Recreate
    minReplicas: 1
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: cert-manager-cainjector
  namespace: cert-manager
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cert-manager-cainjector
  updatePolicy:
    updateMode: Recreate
    minReplicas: 1
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: cert-manager-webhook
  namespace: cert-manager
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cert-manager-webhook
  updatePolicy:
    updateMode: Recreate
    minReplicas: 1

And there is no container named cert-manager in the deployment it refers to:

Name:                   cert-manager
Namespace:              cert-manager
CreationTimestamp:      Wed, 18 Oct 2023 17:14:32 +0300
Labels:                 app=cert-manager
                        app.kubernetes.io/component=controller
                        app.kubernetes.io/instance=cert-manager
                        app.kubernetes.io/managed-by=Helm
                        app.kubernetes.io/name=cert-manager
                        app.kubernetes.io/version=v1.13.1
                        helm.sh/chart=cert-manager-v1.13.1
                        helm.toolkit.fluxcd.io/name=cert-manager
                        helm.toolkit.fluxcd.io/namespace=cert-manager
Annotations:            deployment.kubernetes.io/revision: 1
                        meta.helm.sh/release-name: cert-manager
                        meta.helm.sh/release-namespace: cert-manager
Selector:               app.kubernetes.io/component=controller,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=cert-manager
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=cert-manager
                    app.kubernetes.io/component=controller
                    app.kubernetes.io/instance=cert-manager
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=cert-manager
                    app.kubernetes.io/version=v1.13.1
                    helm.sh/chart=cert-manager-v1.13.1
  Service Account:  certmanager-oidc
  Containers:
   cert-manager-controller:
    Image:       quay.io/jetstack/cert-manager-controller:v1.13.1
    Ports:       9402/TCP, 9403/TCP
    Host Ports:  0/TCP, 0/TCP
    Args:
      --v=2
      --cluster-resource-namespace=$(POD_NAMESPACE)
      --leader-election-namespace=kube-system
      --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver:v1.13.1
      --default-issuer-name=letsencrypt-prod
      --default-issuer-kind=ClusterIssuer
      --max-concurrent-challenges=60
    Limits:
      memory:  384Mi
    Requests:
      cpu:     10m
      memory:  160Mi
    Environment:
      POD_NAMESPACE:     (v1:metadata.namespace)
    Mounts:             <none>
  Volumes:              <none>
  Priority Class Name:  above-average
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   cert-manager-7d47d666f8 (1/1 replicas created)
Events:          <none>
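One thing worth checking, as a sketch rather than a confirmed fix: the recommendation for a container named cert-manager may come from a stale VPA checkpoint created while the container still had that name (it is now cert-manager-controller). The leftover recommendations and checkpoints can be inspected with kubectl, for example:

# List the VPA checkpoints kept for the cert-manager namespace
kubectl -n cert-manager get verticalpodautoscalercheckpoints
# Show which container names the current recommendation targets
kubectl -n cert-manager get vpa cert-manager -o jsonpath='{.status.recommendation.containerRecommendations[*].containerName}'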

VPA hostNetwork

Hi,

thank you for the VPA chart! I'm using AWS EKS with Calico CNI (a common practice to get rid of the ridiculous pod-per-node limits), and this setup needs hostNetwork enabled for trusted services; see the text in the blue box here, or the issue with a solution at the bottom here.

Can you please add an option to the chart values so I don't have to patch the manifests after installing? It should be really easy; this is what I'm currently using to make it work:

helm install vpa cowboysysop/vertical-pod-autoscaler \
    --namespace=kube-system \
    --values values.vpa.yaml
kubectl patch deployment vpa-vertical-pod-autoscaler-admission-controller \
    --namespace=kube-system \
    --patch "$(cat hostnetwork.patch.yaml)"
kubectl patch deployment vpa-vertical-pod-autoscaler-recommender \
    --namespace=kube-system \
    --patch "$(cat hostnetwork.patch.yaml)"
kubectl patch deployment vpa-vertical-pod-autoscaler-updater \
    --namespace=kube-system \
    --patch "$(cat hostnetwork.patch.yaml)"

And the patch is:

spec:
    template:
        spec:
            hostNetwork: true
            dnsPolicy: ClusterFirstWithHostNet

A simple setting in values could drive this:

hostNetwork:
    enabled: true

Example here
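If the chart exposed such a flag, the template side might look roughly like this (a sketch, not the chart's current code):

      {{- if .Values.hostNetwork.enabled }}
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      {{- end }}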

Thank you in advance

Mongo-express Chart dependencies missing

I'm not sure the bitnami dependencies declared in the Chart.yaml for mongo-express are available any longer (they seem to be quite old versions and I can't see them listed in the bitnami repo). I don't seem to be able to use the helm chart because of this.

checksum/secret is recalculated every helm diff causing helm to mark it as changed

The checksum/secret annotation changes every time we diff the chart, which causes it to show as changed. This causes problems for our CI/CD process. I'm not really sure what this checksum is used for, but I'm wondering if there is a way to avoid it changing every run. Maybe we could use the existing certs instead of generating new certs every run, or something like that?

This is the line in question.

checksum/secret: {{ include (print $.Template.BasePath "/admission-controller/tls-secret.yaml") . | sha256sum }}
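One possible direction, purely as a sketch relying on Helm's lookup function (not something the chart currently does), would be for the tls-secret template to reuse the existing Secret's certificate material when present, so the rendered output, and therefore the checksum, stays stable between runs (the Secret name below assumes the admission controller fullname; the actual name may differ):

{{- $secretName := include "vertical-pod-autoscaler.admissionController.fullname" . }}
{{- $existing := lookup "v1" "Secret" .Release.Namespace $secretName }}
{{- if $existing }}
{{- /* Reuse previously generated certificate material */}}
tls.crt: {{ index $existing.data "tls.crt" }}
tls.key: {{ index $existing.data "tls.key" }}
{{- else }}
{{- /* First install: generate a new CA and certificate */}}
{{- $cn := printf "%s.%s.svc" "vpa-webhook" .Release.Namespace }}
{{- $ca := genCA $cn 365 }}
{{- $cert := genSignedCert $cn nil (list $cn) 365 $ca }}
tls.crt: {{ $cert.Cert | b64enc }}
tls.key: {{ $cert.Key | b64enc }}
{{- end }}

Note that lookup returns nothing during a plain helm template render without cluster access, so this mainly helps helm upgrade or helm diff against a live cluster.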

Thank you for maintaining this helm chart and taking a moment to consider my issue.

Istio service mesh blocks outbound http traffic because of wrong service name

Issue

Deployed Istio 1.6 and noticed that external HTTPS requests were failing from my pod.

curl -Iv https://api.example.com/_health
* Expire in 0 ms for 6 (transfer 0x56328e019f50)
* [repeated "Expire in ... ms" lines omitted]
*   Trying 1.2.3.4...
* TCP_NODELAY set
* Connected to api.example.com (1.2.3.4) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* error:1408F10B:SSL routines:ssl3_get_record:wrong version number
* Closing connection 0
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number

Rolled back the Istio upgrade and noticed this error in the istiod proxy status:

2020-06-10T05:45:06.283908Z info  ads Push Status: {
    "pilot_conflict_outbound_listener_http_over_https": {
        "vpa-webhook.kube-system.svc.cluster.local": {
            "proxy": "pod-f857f6ffb-hhzx6.staging",
            "message": "listener conflict detected: service vpa-webhook.kube-system.svc.cluster.local specifies an HTTP service on HTTPS only port 443."
        }
    },
    "pilot_conflict_outbound_listener_tcp_over_current_tcp": {},
    "pilot_eds_no_instances": {}
}

Resolution

Manually updating the admission-controller service port name to https resolved the issue.
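For reference, the relevant part of the patched Service looked roughly like this (a sketch; only the port name changes from http to https so that Istio's protocol detection treats the port as HTTPS):

apiVersion: v1
kind: Service
metadata:
  name: vpa-webhook
  namespace: kube-system
spec:
  ports:
    - name: https        # was "http", which conflicts with an HTTPS-only port 443
      port: 443
      targetPort: 8000
      protocol: TCP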

Add option to serve mongo-express in a subdirectory

I'd like to use this chart but serve mongo-express in a subdirectory. I have used an ingress rewrite for that, but then all CSS and JS files fail to load because the app tries to load them from / instead of /subdir.
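For reference, the kind of values I would expect to need (siteBaseUrl appears in the probe-path issue further down this page; the ingress part is only a sketch):

siteBaseUrl: /subdir/
ingress:
  enabled: true
  hosts:
    - host: my.domain.com
      paths:
        - /subdir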

Lighthouse Invalid project slug "example"

Hi

I have deployed the cowboysysop/lighthouse-ci helm chart and added a site:

  sites:
    - urls:
        - https://www.mysite.com
      schedule: 00 14 * * *
      projectSlug: "example"

When the schedule runs I get this message:

PSI collection failure for Site #0: Invalid project slug "example"

Also, when I log in to the UI I get a message saying:
Run lhci wizard to setup your first project.

Generate helm repo index

As this repository is a Helm chart repository, could you create a repository index using helm repo index, and possibly use your CI to update the index, so charts can be imported as dependencies?
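For reference, a minimal sketch of what generating and publishing the index involves (the URL is the chart repository already referenced elsewhere on this page):

helm repo index . --url https://cowboysysop.github.io/charts/
git add index.yaml
git commit -m "Update Helm repository index"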

feat: add ability to specify liveness and readiness path

PR #35 added liveness and readiness probe configuration. However, it does not work properly if you also configure the ingress with a path.
To get ingress to work with a path, I found that I had to provide a values.yaml override file with:

siteBaseUrl: /mongoexpress/

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /mongoexpress/$2
  hosts:
    - host: minikube.mshome.net
      paths:
        - /mongoexpress(/|$)(.*)

This works.
However, because this changes the site base path, the hardcoded probe path in templates/deployment.yaml is incorrect:

          livenessProbe:
            httpGet:
              path: /

(Similarly with readinessProbe.)
The path should instead be configurable:
path: {{ .Values.livenessProbe.path }}
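A fuller sketch of the template change (assuming the probe port is named http, as is common in these charts):

          livenessProbe:
            httpGet:
              path: {{ .Values.livenessProbe.path | default "/" }}
              port: http
          readinessProbe:
            httpGet:
              path: {{ .Values.readinessProbe.path | default "/" }}
              port: http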

Application version update

Hi,
The Kroki application version currently used in the chart is quite old, more than a year. Is there any chance of upgrading to one of the latest versions?

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

Renovate tried to run on this repository, but found these problems.

  • WARN: Found renovate config warnings

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

github-actions
.github/workflows/lint-test.yaml
  • actions/checkout v4.1.7
  • azure/setup-helm v3.5
  • actions/setup-python v5.1.1
  • helm/chart-testing-action v2.6.1
  • actions/checkout v4.1.7
  • azure/setup-helm v3.5
  • actions/setup-python v5.1.1
  • helm/chart-testing-action v2.6.1
  • helm/kind-action v1.10.0
  • actions/checkout v4.1.7
  • bridgecrewio/checkov-action v12.2845.0
.github/workflows/release.yml
  • actions/checkout v4.1.7
  • crazy-max/ghaction-import-gpg v6.1.0
  • azure/setup-helm v3.5
  • helm/chart-releaser-action v1.6.0
.github/workflows/renovate.yml
  • actions/checkout v4.1.7
  • renovatebot/github-action v40.2.5
helm-values
charts/dolibarr/values.yaml
  • docker.io/tuxgasy/dolibarr 19.0.2
  • docker.io/atkrad/wait4x 2.14.0
  • ghcr.io/cowboysysop/pytest 1.0.41
charts/flowise/values.yaml
  • docker.io/flowiseai/flowise 2.0.4
  • docker.io/atkrad/wait4x 2.14.1
  • ghcr.io/cowboysysop/pytest 1.0.41
charts/kroki/values.yaml
  • docker.io/yuzutech/kroki 0.25.0
  • docker.io/yuzutech/kroki-bpmn 0.25.0
  • docker.io/yuzutech/kroki-diagramsnet 0.25.0
  • docker.io/yuzutech/kroki-excalidraw 0.25.0
  • docker.io/yuzutech/kroki-mermaid 0.25.0
  • ghcr.io/cowboysysop/pytest 1.0.41
charts/kubebox/values.yaml
  • docker.io/astefanutti/kubebox 0.9.0-server
  • ghcr.io/cowboysysop/pytest 1.0.35
charts/kubeview/values.yaml
  • ghcr.io/benc-uk/kubeview 0.1.31
  • ghcr.io/cowboysysop/pytest 1.0.35
charts/lighthouse-ci/values.yaml
  • docker.io/patrickhulce/lhci-server 0.8.1
  • docker.io/atkrad/wait4x 2.14.0
  • ghcr.io/cowboysysop/pytest 1.0.35
charts/local-ai/values.yaml
  • quay.io/go-skynet/local-ai v1.25.0-ffmpeg
  • ghcr.io/cowboysysop/pytest 1.0.35
charts/mongo-express/values.yaml
  • docker.io/mongo-express 1.0.2
  • ghcr.io/cowboysysop/pytest 1.0.41
charts/ollama/values.yaml
  • docker.io/ollama/ollama 0.3.5
  • ghcr.io/cowboysysop/pytest 1.0.41
charts/qdrant/values.yaml
  • docker.io/qdrant/qdrant v1.4.1
  • ghcr.io/cowboysysop/pytest 1.0.35
charts/quickchart/values.yaml
  • docker.io/ianw/quickchart v1.7.1
  • ghcr.io/cowboysysop/pytest 1.0.35
charts/vertical-pod-autoscaler/values.yaml
  • registry.k8s.io/autoscaling/vpa-admission-controller 1.1.2
  • registry.k8s.io/autoscaling/vpa-recommender 1.1.2
  • registry.k8s.io/autoscaling/vpa-updater 1.1.2
  • docker.io/bitnami/kubectl 1.29.3
  • ghcr.io/cowboysysop/pytest 1.0.41
charts/whoami/values.yaml
  • docker.io/traefik/whoami v1.10.2
  • ghcr.io/cowboysysop/pytest 1.0.35
helmv3
charts/dolibarr/Chart.yaml
  • common 2.19.0
  • mariadb 17.0.1
charts/flowise/Chart.yaml
  • common 2.21.0
  • mariadb 18.2.6
  • postgresql 15.5.7
charts/kroki/Chart.yaml
  • common 2.20.3
charts/kubebox/Chart.yaml
  • common 2.9.0
charts/kubeview/Chart.yaml
  • common 2.9.0
charts/lighthouse-ci/Chart.yaml
  • common 2.19.0
  • mariadb 17.0.1
  • postgresql 15.1.4
charts/local-ai/Chart.yaml
  • common 2.9.0
charts/mongo-express/Chart.yaml
  • common 2.19.1
  • mongodb 15.1.5
charts/ollama/Chart.yaml
  • common 2.21.0
charts/qdrant/Chart.yaml
  • common 2.9.0
charts/quickchart/Chart.yaml
  • common 2.9.0
charts/vertical-pod-autoscaler/Chart.yaml
  • common 2.19.1
charts/whoami/Chart.yaml
  • common 2.9.0
regex
.github/workflows/lint-test.yaml
  • helm v3.9.4
  • helm v3.10.3
  • helm v3.11.3
  • helm v3.12.3
  • helm v3.12.3
.github/workflows/release.yml
  • helm v3.12.3
.github/workflows/lint-test.yaml
  • kindest/node v1.24.17
  • kindest/node v1.25.16
  • kindest/node v1.26.15
  • kindest/node v1.27.13
.github/workflows/lint-test.yaml
  • python 3.12.4
  • python 3.12.4
.github/workflows/renovate.yml
  • ghcr.io/renovatebot/renovate 38.24.1
.github/workflows/lint-test.yaml
  • helm v3.9.4
  • helm v3.10.3
  • helm v3.11.3
  • helm v3.12.3
  • helm v3.12.3
.github/workflows/release.yml
  • helm v3.12.3
.github/workflows/lint-test.yaml
  • kindest/node v1.24.17
  • kindest/node v1.25.16
  • kindest/node v1.26.15
  • kindest/node v1.27.13
.github/workflows/lint-test.yaml
  • python 3.12.4
  • python 3.12.4
.github/workflows/renovate.yml
  • ghcr.io/renovatebot/renovate 38.24.1

Support for custom labels on ClusterRole resources

I would like to add some dedicated labels on the ClusterRole resources deployed with the vertical-pod-autoscaler chart.

Here is an example:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
  name: vertical-pod-autoscaler-vertical-pod-autoscaler-recommender
rules:
  ...
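A possible values-side shape for this, purely as a sketch (these keys do not exist in the chart today):

recommender:
  rbac:
    additionalLabels:
      rbac.authorization.k8s.io/aggregate-to-admin: "true"
      rbac.authorization.k8s.io/aggregate-to-edit: "true"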

If you want me to provide a PR, don't hesitate to give me a wink. Thanks a bunch.

Support for priorityClassName

Hello. We'd like to leverage this chart to install VPA in our clusters, but we need to ensure that random workloads in the cluster do not deschedule the VPA pods; priority classes would help with that.

I opened a PR before reading your README about not accepting PRs, but I believe the change is very straightforward.
#53
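For context, this is the kind of values we would like to be able to set (hypothetical keys, following the chart's per-component layout):

admissionController:
  priorityClassName: system-cluster-critical
recommender:
  priorityClassName: system-cluster-critical
updater:
  priorityClassName: system-cluster-critical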

Unclear appVersion references in vertical-pod-autoscaler HelmChart Image tags

The image tags specified in values.yaml have not been updated for a long time, and in the current version they are all 1.0.0.
Example tags:

It is hard to tell whether this is really the appVersion of the chart. Maybe they could just be left empty and derived from appVersion in Chart.yaml?
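A common Helm pattern that would address this, sketched here for the admission-controller image (not the chart's current template):

image: "{{ .Values.admissionController.image.repository }}:{{ .Values.admissionController.image.tag | default .Chart.AppVersion }}"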

Cannot install vertical-pod-autoscaler in AWS EKS

Hello

I am using AWS EKS and I had previously installed vertical-pod-autoscaler.
Now I am trying to update it to the latest version, but I am getting this error:

Error: chart requires kubeVersion: >=1.17 which is incompatible with Kubernetes v1.21.5-eks-bc4871b

I tried to overwrite kubeVersion in the values with 1.21.5 and 1.21, but it did not work.

I was able to install it by doing a helm pull, removing kubeVersion from Chart.yaml and installing the release manually from the local folder.
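The usual root cause is that EKS version strings such as v1.21.5-eks-bc4871b carry a suffix that semver treats as a pre-release, so they fail a plain >=1.17 constraint. A Chart.yaml constraint that tolerates such suffixes would look like this (a sketch, not necessarily what the chart ships):

kubeVersion: ">= 1.17.0-0"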

Thanks

Making image registry configurable in vertical-pod-autoscaler

Currently in vertical-pod-autoscaler, each component's deployment image (admission-controller, recommender and updater) is set to "{{ .Values.admissionController.image.repository }}:{{ .Values.admissionController.image.tag }}" in the templates (taking the admission-controller here as an example).

However, in some cases, one may need to change the registry name without changing the repository itself. This would be especially useful in cases where one has the same repository in multiple registries (GCR, Harbor...) and has to use a specific registry for a chart deployed in a specific environment.

I would thus suggest splitting the image section as follows:

image:
    registry: ""
    repository: ""
    tag: ""
    pullPolicy: ""

And updating the image: field of each component's deployment template: "{{ .Values.admissionController.image.registry }}/{{ .Values.admissionController.image.repository }}:{{ .Values.admissionController.image.tag }}"

Keeping the admission-controller example, it would lead to this section in the values:

image:
    registry: k8s.gcr.io
    repository: autoscaling/vpa-admission-controller
    tag: 0.13.0
    pullPolicy: IfNotPresent

I can create a PR for this if needed.

Documentation for ingress.hosts quotes a wrong field name

Hi
The reference to ingress.hosts[0].name should instead be to ingress.hosts[0].host.

E.g., the actual working values would be:

ingress:
  enabled: true
  hosts:
    - host: me.example.com
      paths: 
       - /

Note ingress.hosts.host rather than ingress.hosts.name.

Cant Connect to MongoReplicaSet

Hi there,
I am running a MongoDB replica set inside my k8s cluster, but unfortunately I cannot make a proper connection between mongo-express and MongoDB.
My MongoDB replica set is deployed in a namespace called mongo-replicaset. According to the documentation I need to set the variable mongodbServer inside the mongo-express values file like so:

mongodbServer: "mongodb-0.mongodb-headless.mongo-replicaset.svc.cluster.local,mongodb-1.mongodb-headless.mongo-replicaset.svc.cluster.local,mongodb-2.mongodb-headless.mongo-replicaset.svc.cluster.local/admin?replicaSet=rs0"

But I get the following error inside the mongo-express container:

mongodb://<user>:<password>@mongodb-0.mongodb-headless.mongo-replicaset.svc.cluster.local,mongodb-1.mongodb-headless.mongo-replicaset.svc.cluster.local,mongodb-2.mongodb-headless.mongo-replicaset.svc.cluster.local/admin?replicaSet=rs0:27017/
MongoError: setName from ismaster does not match provided connection setName [rs0] != [{ rs0: '27017/' }]

For some reason the value :27017/ always gets appended to the end of my connection string, so the error itself is clear to me given the incorrect connection string.

I am currently using the 1.0.0-alpha.4 image with the mongo-express Helm chart.

Is there something I am missing here?

Any help on that would just be awesome.

Best Regards
Martin

Make mongo-express service.nodePort configurable

I am using mongo-express for a personal project and I am really happy with it. Great work:)
There is a small enhancement that I would benefit from, namely making the service nodePort configurable. I am using service.type=NodePort.
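A sketch of the values I would like to be able to set (the nodePort key is hypothetical):

service:
  type: NodePort
  nodePort: 30081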

Image scan has detected several vulnerabilities in vertical-pod-autoscaler

Hi,

The Trivy image scanner has detected several vulnerabilities in the latest vertical-pod-autoscaler images. Could you please fix the HIGH vulnerabilities?

I think the report is the same for all images:

k8s.gcr.io/autoscaling/vpa-admission-controller:0.10.0
k8s.gcr.io/autoscaling/vpa-recommender:0.10.0
k8s.gcr.io/autoscaling/vpa-updater:0.10.0

E.g.:

trivy image k8s.gcr.io/autoscaling/vpa-admission-controller:0.10.0
2022-02-25T09:51:33.315+0100	INFO	Detected OS: debian
2022-02-25T09:51:33.315+0100	INFO	Detecting Debian vulnerabilities...
2022-02-25T09:51:33.316+0100	INFO	Number of language-specific files: 1
2022-02-25T09:51:33.316+0100	INFO	Detecting gobinary vulnerabilities...

k8s.gcr.io/autoscaling/vpa-admission-controller:0.10.0 (debian 11.2)
====================================================================
Total: 0 (UNKNOWN: 0, LOW: 0, MEDIUM: 0, HIGH: 0, CRITICAL: 0)


admission-controller (gobinary)
===============================
Total: 4 (UNKNOWN: 1, LOW: 0, MEDIUM: 1, HIGH: 2, CRITICAL: 0)

+--------------------------+------------------+----------+-------------------+----------------+---------------------------------------+
|         LIBRARY          | VULNERABILITY ID | SEVERITY | INSTALLED VERSION | FIXED VERSION  |                 TITLE                 |
+--------------------------+------------------+----------+-------------------+----------------+---------------------------------------+
| github.com/gogo/protobuf | CVE-2021-3121    | HIGH     | v1.3.1            | 1.3.2          | gogo/protobuf:                        |
|                          |                  |          |                   |                | plugin/unmarshal/unmarshal.go         |
|                          |                  |          |                   |                | lacks certain index validation        |
|                          |                  |          |                   |                | -->avd.aquasec.com/nvd/cve-2021-3121  |
+--------------------------+------------------+          +-------------------+----------------+---------------------------------------+
| golang.org/x/text        | CVE-2020-14040   |          | v0.3.2            | 0.3.3          | golang.org/x/text: possibility        |
|                          |                  |          |                   |                | to trigger an infinite loop in        |
|                          |                  |          |                   |                | encoding/unicode could lead to...     |
|                          |                  |          |                   |                | -->avd.aquasec.com/nvd/cve-2020-14040 |
+                          +------------------+----------+                   +----------------+---------------------------------------+
|                          | CVE-2021-38561   | UNKNOWN  |                   | 0.3.7          | Due to improper index calculation,    |
|                          |                  |          |                   |                | an incorrectly formatted              |
|                          |                  |          |                   |                | language tag can cause...             |
|                          |                  |          |                   |                | -->avd.aquasec.com/nvd/cve-2021-38561 |
+--------------------------+------------------+----------+-------------------+----------------+---------------------------------------+
| k8s.io/client-go         | CVE-2020-8565    | MEDIUM   | v0.18.3           | 0.20.0-alpha.2 | kubernetes: Incomplete fix            |
|                          |                  |          |                   |                | for CVE-2019-11250 allows for         |
|                          |                  |          |                   |                | token leak in logs when...            |
|                          |                  |          |                   |                | -->avd.aquasec.com/nvd/cve-2020-8565  |
+--------------------------+------------------+----------+-------------------+----------------+---------------------------------------+

Add podLabels to deployments

I would like to be able to add a list of custom labels to my deployment.

See my pull request here: #17

This allows me to deploy this chart (for example mongo-express) in my organisation and have it communicate with the mongodb pod. There is a network security policy that restricts communication between pods if they are not labeled correctly.
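A sketch of the values shape proposed in the pull request (the label shown is only an example):

podLabels:
  network-policy/allow-mongodb: "true"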

Mongo-express: Support unauthorized liveness/readiness check

I am trying to deploy mongo express to GKE with a GCP Ingress LoadBalancer to make it available to the internet.

In doing so I ran into an issue where all pods are ready, but the LoadBalancer creates its own health check, which supposedly cannot be authenticated. Looking at the generated resources, the liveness & readiness probes seem to require an Authentication header and I assume without it, the health check would fail.

Would it be possible to support a separate health check which does not require any authentication?

Also, are my assumptions correct and do you see another workaround for this issue?

VPA: Unable to install on EKS fargate node (toleration issue)

Please add the ability to set custom tolerations on the job that installs the CRDs (as you already do for the three VPA controllers).

https://github.com/cowboysysop/charts/blob/master/charts/vertical-pod-autoscaler/templates/crds/job.yaml

Without this, Fargate-only clusters are unable to run it:

4m53s       Warning   FailedScheduling       pod/vertical-pod-autoscaler-crds-9sp7w                               0/13 nodes are available: 13 node(s) had taint {eks.amazonaws.com/compute-type: fargate}, that the pod didn't tolerate.
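A sketch of the values we would like to be able to set for the CRDs job (the crds key is hypothetical, mirroring what the three controllers already expose):

crds:
  tolerations:
    - key: eks.amazonaws.com/compute-type
      operator: Equal
      value: fargate
      effect: NoSchedule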

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.


⚠ Dependency Lookup Warnings ⚠

  • Renovate failed to look up the following dependencies: bridgecrewio/checkov-action.

Files affected: .github/workflows/lint-test.yaml


Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

github-actions
.github/workflows/lint-test.yaml
  • actions/checkout v3.0.2
  • azure/setup-helm v3.3
  • actions/setup-python v4.2.0
  • helm/chart-testing-action v2.3.0
  • actions/checkout v3.0.2
  • azure/setup-helm v3.3
  • actions/setup-python v4.2.0
  • helm/chart-testing-action v2.3.0
  • helm/kind-action v1.3.0
  • actions/checkout v3.0.2
  • bridgecrewio/checkov-action v12.1774.0
.github/workflows/release.yml
  • actions/checkout v3.0.2
  • crazy-max/ghaction-import-gpg v5.1.0
  • azure/setup-helm v3.3
  • helm/chart-releaser-action v1.4.0
regex
.github/workflows/lint-test.yaml
  • cert-manager v1.6.1
  • cert-manager v1.6.1
.github/workflows/lint-test.yaml
  • helm v3.1.3
  • helm v3.2.4
  • helm v3.3.4
  • helm v3.4.2
  • helm v3.5.4
  • helm v3.6.3
  • helm v3.7.2
  • helm v3.7.2
.github/workflows/release.yml
  • helm v3.7.2
.github/workflows/lint-test.yaml
  • istio 1.12.2
.github/workflows/lint-test.yaml
  • kindest/node v1.19.16
  • kindest/node v1.20.15
  • kindest/node v1.21.14
  • kindest/node v1.22.13
  • kindest/node v1.23.10
  • kindest/node v1.24.4
  • kindest/node v1.25.0
.github/workflows/lint-test.yaml
  • knative-serving v1.2.0
.github/workflows/lint-test.yaml
  • python 3.10.7
  • python 3.10.7

  • Check this box to trigger a request for Renovate to run again on this repository

Parametrize VPA probes

I'm observing a large number of pod restarts with the VPA chart.

NAME                                                            READY   STATUS    RESTARTS   AGE
vertical-pod-autoscaler-admission-controller-6b9c78df6c-n865d   1/1     Running   0          129m
vertical-pod-autoscaler-recommender-59bc9bfd9c-92vks            1/1     Running   73         6d3h
vertical-pod-autoscaler-updater-7b694f8796-cwk2x                1/1     Running   50         6d3h

I suspect it's a matter of the default probe parameters; a 1s timeout might be too tight.

    Liveness:     http-get http://:metrics/metrics delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:metrics/metrics delay=0s timeout=1s period=10s #success=1 #failure=3

Could you add probe parameters to the chart, or at least increase the timeout value?
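This is the kind of values that would help (hypothetical keys, following the per-component layout used elsewhere in the chart):

recommender:
  livenessProbe:
    timeoutSeconds: 5
    failureThreshold: 5
  readinessProbe:
    timeoutSeconds: 5
updater:
  livenessProbe:
    timeoutSeconds: 5
  readinessProbe:
    timeoutSeconds: 5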

feat(vertical-pod-autoscaler): add the ability to choose the containerPort for https service in admission-controller

Hi,

VPA does not work correctly in EKS when using the AWS EKS Terraform module: the module's default security group rules do not allow port 8000 from the cluster API to the nodes, while the admission-controller webhook service and container use port 8000. As a result, the kube-apiserver errors because it cannot POST to https://vpa-webhook.kube-system.svc:443.

failed calling webhook "vpa.k8s.io": failed to call webhook: Post "https://vpa-webhook.kube-system.svc:443/?timeout=30s": context deadline exceeded

There are three solutions:

  1. Allow the modification of the containerPort in the admission-controller deployment, so it can be changed to either 443 or 10250, which are among the ports allowed by default by the AWS EKS Terraform module.
  2. Allow the traffic from the cluster API to the nodes for port 8000 via the following code:
module "eks" {
  ...
  node_security_group_additional_rules = {
    ...
    ingress_cluster_vpa = {
      description                   = "Cluster to node 8000"
      protocol                      = "tcp"
      from_port                     = 8000
      to_port                       = 8000
      type                          = "ingress"
      source_cluster_security_group = true
    }
  }
}
  3. Use helm post-rendering to adjust the manifests before applying, with the following files:
  • ./kustomization.yaml
resources:
  - all.yaml
patches:
  - patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/ports/0/containerPort
        value: 10250
    target:
      kind: Deployment
      name: '.*admission-controller'
  • ./kustomize
#!/bin/bash

CURRENT_DIR=$( dirname -- "$( readlink -f -- "$0"; )"; )

cat <&0 > $CURRENT_DIR/all.yaml

kubectl kustomize $CURRENT_DIR && rm $CURRENT_DIR/all.yaml

then in the installation use helm install ... --post-renderer path/to/our/kustomize/script
or in terraform

module "helm_install" "vpa" {
  ...
  post_render {
    binary_path = "path/to/our/kustomize/script"
  }
}

I prefer the first solution since it is the most flexible and doesn't require infrastructure changes. I will create a pull request for it soon.
This issue is related to #47, kubernetes/autoscaler#2789 and kubernetes/autoscaler#1547.

VPA repository switch from `k8s.gcr.io` to `registry.k8s.io`

Following the deprecation announcement of k8s.gcr.io and the move to registry.k8s.io by the Kubernetes project, it would be nice to update the repositories in the VPA helm chart to switch to this as well.

Staying on k8s.gcr.io will eventually break image updates for the VPA chart.

  • k8s.gcr.io/autoscaling/vpa-admission-controller -> registry.k8s.io/autoscaling/vpa-admission-controller
  • k8s.gcr.io/autoscaling/vpa-recommender -> registry.k8s.io/autoscaling/vpa-recommender
  • k8s.gcr.io/autoscaling/vpa-updater -> registry.k8s.io/autoscaling/vpa-updater

Official blog post: https://kubernetes.io/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/

(cluster-autoscaler helm chart has been updated this way: https://github.com/kubernetes/autoscaler/blob/master/charts/cluster-autoscaler/values.yaml#L231)
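Until the chart defaults change, a values override along these lines can be used (repository keys taken from the image-registry issue above; exact keys may differ per chart version):

admissionController:
  image:
    repository: registry.k8s.io/autoscaling/vpa-admission-controller
recommender:
  image:
    repository: registry.k8s.io/autoscaling/vpa-recommender
updater:
  image:
    repository: registry.k8s.io/autoscaling/vpa-updater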

Thanks in advance :)

vpa kubectl image arm64

The VPA chart runs great on arm64; however, the recently added Bitnami kubectl image is not multi-arch.

$ kubectl -n vpa get pods -o wide
NAME                                                              READY   STATUS    RESTARTS   AGE    IP            NODE   NOMINATED NODE   READINESS GATES
vpa-vertical-pod-autoscaler-crds-8xh68                            0/1     Error     0          4d6h   10.42.83.55   pi2    <none>           <none>
vpa-vertical-pod-autoscaler-crds-7jpx4                            0/1     Error     0          4d6h   10.42.83.36   pi2    <none>           <none>
vpa-vertical-pod-autoscaler-crds-z5xrj                            0/1     Error     0          4d6h   10.42.83.56   pi2    <none>           <none>
vpa-vertical-pod-autoscaler-crds-dq77n                            0/1     Error     0          4d6h   10.42.83.35   pi2    <none>           <none>
vpa-vertical-pod-autoscaler-crds-x29hg                            0/1     Error     0          4d6h   10.42.83.61   pi2    <none>           <none>
vpa-vertical-pod-autoscaler-crds-bxff2                            0/1     Error     0          4d6h   10.42.83.59   pi2    <none>           <none>
vpa-vertical-pod-autoscaler-crds-8n29l                            0/1     Error     0          4d6h   10.42.83.12   pi2    <none>           <none>
vpa-vertical-pod-autoscaler-admission-controller-7947d6cfbgb98r   1/1     Running   1          4d6h   10.42.83.36   pi2    <none>           <none>
vpa-vertical-pod-autoscaler-recommender-6dd8bdf665-tmfjp          1/1     Running   3          10d    10.42.83.63   pi2    <none>           <none>
vpa-vertical-pod-autoscaler-updater-84d4485c6c-r8qvj              1/1     Running   1          4d6h   10.42.83.47   pi2    <none>           <none>
$ kubectl -n vpa logs vpa-vertical-pod-autoscaler-crds-8xh68
standard_init_linux.go:228: exec user process caused: exec format error

I can easily work around this since I have a multi-arch cluster, but it would be best if the kubectl image were multi-arch and ran on ARM too.

VPA Certificate validity duration

Hello,
First of all, thank you for your work!
We're using the VPA chart and it would be very useful to have the ability to set the certificate validity duration.

At the moment certificates are valid for 365 days and it's an issue for long-lived clusters, even though extended validity can be a security concern.
We would love to submit a PR if you're interested in such changes.

Thanks.

{{- $ca := genCA (include "vertical-pod-autoscaler.admissionController.fullname" .) 365 }}
# {{- $cn := printf "%s.%s.svc" (include "vertical-pod-autoscaler.admissionController.fullname" .) .Release.Namespace }}
{{- $cn := printf "%s.%s.svc" "vpa-webhook" .Release.Namespace }}
{{- $cert := genSignedCert $cn nil (list $cn) 365 $ca }}
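A sketch of how the template could be parametrized (the certValidityDays value key is hypothetical, not a current chart value):

{{- $validityDays := .Values.admissionController.certValidityDays | default 365 | int }}
{{- $ca := genCA (include "vertical-pod-autoscaler.admissionController.fullname" .) $validityDays }}
{{- $cn := printf "%s.%s.svc" "vpa-webhook" .Release.Namespace }}
{{- $cert := genSignedCert $cn nil (list $cn) $validityDays $ca }}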

[vertical-pod-autoscaler] CRD job is missing template labels

Hello @sebastien-prudhomme, the new CRD job that you introduced some time back is missing pod template labels, as can be seen here,

so it is difficult to target this job with a network policy.

Could you add them the same way you added them for the deployments in the chart?

{{- include "vertical-pod-autoscaler.admissionController.selectorLabels" . | nindent 8 }}

[VPA] Argocd CRD sync issue

I'm having a problem using the VPA chart with ArgoCD. The verticalpodautoscalers.autoscaling.k8s.io CRD cannot reach Synced status because of the required field. I suspect an empty array is the default value for this field and Kubernetes simply drops it, causing a difference in specs that ArgoCD constantly tries to reconcile. I think this field can easily be removed.


Invalid path in ingress

mongo-express:
  basicAuthUsername: hello
  basicAuthPassword: hello
  mongodbAdminUsername: root
  mongodbAdminPassword: password
  ingress:
    enabled: true
    hosts:
      - name: http://my.domain.com
         paths:
          - path: /

Hi, thanks for this chart :)

I'm trying to set up the ingress. However, it gives me: Invalid value: "map[path:/]": must be an absolute path.
Wondering if you could help me figure out what I'm missing. Thanks.
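For comparison, the shape documented as working in the ingress.hosts issue above (host instead of name, no http:// scheme, paths as plain strings) would be, assuming the chart follows that schema:

mongo-express:
  ingress:
    enabled: true
    hosts:
      - host: my.domain.com
        paths:
          - /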

Add support for customizing containerPort in whoami chart

Hi, I am using the whoami chart on an OpenShift cluster, where an SCC cannot be added to a service account by a normal user. Therefore I cannot use containerPort 80.

It would be convenient if I could use an unprivileged port like 8080 by setting it in values.yaml:

extraArgs:
  port: 2021
containerPort: 8080

VPA CRDs not installed correctly

I'm trying to use your Helm chart to install the VPA with ArgoCD, and the two CRDs don't seem to be installed correctly. ArgoCD wants to prune them, as it thinks they should not exist.

It looks like you're using a Job resource to install them so I'm guessing ArgoCD doesn't see the CRDs as belonging directly to the chart.

If it matters, at the moment I've only enabled the Recommender component just to see what it highlights before I enable the Admission Controller and/or Updater components.

Enable sequelize support in the lighthouse-ci

Firstly, congratulations on this life saver chart.

In the lighthouse-ci configuration docs, they mention that it is possible to configure sequelize to provide additional storage options to the server command, such as an external database host:

storage.sequelizeOptions
Additional raw options object to pass to sequelize. Refer to the sequelize documentation for more information on available settings.

Currently, the server docker image uses npm start as the entry point, which represents this command:

$ lhci server --config=./lighthouserc.json

And then it is possible to customize it by passing an extra --storage.sequelizeOptions, something like:

$ npm start -- --storage.sequelizeOptions

We could add the sequelize options to the chart values, and then configure them as args at container initialization.

What do you think?
