
zitadel-charts's Introduction


ZITADEL

A Better Identity and Access Management Solution

Identity infrastructure, simplified for you.

Learn more about ZITADEL by checking out the source repository on GitHub

What's in the Chart

By default, this chart installs a highly available ZITADEL deployment.

Install the Chart

Either follow the guide for deploying ZITADEL on Kubernetes or follow one of the example guides.
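For a quick start, adding the chart repository and installing with a handful of overrides looks roughly like this (a minimal sketch; the masterkey must be exactly 32 characters, the domain is a placeholder, and database configuration is omitted here, so see the linked guides for complete setups):

helm repo add zitadel https://charts.zitadel.com
helm repo update
helm install my-zitadel zitadel/zitadel \
  --set zitadel.masterkey="MasterkeyNeedsToHave32Characters" \
  --set zitadel.configmapConfig.ExternalDomain=zitadel.example.com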

Upgrade from v6

  • You can now define resource requests and limits for the machineKeyWriter separately from the setupJob. If you don't specify them, the machineKeyWriter automatically inherits the values used by the setupJob.

  • To keep the structure of the values.yaml file consistent, some properties have been renamed. If you use any of the following properties, review the new names and adjust your values accordingly (see the example below):

    Old Value                                    New Value
    setupJob.machinekeyWriterImage.repository    setupJob.machinekeyWriter.image.repository
    setupJob.machinekeyWriterImage.tag           setupJob.machinekeyWriter.image.tag
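In values.yaml terms, the renamed properties now sit under a nested machinekeyWriter block, for example (a sketch; the repository and tag values are placeholders):

setupJob:
  machinekeyWriter:
    image:
      repository: alpine/k8s
      tag: "1.27.3"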

Upgrade from v5

  • CockroachDB is not in the default configuration anymore. If you use CockroachDB, please check the host and ssl mode in your ZITADEL Database configuration section.

  • The properties for database certificates have been renamed and the defaults removed. If you use one of the following properties, please check the new names and set the values accordingly (see the example below):

    Old Value                       New Value
    zitadel.dbSslRootCrt            zitadel.dbSslCaCrt
    zitadel.dbSslRootCrtSecret      zitadel.dbSslCaCrtSecret
    zitadel.dbSslClientCrtSecret    zitadel.dbSslAdminCrtSecret
    -                               zitadel.dbSslUserCrtSecret (new)
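Expressed as values, the new property names look like this (a sketch; the secret names are placeholders for your own Kubernetes secrets):

zitadel:
  dbSslCaCrtSecret: my-db-ca-cert
  dbSslAdminCrtSecret: my-db-admin-cert
  dbSslUserCrtSecret: my-db-user-cert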

Uninstalling the Chart

The ZITADEL chart uses Helm hooks, which are not yet garbage collected by helm uninstall. Therefore, to also remove the hooks installed by the ZITADEL Helm chart, delete them manually:

helm uninstall my-zitadel
for k8sresourcetype in job configmap secret rolebinding role serviceaccount; do
    kubectl delete $k8sresourcetype --selector app.kubernetes.io/name=zitadel,app.kubernetes.io/managed-by=Helm
done

Troubleshooting

Debug Pod

For troubleshooting, you can deploy a debug pod by setting the zitadel.debug.enabled property to true. You can then use this pod to inspect the ZITADEL configuration and run zitadel commands using the zitadel binary. For more information, print the debug pod's logs with a command like the following:

kubectl logs rs/my-zitadel-debug
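If the debug pod is not enabled yet, a sketch for turning it on in an existing release (assuming the release name my-zitadel) is:

helm upgrade my-zitadel zitadel/zitadel --reuse-values --set zitadel.debug.enabled=true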

migration already started, will check again in 5 seconds

If you see this error message in the logs of the setup job, you need to reset the last migration step once you have resolved the issue. To do so, start a debug pod and run something like the following command:

kubectl exec -it my-zitadel-debug -- zitadel setup cleanup --config /config/zitadel-config-yaml

Contributing

Lint the chart:

docker run -it --network host --workdir=/data --rm --volume $(pwd):/data quay.io/helmpack/chart-testing:v3.5.0 ct lint --charts charts/zitadel --target-branch main

Test the chart:

# Create a local Kubernetes cluster
kind create cluster --image kindest/node:v1.27.2

# Test the chart
go test ./...

Watch the Kubernetes pods if you want to see progress.

kubectl get pods --all-namespaces --watch

# Or if you have the watch binary installed
watch -n .1 "kubectl get pods --all-namespaces"


zitadel-charts's Issues

No real way to use Postgres with this chart

The chart configures Cockroach by default. By having cockroach config in the chart, the app will always choose cockroach. There's no way that I can see to unconfigure cockroach so it will pick postgres.
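For reference, a Postgres-oriented configuration would follow ZITADEL's Database config structure, roughly like this (a sketch with placeholder host and credentials; whether the chart version discussed here actually picks it up instead of cockroach is exactly what this issue is about):

zitadel:
  configmapConfig:
    Database:
      postgres:
        Host: my-postgres
        Port: 5432
        Database: zitadel
        User:
          Username: zitadel
          SSL:
            Mode: disable
        Admin:
          Username: postgres
          SSL:
            Mode: disable
  secretConfig:
    Database:
      postgres:
        User:
          Password: a-db-user-password
        Admin:
          Password: a-db-admin-password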

Make charts compatible with older Helm versions

As a developer, I don't want Helm to throw nil pointers, even when I use an old Helm version.

Acceptance Criteria

Details:
#53 (comment)

Helm install error when following the documentation

I want to install Zitadel in a fresh Kubernetes cluster and am following the documentation:

https://zitadel.com/docs/self-hosting/deploy/kubernetes

Installing CRDB works, but the Zitadel helm command throws an error:

Error: INSTALLATION FAILED: template: zitadel/templates/setupjob.yaml:82:74: executing "zitadel/templates/setupjob.yaml" at <.Values.zitadel.configmapConfig.FirstInstance.Org>: nil pointer evaluating interface {}.Org

CRDB:
helm upgrade --install crdb cockroachdb/cockroachdb \
  --set fullnameOverride=crdb \
  --set single-node=true \
  --set statefulset.replicas=1 \
  --set tls.enabled=true

Zitadel:

helm install my-zitadel zitadel/zitadel \
  --set zitadel.masterkey="MasterkeyNeedsToHave32Characters" \
  --set zitadel.configmapConfig.ExternalSecure=false \
  --set zitadel.configmapConfig.TLS.Enabled=false \
  --set zitadel.secretConfig.Database.cockroach.User.Password="a-zitadel-db-user-password" \
  --set replicaCount=1

WARNING: "kubernetes-charts.storage.googleapis.com" is deprecated for "stable" and will be deleted Nov. 13, 2020.
WARNING: You should switch to "https://charts.helm.sh/stable" via:
WARNING: helm repo add "stable" "https://charts.helm.sh/stable" --force-update
Error: INSTALLATION FAILED: template: zitadel/templates/setupjob.yaml:82:74: executing "zitadel/templates/setupjob.yaml" at <.Values.zitadel.configmapConfig.FirstInstance.Org>: nil pointer evaluating interface {}.Org

I also tested the second option, which worked without placing it in a dedicated namespace. However, I want to be able to log in via the UI, not just with a service account.

Thanks in advance for any help :)

Unable to use helm chart with postgresql database

Hi,

The helm chart focuses on cockroachdb deployments, which may be fine for a cloud-based setup.

We want to use zitadel on-premise with a bare-metal postgresql cluster. Unfortunately, the chart requires a number of cockroachdb-related secrets, certificates, etc.

Do you have instructions on how to use the chart with postgresql? Is it sufficient to fake the content of the required secrets?

TLS error showing while connecting to the postgres

Hi,

now the following error comes; what other configs are required?

Error: cannot start client for projection: ID=DATAB-0pIWD Message=Errors.Database.Connection.Failed Parent=(cannot parse host=xxxxxx.xxxxxx.svc.cluster.local port=5432 user= application_name=zitadel sslmode= dbname= sslrootcert=: failed to configure TLS (sslmode is invalid))

Originally posted by @siddharthgaur2590 in #71 (comment)

Version v2.1.1 can't parse the values

Example values.yaml

zitadel:
  configmapConfig:
    Log:
      Level: 'debug'
      Formatter:  
        Format: json
    ExternalSecure: false
    ExternalDomain: localhost
    FirstInstance:
      Org:
        Human:
          Username: 'root'
          Password: 'RootPassword1!'
    TLS:
      Enabled: false
  masterkey: '.....'
  secretConfig:
    Database:
      cockroach:
        User:
          Username: 'zitadel_user'
          Password: '....'      
        Admin:
          Username: 'root'
          Password: '....'

Error
Values.zitadel.secretConfig.Database.User.Password is mandatory for tls enabled cockroach

Issue with setup and a Service Account Admin

Preflight Checklist

  • I could not find a solution in the existing issues, docs, nor discussions
  • I have joined the ZITADEL chat

Describe the docs you are missing or that are wrong

As a user, we wanted to set up Zitadel with only one service account, because we want to provision all data with Terraform. However, this does not seem to work: the setup pod says that it is waiting for the container to be terminated, but unfortunately this never happens until Helm or Flux runs into a timeout.

If we take out the part of the values.yaml that says only the service account should be created, the installation works right away.

The MachineKeyPath was not configured, because it was removed in the current version. Unfortunately, this is still in the docs: https://zitadel.com/docs/self-hosting/deploy/kubernetes#setup-zitadel-and-a-service-account-admin.

We use the Helm chart in version 5.0.0 and CockroachDB 11.0.3.

We are now setting up Zitadel with a human user and creating a service account at the beginning so we can use the instance as much as possible with Terraform.

Additional Context

No response

v4 incompatible with rke2

The issue

At https://github.com/zitadel/zitadel-charts/blob/main/charts/zitadel/templates/setupjob.yaml#L90 it tries to pull alpine/k8s with the kubernetesVersion.

Unfortunately on rke2 (and possibly k3s etc. too?) it tries to fetch alpine/k8s:1.24.8+rke2r1 instead of alpine/k8s:1.24.8.

Temp workaround

  1. Clone https://github.com/zitadel/zitadel-charts
  2. Edit charts/zitadel/templates/setupjob.yaml
  3. Replace alpine/k8s:{{ include "zitadel.kubernetesVersion" . }} with alpine/k8s:1.24.8 where 1.24.8 is your k8s version
  4. helm install -f values.yaml my-zitadel ./zitadel-charts/charts/zitadel

Create a stable release channel

To be able to confidently and automatically update the charts, we could create a stable release channel.

Please give a thumbs up 👍 to vote for this.

Values from dependency chart not being propagated to zitadel

Environment

  • Parent Chart: my-service version 0.1.0
  • Zitadel Chart: version 5.0.0 from the official repo
  • Kubernetes: v1.24

Steps to Reproduce

  • Added Zitadel chart as a dependency in my-Service Chart.yaml
  • Trying to set .zitadel.masterkey in my-Service values.yaml
  • Getting error on template render that masterkey is required

Expected Behavior
The masterkey value should propagate from my-Service to Zitadel and resolve the error.

Actual Behavior
The Zitadel chart is not picking up the masterkey value from dependency and fails on render.

Configuration
my-Service/Chart.yaml

dependencies:
- name: zitadel
  version: 5.0.0
  repository: https://charts.zitadel.com

my-Service/values.yaml

zitadel:
  enabled: true 
  masterkey: <masterkey>

Debug Details

  • Enabled debug logging in Zitadel - no relevant errors observed
  • Tried setting other values like replicaCount but issue persists
  • Works fine if Zitadel is installed directly

Let me know if any other details are needed to help troubleshoot this further. Looking forward to your insights on resolving this issue. Thanks!

Helm Charts: List ready for review checks in PR template

As a developer, I want to check against ready for review expectations, so that I am sure that my PR is easily reviewable and merged fast.

Acceptance Criteria

  • The PR Template lists the following ready for review checks:
    • Acceptance tests ensure that installation works and ZITADEL becomes ready.

The requested redirect_uri is missing in the client configuration

When installing Zitadel from the helm chart with the following configuration:

    ingress:
      enabled: true
      className: "nginx"
      annotations:
        cert-manager.io/cluster-issuer: "letsencrypt-production"
        kubernetes.io/tls-acme: "true"
        ingress.kubernetes.io/ssl-redirect: "true"
      hosts:
        - host: id.vhome.fr
          paths:
            - path: /
              pathType: Prefix
      tls:
        - secretName: id-domain-fr-tls
          hosts:
            - id.domain.fr
    replicaCount: 1
    zitadel:
      masterkey: "****"
      dbSslRootCrtSecret: "crdb-ca-secret"

      secretConfig:
        Database:
          cockroach:
            User:
              Password: "*****"
      configmapConfig:
        FirstInstance:
          Org:
            Name: domain

        ExternalDomain: id.domain.fr
        ExternalSecure: true
        WebAuthNName: VDOMAIN-IDENTITY
        TLS:
          Enabled: false
        Machine:
          Identification:
            PrivateIp:
              enabled: false
            Hostname:
              enabled: true
            Webhook:
              enabled: false

When I go to https://id.domain.fr in my browser, I get the following error message:

The requested redirect_uri is missing in the client configuration. If you have any questions, you may contact the administrator of the application.

It's unclear, but I guess I have to finish some part of the installation process... What am I missing?

Error on `.Values.zitadel.configmapConfig.FirstInstance.MachineKeyPath` (still in values.yaml)

Currently in the values.yaml you have the following:

## If you want to setup ZITADEL with a service account
## instead of a human admin user, uncomment the following
## by deleting each line's first hash and space
# FirstInstance:
# # path used for volume mounts and to write the secret
# MachineKeyPath: /machinekey/zitadel-admin-sa.json

But if you uncomment that line, you get an error which is triggered by the setupjob.yaml template here:

{{- if include "deepCheck" (dict "root" .Values "path" (splitList "." "zitadel.configmapConfig.FirstInstance.MachineKeyPath")) -}}
{{- fail "Specifying .Values.zitadel.configmapConfig.FirstInstance.MachineKeyPath is not supported" -}}
{{- end -}}

It's not specified in your web docs here:
https://zitadel.com/docs/self-hosting/deploy/kubernetes#setup-zitadel-and-a-service-account-admin

If that's removed, we'll still have the zitadel-admin-sa secret, correct? So perhaps it needs to be removed from the values.yaml to reduce confusion? I can submit a PR to remove that bit from the values.yaml if you'd like.

Thanks!

Outdated instructions on charts.zitadel.com

The instructions on charts.zitadel.com are seemingly outdated, as it states:

By default, this chart installs a highly available ZITADEL instance. Also it installs a highly available and secure CockroachDB by default.

The Readme in this repo however states:

In v4, the cockroachdb chart dependency is removed.

Is there any way to use zitadel without giving database admin access?

Preflight Checklist

  • I could not find a solution in the existing issues, docs, nor discussions
  • I have joined the ZITADEL chat

Describe the docs you are missing or that are wrong

Thanks for continuing to work on zitadel!

I use a lot of other apps that use postgresql, and zitadel is the only one that requires database admin access, which seems like a security issue. I generally don't give admin access on clusters to applications, as it opens another hole in my infra. What is it that you need the admin access for? If it's setting permissions on specific tables, this should be something we can set up ourselves ahead of time.

Additional Context

No response

Update the Chart always to the latest stable

Can we update the Chart to the latest stable version, i.e. v2.27.2, and not stay on 2.23.1 for a long time? I don't think it makes sense to stay on old releases.

The version bump in the workflow doesn't seem to function properly.

I've set the latest stable version in the deployment on k8s, but after that the dashboard and the cluster aren't reachable any more. I think the Chart itself needs some adjustments to work properly with any version higher than 2.23.1.
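As a workaround, the deployed ZITADEL version can be overridden via the chart's image.tag value (a sketch, assuming the release name my-zitadel; newer ZITADEL versions may still need chart adjustments, as noted above):

helm upgrade my-zitadel zitadel/zitadel --reuse-values --set image.tag=v2.27.2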

Specifying resources for the setup job machinekey container

Problem

It's good practice to specify resource requests and limits for all workloads. As such, we are using an automated policy check (Datree) to ensure workloads have configured resources.

It is already possible to set resources for the setup job:

Unfortunately, this applies only to container 0 in the setup job template, and not to container 1 (the "machinekey" container). In other words, there is no way to define resources for the following section:

{{- if include "deepCheck" (dict "root" .Values "path" (splitList "." "zitadel.configmapConfig.FirstInstance.Org.Machine")) }}
      - name: "{{ .Chart.Name}}-machinekey"
        securityContext:
          {{- toYaml .Values.securityContext | nindent 14 }}
        image: "{{ .Values.setupJob.machinekeyWriterImage.repository }}:{{ .Values.setupJob.machinekeyWriterImage.tag | default ( printf "%s.%s" .Capabilities.KubeVersion.Major .Capabilities.KubeVersion.Minor ) }}"
        command: [ "sh","-c","until [ ! -z $(kubectl -n {{ .Release.Namespace }} get po ${POD_NAME} -o jsonpath=\"{.status.containerStatuses[?(@.name=='{{ .Chart.Name }}-setup')].state.terminated}\") ]; do echo 'waiting for {{ .Chart.Name }}-setup container to terminate'; sleep 5; done && echo '{{ .Chart.Name }}-setup container terminated' && if [ -f /machinekey/sa.json ]; then kubectl -n {{ .Release.Namespace }} create secret generic {{ .Values.zitadel.configmapConfig.FirstInstance.Org.Machine.Machine.Username }} --from-file={{ .Values.zitadel.configmapConfig.FirstInstance.Org.Machine.Machine.Username }}.json=/machinekey/sa.json; fi;" ]
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
        volumeMounts:
          - name: machinekey
            mountPath: "/machinekey"
{{- end }}

This leads to Datree reporting the following 2 errors:

โŒ  Ensure each container has a configured memory limit  [1 occurrence]
    - metadata.name: release-name-zitadel-setup (kind: Job)
      > key: spec.template.spec.containers.1 (line: 565:11)
๐Ÿ’ก  Missing property object `limits.memory` - value should be within the accepted boundaries recommended by the organization
โŒ  Ensure each container has a configured memory request  [1 occurrence]
    - metadata.name: release-name-zitadel-setup (kind: Job)
      > key: spec.template.spec.containers.1 (line: 565:11)
๐Ÿ’ก  Missing property object `requests.memory` - value should be within the accepted boundaries recommended by the organization

Proposed Solution

I think it would be acceptable to re-use the setupJob.resources for the machinekey container. Even better would be the ability to specify its resources separately (e.g., setupJob.machinekeyWriter.resources), with a fallback to setupJob.resources.
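Expressed as values, the proposal would look roughly like this (a sketch; the machinekeyWriter key follows the proposal and the resource figures are placeholders):

setupJob:
  resources:
    requests:
      memory: 128Mi
    limits:
      memory: 256Mi
  machinekeyWriter:
    resources:
      requests:
        memory: 64Mi
      limits:
        memory: 128Mi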

Create connection test for firstinstance with machinekey

As a developer, I want a test that tries to connect to ZITADEL with the created key, so that I can be sure the created key can also be used to connect directly to the API, for example with terraform.

Acceptance criteria

  • Machine user is created during setup
  • Key from machine user is created
  • Key can be used to call API with the necessary permissions

Add and test examples for common values

As a user, I want to have an example for my use case, so that I can create a fast proof of concept.

Acceptance Criteria

  • An example for ZITADEL with an insecure Cockroach DB is available and automatically tested
  • An example for ZITADEL with a secure Cockroach DB with TLS authentication is available and automatically tested
  • An example for ZITADEL with a secure Cockroach DB with password authentication is available and automatically tested
  • An example for ZITADEL with an insecure Postgres DB is available and automatically tested
  • An example for ZITADEL with a secure Postgres DB with TLS authentication is available and automatically tested
  • An example for ZITADEL with a secure Postgres DB with password authentication is available and automatically tested

Actually, probably not all of the configurations listed in the AC work already, and it's possible that we have to introduce breaking changes and release a new major version.

Related issues:

Installation stuck at PodInitializing

I've installed the chart as described in the guide but it is stuck at PodInitializing:

โฏ k get -n zitadel all
NAME                     READY   STATUS     RESTARTS   AGE
pod/crdb-0               1/1     Running    0          2m35s
pod/zitadel-init-nrfnq   0/1     Init:0/1   0          116s

NAME                  TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)              AGE
service/crdb-public   ClusterIP   10.43.27.85   <none>        26257/TCP,8080/TCP   2m35s
service/crdb          ClusterIP   None          <none>        26257/TCP,8080/TCP   2m35s

NAME                    READY   AGE
statefulset.apps/crdb   1/1     2m35s

NAME                                           SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/crdb-rotate-self-signer-client   0 0 */26 * *   False     0        <none>          2m35s

NAME                     COMPLETIONS   DURATION   AGE
job.batch/zitadel-init   0/1           116s       116s
โฏ k logs -n zitadel pod/zitadel-init-nrfnq
Defaulted container "zitadel-init" out of: zitadel-init, chown (init)
Error from server (BadRequest): container "zitadel-init" in pod "zitadel-init-nrfnq" is waiting to start: PodInitializing

I had a successful test with the docker-compose setup, but now with the K8s charts I have no idea what is going wrong.
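To narrow this kind of problem down, describing the stuck pod and reading the init container's logs usually helps, e.g. (pod name taken from the output above; the init container is named chown according to the log message):

kubectl describe pod -n zitadel zitadel-init-nrfnq
kubectl logs -n zitadel pod/zitadel-init-nrfnq -c chown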

helm template fails with given values

When using the given helm values, helm template gives an error with the newest version of the chart:

$ helm template my-zitadel zitadel/zitadel \
  --set zitadel.masterkey="MasterkeyNeedsToHave32Characters" \
  --set zitadel.configmapConfig.ExternalSecure=false \
  --set zitadel.configmapConfig.TLS.Enabled=false \
  --set zitadel.secretConfig.Database.cockroach.User.Password="a-zitadel-db-user-password" \
  --set replicaCount=1 \
  --set cockroachdb.single-node=true \
  --set cockroachdb.statefulset.replicas=1
Error: template: zitadel/templates/setupjob.yaml:82:74: executing "zitadel/templates/setupjob.yaml" at <.Values.zitadel.configmapConfig.FirstInstance.Org>: nil pointer evaluating interface {}.Org

Use --debug flag to render out invalid YAML

Interestingly, installing works.

Chart fails when not using FirstInstance in values

When installing the chart without defining FirstInstance, the template fails with:

Error: INSTALLATION FAILED: template: zitadel/templates/setupjob.yaml:82:74: executing "zitadel/templates/setupjob.yaml" at <.Values.zitadel.configmapConfig.FirstInstance.Org>: nil pointer evaluating interface {}.Org
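Defining the FirstInstance.Org block explicitly, as the tutorial values do, appears to avoid this nil pointer, for example (a sketch; the credentials are placeholders taken from the example values earlier on this page):

zitadel:
  configmapConfig:
    FirstInstance:
      Org:
        Human:
          Username: 'root'
          Password: 'RootPassword1!'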

`Errors.Org.PolicyNotExisting` after upgrading from chart 7.1.0 to 7.3.0

We upgraded our chart version from 7.1.0 to 7.3.0 this morning, and we get the following error message when attempting to log into the console:

Errors.Org.PolicyNotExisting (Internal)

The logs show these messages:

zitadel-6d8dc4c9f5-8gww2 zitadel time="2023-11-16T12:27:58Z" level=error caller="/home/runner/work/zitadel/zitadel/internal/api/ui/login/renderer.go:342" error="ID=QUERY-bJEsm Message=Errors.Org.PolicyNotExisting Parent=(sql: no rows in result set)"
zitadel-6d8dc4c9f5-8gww2 zitadel time="2023-11-16T12:27:58Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"Errors.Org.PolicyNotExisting\" not found in language \"en\"" id=Errors.Org.PolicyNotExisting

This is a relatively fresh installation (just set up yesterday). I'm not quite sure how much has been set up within Zitadel, as another engineer has been focusing on that.

Nothing else about the deployment configuration changed. Perhaps this should be an issue in the https://github.com/zitadel/zitadel repo?

Image selection for the machineKey container does not work correctly

Currently, the Helm chart utilizes {{ .Values.setupJob.machinekeyWriterImage.tag | default ( trimPrefix "v" .Capabilities.KubeVersion.Version ) }} to select the version of the alpine/k8s image when no tag is specified in the values.yaml file.

However, this approach may not work reliably since alpine/k8s does not release a Docker image for every patch version of Kubernetes. For instance, if you're using Kubernetes 1.27.0, no corresponding alpine/k8s image is available for that particular version.

I'm unsure how this problem could be solved. Setting a static version in the Chart would restrict the Kubernetes version with which the Chart could be used.
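As a workaround, the tag can be pinned explicitly in the values so the Kubernetes version is not used at all (a sketch; machinekeyWriterImage is the key used by the chart version this issue refers to, newer versions use setupJob.machinekeyWriter.image.tag, and 1.27.3 is a placeholder for a tag that actually exists for alpine/k8s):

setupJob:
  machinekeyWriterImage:
    tag: "1.27.3"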

Test unauthenticated gRPC and gRPC-Web calls

As an admin, I want to have a guarantee that, for all examples, all API protocols work as expected, so I can confidently set up my environment using the examples.

Acceptance Criteria

  • unauthenticated gRPC calls are tested on all examples
  • unauthenticated gRPC-Web calls are tested on all examples
  • on examples that use a machine user setup, authenticated gRPC-Web calls are tested

Only untested cases are listed.
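For illustration, an unauthenticated smoke test could be as simple as listing the exposed gRPC services with grpcurl (a sketch; it assumes grpcurl is available, the service is reachable on port 8080 without TLS, and server reflection is enabled):

grpcurl -plaintext my-zitadel:8080 list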

Deployment probes fail when using TLS Enabled

If you enable TLS and leave the readiness, liveness and startup probes enabled (they are by default), then the zitadel pod(s) will not become healthy. This is because the probes' scheme defaults to HTTP.

To fix this, either check for TLS: Enabled: true and set the scheme to HTTPS accordingly, or make the scheme configurable in the values.

I can create a PR to address this.
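For reference, a probe with an HTTPS scheme looks like this in plain Kubernetes terms (a generic sketch, not an existing chart value; the path and port names are illustrative):

readinessProbe:
  httpGet:
    path: /debug/ready
    port: http
    scheme: HTTPS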

Add ingress examples

As an admin, I want to have configuration examples for ingress configurations, so that I can easily transition from a PoC to production.

Acceptance Criteria

  • An example for a custom domain with an ingress controller is available and tested
  • An example for LB-terminated TLS is available and tested
  • An example for ZITADEL-terminated TLS is available and tested

DefaultInstance vs FirstInstance

Preflight Checklist

  • I could not find a solution in the existing issues, docs, nor discussions
  • I have joined the ZITADEL chat

Describe the docs you are missing or that are wrong

As an administrator, I want to be informed about the difference and the impact of DefaultInstance vs FirstInstance.


I have now done approx. 1h of research and another 1h of trying without succeeding. What is the difference between DefaultInstance and FirstInstance? 😭

The start of the problem is this section of the values, or here respectively: https://zitadel.com/docs/self-hosting/manage/configure#whats-next.

DefaultInstance:
  InstanceName:
  DefaultLanguage: en
  Org:
    Name:
    Human:
     ...
      UserName: zitadel-admin-1
    ...

What does that do? I could not make the login work on a fresh installation. I expect that the DefaultInstance is also used for the system's own first instance if no FirstInstance is given.

FirstInstance:
  InstanceName:
  DefaultLanguage: en
  Org:
    Name:
    Human:
     ...
      UserName: zitadel-adm1n
    ...

What happens now? Which one wins? What is the impact?

Another thing that I observe is that e.g. DefaultInstance.InstanceName does not even work 😢

I see I am not the only one:

  • A
  • B
    • here @eliobischof wrote the correct title but did not explain the difference

Could you either point me to the right docs (and I'll order the glasses) or provide them here or as docs? ❤️

Thank you!

Additional Context

Discord username: m4mbax

The question could also be asked in the official zitadel repo, please feel free to suggest a move or move it.

FirstInstance Reference Link

For reference my current chart looks like this (I removed all user customisations for now as it was so confusing).

       zitadel:
          # The chart: https://github.com/zitadel/zitadel-charts/blob/main/charts/zitadel/values.yaml
          masterkey: {{ .Values.zitadel.mainKey | fetchSecretValue | quote }}
          configmapConfig:
            # All values: https://github.com/zitadel/zitadel/blob/main/cmd/defaults.yaml
            ExternalDomain: {{ .Values.zitadel.hostName }} # ! Changing this breaks the system
            ExternalPort: 443 # ! Changing this breaks the system
            ExternalSecure: true # ! Changing this breaks the system
            LogStore:
              Access:
                Stdout:
                  Enabled: true
            TLS:
              Enabled: false # Application Gateway from Azure does this
            DefaultInstance:
              InstanceName: {{ .Values.zitadel.defaultInstanceName }}
          secretConfig:
            Database:
              cockroach:
                User:
                  Password: {{ .Values.zitadel.password | fetchSecretValue | quote }}

Make test-connection pod optional

As an admin installing zitadel in my own infrastructure, I want to disable the test-connection pod, as it always fails.

In my environment, installing zitadel takes some minutes, so the test-connection pod always fails. Because the deployment has both a readinessProbe and a startupProbe, this pod is redundant for me.

Lot of errors in pods

We have deployed zitadel with cockroachdb and fluxcd and did an update today. I don't know exactly whether all the error messages were there before, but since then I have noticed that

zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.org.idp.oauth.added\" not found in language \"en\"" id=EventTypes.org.idp.oauth.added                                             
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.org.idp.oauth.changed\" not found in language \"en\"" id=EventTypes.org.idp.oauth.changed                                         
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.org.idp.oidc.added\" not found in language \"en\"" id=EventTypes.org.idp.oidc.added                                               
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.org.idp.oidc.changed\" not found in language \"en\"" id=EventTypes.org.idp.oidc.changed                                           
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.org.idp.removed\" not found in language \"en\"" id=EventTypes.org.idp.removed                                                     
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.quota.added\" not found in language \"en\"" id=EventTypes.quota.added                                                             
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.quota.notificationdue\" not found in language \"en\"" id=EventTypes.quota.notificationdue                                         
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.quota.notified\" not found in language \"en\"" id=EventTypes.quota.notified                                                       
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.quota.removed\" not found in language \"en\"" id=EventTypes.quota.removed                                                         
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.user.human.password.change.sent\" not found in language \"en\"" id=EventTypes.user.human.password.change.sent                     
zitadel time="2023-06-13T11:17:33Z" level=error msg="trigger failed" caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/handler_projection.go:263" error="ID=QUERY-Dgqd2 Message=Errors.User.NotFound Parent=(sql: no rows in result set)" instanceIDs="[215627884880986413]" projection=projections.notifications
zitadel time="2023-06-13T11:18:33Z" level=error msg="trigger failed" caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/handler_projection.go:263" error="ID=QUERY-Dgqd2 Message=Errors.User.NotFound Parent=(sql: no rows in result set)" instanceIDs="[215627884880986413]" projection=projections.notifications
zitadel time="2023-06-13T11:19:34Z" level=error msg="trigger failed" caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/handler_projection.go:263" error="ID=QUERY-Dgqd2 Message=Errors.User.NotFound Parent=(sql: no rows in result set)" instanceIDs="[215627884880986413]" projection=projections.notifications
zitadel time="2023-06-13T11:19:45Z" level=warning msg="sequences do not match" aggregateType=user caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/crdb/handler_stmt.go:235" currentSeq=127 prevSeq=791 projection=projections.notifications sequence=792
zitadel time="2023-06-13T11:19:46Z" level=warning msg="sequences do not match" aggregateType=user caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/crdb/handler_stmt.go:235" currentSeq=127 prevSeq=791 projection=projections.notifications sequence=792
zitadel time="2023-06-13T11:19:47Z" level=warning msg="sequences do not match" aggregateType=user caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/crdb/handler_stmt.go:235" currentSeq=127 prevSeq=791 projection=projections.notifications sequence=792
zitadel time="2023-06-13T11:19:48Z" level=warning msg="sequences do not match" aggregateType=user caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/crdb/handler_stmt.go:235" currentSeq=127 prevSeq=791 projection=projections.notifications sequence=792
zitadel time="2023-06-13T11:19:49Z" level=warning msg="sequences do not match" aggregateType=user caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/crdb/handler_stmt.go:235" currentSeq=127 prevSeq=791 projection=projections.notifications sequence=792
zitadel time="2023-06-13T11:19:50Z" level=warning msg="sequences do not match" aggregateType=user caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/crdb/handler_stmt.go:235" currentSeq=127 prevSeq=791 projection=projections.notifications sequence=792
zitadel time="2023-06-13T11:19:51Z" level=warning msg="unable to process all events from subscription" caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/handler_projection.go:187" error="some statements failed" projection=projections.notifications
zitadel time="2023-06-13T11:20:34Z" level=error msg="trigger failed" caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/handler_projection.go:263" error="ID=QUERY-Dgqd2 Message=Errors.User.NotFound Parent=(sql: no rows in result set)" instanceIDs="[215627884880986413]" projection=projections.notifications

appears relatively often in the logs.

The interface sometimes doesn't load at all and I have to reload the page several times. Then the error message "Http response at 400 or 500 level, http status code: 0" also appears quite often.

CRDB and Zitadel have 3 replicas each. The cluster also has more than enough resources available.

Do you have any idea what this could be due to or how to fix this behavior?

Thanks a lot in advance.

selectorLabel "app.kubernetes.io/version" disregards .Values.image.tag override

Quick remark (just to not forget it), I can PR it later.

app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}

This disregards .Values.image.tag and states a wrong version when the tag is overridden.

    name: zitadel
    labels:
-     helm.sh/chart: zitadel-5.0.0
+     helm.sh/chart: zitadel-7.1.0
   ...
-     app.kubernetes.io/version: "v2.28.1"
+     app.kubernetes.io/version: "v2.35.0" <--
    ....
-           image: "ghcr.io/zitadel/zitadel:v2.30.0"
+           image: "ghcr.io/zitadel/zitadel:v2.40.4" <--
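One possible fix would be to prefer the overridden tag and only fall back to the chart's appVersion (a sketch of the template change, not the current chart code):

app.kubernetes.io/version: {{ .Values.image.tag | default .Chart.AppVersion | quote }}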

Configurable init job for externally managed databases

Hello.

I have found that deploying zitadel is difficult when the database (including permissions) is managed outside of zitadel. Consider the case where the database credentials for zitadel already exist and we don't want zitadel to run the corresponding init steps (because we don't want to give zitadel admin access to the database). In that case we want zitadel to perform only the zitadel-specific init, not the general init.

To help users manage this case, the init job should run the init tasks sequentially: init database, init user, init grant, init zitadel. Or probably only two cases make sense: run the full init, or run only init zitadel (see the sketch below).
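For illustration, running only the ZITADEL-specific init step from the debug pod might look like this (a sketch; it assumes the init subcommands named above exist in the installed ZITADEL version and reuses the config path from the troubleshooting section of the README):

kubectl exec -it my-zitadel-debug -- zitadel init zitadel --config /config/zitadel-config-yaml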

Zitadel Helm install - nothing displayed once logged in

I have installed zitadel using the following steps/configuration in kubernetes:

  1. Created a namespace called zitadel: kubectl create ns zitadel
  2. Installed cockroach db: helm install crdb cockroachdb/cockroachdb --namespace zitadel --values ./cockroach-values.yaml

     cockroach-values.yaml contents:

     fullnameOverride: crdb
     statefulset:
       replicas: 2

  3. Installed zitadel: helm install --namespace zitadel my-zitadel zitadel/zitadel --values ./zitadel-values.yaml

     zitadel-values.yaml contents:
replicaCount: 3
ingress:
  enabled: true
  className: "nginx"
  annotations:
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
  hosts:
    - host: my-domain.org
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: k8s-ingress
      hosts:
        - my-domain.org
zitadel:
  configmapConfig:
    Log:
      Level: 'info'
    ExternalSecure: true
    ExternalDomain: my-domain.org
    ExternalPort: 443
    TLS:
      Enabled: false
    FirstInstance:
      Org:
        Human:
          Username: 'root'
          Password: 'RootPassword1!'
  masterkey: 'MasterkeyNeedsToHave32Characters'
  secretConfig:
    Database:
      cockroach:
        User:
          Username: 'zitadel_user'
          Password: 'a-zitadel-db-user-password'

I have an nginx ingress controller set up and I can log in, but once logged in, after I reset my password, I see this message:

(screenshot of the message)

Then it just spins forever and nothing appears:

(screenshot)

Not sure what to check.

configMapConfig docs aren't consistent

While trying to install zitadel with my own values, I noticed that the docs for the configMapConfig differ between the tutorial, the values file in this repo, and the config file in the zitadel repo.

My Values:

replicaCount: 1
zitadel: 
  masterkey: "cfMe9lPkty3qFWvX49XgHocLzmStGtyE"
  configmapConfig:
    ExternalPort: 443
    ExternalDomain: id.hobyte.de
    ExternalSecure: true
    TLS:
      Enabled: false
    #FirstInstance:
    DefaultInstance:
      DefaultLanguage: de
      Org:
        Name: cloud
        Human:
          UserName: zitadel-admin
          FirstName: ZITADEL
          LastName: Admin
          Email:
            Address: [email protected]
            Verified: true
          PreferredLanguage: de
      LoginPolicy:
        AllowRegister: false
  secretConfig:
    Database:
      cockroach:
        User:
          Password: "a-zitadel-db-user-password"

Two things that confuse me:

secretConfig

In the tutorial, values for the secretConfig are set, but they appear neither in the default values nor in the zitadel config file. Can they be removed, or are these values still needed?

FirstInstance vs. DefaultInstance

As described in #66, when using the command and the values from the tutorial, installing is possible, but in any other case it fails with this error: Error: template: zitadel/templates/setupjob.yaml:82:74: executing "zitadel/templates/setupjob.yaml" at <.Values.zitadel.configmapConfig.FirstInstance.Org>: nil pointer evaluating interface {}.Org. So is it DefaultInstance or FirstInstance?

Installation as described on the website fails with a nil pointer

When I try to install the chart as described on https://zitadel.com/docs/guides/deploy/kubernetes I get a nil pointer error:

helm install my-zitadel zitadel/zitadel \
  --set zitadel.masterkey="my-sophisticated-password" \
  --set zitadel.configmapConfig.ExternalSecure=false \
  --set zitadel.configmapConfig.TLS.Enabled=false \
  --set zitadel.secretConfig.Database.cockroach.User.Password="my-other-sophisticated-password" \
  --set replicaCount=1 \
  --set cockroachdb.single-node=true \
  --set cockroachdb.statefulset.replicas=1
Error: INSTALLATION FAILED: template: zitadel/templates/setupjob.yaml:82:74: executing "zitadel/templates/setupjob.yaml" at <.Values.zitadel.configmapConfig.FirstInstance.Org>: nil pointer evaluating interface {}.Org

I have not dug into the logic here but it seems to me that you have to apply a check on those values or use a nested if construction.

Helm doc generation

Idea

It might be a good idea to use a helm documentation generator to render the values.yaml file into an easy-to-read table in the README.

Tools

There are 2 tools that could be used to achieve the goal. Both tools provide the possibility to supply a README template file into which the table containing the values is rendered.

Bug: If dbSslRootCrtSecret and/or dbSslClientCrtSecret are defined in the default values, the chart will always try to mount them

Create Documentation for using Postgresql, in addition to (and eventually instead of) Cockroachdb, due to future helm chart deprecation

Cockroachdb's helm chart is being deprecated (date unknown currently), see cockroachdb/helm-charts#230 and the docs here.
The CockroachDB Helm chart is compatible with Kubernetes versions 1.22 and earlier. The CockroachDB Helm chart is currently not under active development, and no new features are planned. However, Cockroach Labs remains committed to fully supporting the Helm chart by addressing defects, providing security patches, and addressing breaking changes due to deprecations in Kubernetes APIs. A deprecation notice for the Helm chart will be provided to customers a minimum of 6 months in advance of actual deprecation. Warning: If you are running a secure Helm deployment on Kubernetes 1.22 and later, you must migrate away from using the Kubernetes CA for cluster authentication. For details, see Certificate management https://www.cockroachlabs.com/docs/v23.1/secure-cockroachdb-kubernetes?filters=helm#migration-to-self-signer

It would be nice to see some instructions for Postgresql. I can help with adding a subchart for Bitnami's Postgres helm chart if you'd like. We use it over at the community run Nextcloud helm chart, and it's been pretty smooth sailing so far.
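For illustration, such a dependency could be declared in the chart's Chart.yaml roughly like this (a sketch; the version constraint is a placeholder and would need to be pinned to a tested release):

dependencies:
  - name: postgresql
    version: ">=12.0.0"
    repository: https://charts.bitnami.com/bitnami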

The other issue is that cockroachdb doesn't let you pass in passwords from an existing secret or env var to their init job's provisioning container. See cockroachdb/helm-charts#242, which has not been responded to by anyone from the project in its year of being open. This means you have to pass in a plain-text password if you want to use passwords, which is a non-starter for certain companies and orgs, because security compliance requires no plain-text passwords, even in private repos.

I changed the original title, because it's not yet clear when the helm chart will be deprecated. Cockroachdb also has an Operator, which I believe they want you to use instead, but as of now it is only tested on GKE and does not have a helm chart at the time of writing.

Cannot fresh install: zitadel init is in crashloopbackoff

Container chown outputs: chown: /chowned-secrets/*: No such file or directory

zitadel:
  # https://github.com/zitadel/zitadel/blob/main/cmd/defaults.yaml
  configmapConfig:
    ExternalPort: 443
    ExternalDomain: ${local.domain}
    ExternalSecure: true
    TLS:
      Enabled: false
    Database:
      postgres:
        Host: postgres-postgresql
        Port: 5432
        Database: zitadel
        User:
          Username: zitadel
          Password: "${random_password.zitadelPassword.result}"
          SSL:
            Mode: disable
        Admin:
          Username: postgres
          Password: "${random_password.postgresPassword.result}"
          SSL:
            Mode: disable
    Machine:
      Identification:
        Hostname:
          Enabled: true
        Webhook:
          Enabled: false
    Metrics:
      Type: none
  secretConfig: {}
  masterkey: "${random_password.masterkey.result}"
  dbSslRootCrtSecret: null
  dbSslClientCrtSecret: null


Zitadel not connecting to the Postgres DB.

Hi Team, cc: @mheers , @fforootd, @ansarhun @mffap

I'm self-hosting Zitadel on a K8s cluster using yamls like deployment, pv, pvc, service, ingress, etc., but it's giving an error when I connect it to the self-hosted postgres DB in the same cluster.

Error: cannot start client for projection: ID=DATAB-0pIWD Message=Errors.Database.Connection.Failed Parent=(failed to connect to host=localhost user=zitadel database=zitadel: dial error (dial tcp [::1]:26257: connect: connection refused))

It seems that, due to some issue in connecting to postgres, it's trying to connect to the default CockroachDB, which I don't want.

Please help to give the resolution asap.

So far I've tried the options below:

  • Using a configmap and referring to it in the deployment.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: zitadel-config
      namespace: default
    data:
      ZITADEL_DB_TYPE: "postgresql"
      ZITADEL_DB_HOST: "lkjfkjsfkljskldfjs"
      ZITADEL_DB_PORT: "9897"
      ZITADEL_DB_NAME: "ffwefewfwef"
      ZITADEL_DB_USERNAME: "dsfdsf"
      ZITADEL_DB_PASSWORD: "r32re3"
      ZITADEL_DB_SSL_MODE: "disable"

  • Using the env variable in the deployment.yaml (see the sketch after this list):

    env:
      - name: ZITADEL_DB_CONNECTION_URI
        value: postgresql://:@hostname:/
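For reference, ZITADEL configuration options can also be set through environment variables that mirror the YAML configuration keys, so a deployment-level override might look roughly like the sketch below (variable names are derived from the Database.postgres keys shown elsewhere on this page and should be verified against the ZITADEL configuration docs; note they differ from the ZITADEL_DB_* names tried above):

env:
  - name: ZITADEL_DATABASE_POSTGRES_HOST
    value: my-postgres-host
  - name: ZITADEL_DATABASE_POSTGRES_PORT
    value: "5432"
  - name: ZITADEL_DATABASE_POSTGRES_DATABASE
    value: zitadel
  - name: ZITADEL_DATABASE_POSTGRES_USER_USERNAME
    value: zitadel
  - name: ZITADEL_DATABASE_POSTGRES_USER_SSL_MODE
    value: disable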

Also, another thing:
I've used args like the ones below in deployment.yaml; is anything else required to get the Zitadel instance up and running?
args:
- start
- "--masterkey"
- "custommasterkey"
