zitadel / zitadel-charts
This repository contains Helm charts for running ZITADEL in Kubernetes.
Home Page: https://zitadel.com
When installing ZITADEL from the Helm chart with the following configuration:
ingress:
  enabled: true
  className: "nginx"
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-production"
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
  hosts:
    - host: id.vhome.fr
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: id-domain-fr-tls
      hosts:
        - id.domain.fr
replicaCount: 1
zitadel:
  masterkey: "****"
  dbSslRootCrtSecret: "crdb-ca-secret"
  secretConfig:
    Database:
      cockroach:
        User:
          Password: "*****"
  configmapConfig:
    FirstInstance:
      Org:
        Name: domain
    ExternalDomain: id.domain.fr
    ExternalSecure: true
    WebAuthNName: VDOMAIN-IDENTITY
    TLS:
      Enabled: false
    Machine:
      Identification:
        PrivateIp:
          enabled: false
        Hostname:
          enabled: true
        Webhook:
          enabled: false
When I go in my browser to https://id.domain.fr I get the following error message:
The requested redirect_uri is missing in the client configuration. If you have any questions, you may contact the administrator of the application.
It's unclear, but I guess I have to finish some part of the installation process. What am I missing?
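One detail that stands out in the values above (unless it is just a redaction artifact): the ingress host id.vhome.fr differs from ExternalDomain id.domain.fr. ZITADEL derives the console's allowed redirect URIs from ExternalDomain, so browsing the instance under a host that doesn't match ExternalDomain typically produces exactly this error. A minimal sketch of aligned values, assuming id.domain.fr is the intended domain:
ingress:
  hosts:
    - host: id.domain.fr   # must match zitadel.configmapConfig.ExternalDomain
zitadel:
  configmapConfig:
    ExternalDomain: id.domain.fr
    ExternalSecure: true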
The instructions on charts.zitadel.com are seemingly outdated, as they state:
By default, this chart installs a highly available ZITADEL instance. Also it installs a highly available and secure CockroachDB by default.
The Readme in this repo however states:
In v4, the cockroachdb chart dependency is removed.
Quick remark (just so it isn't forgotten); I can PR it later.
This disregards .Values.image.tag and states a wrong version when the tag is overridden.
name: zitadel
labels:
- helm.sh/chart: zitadel-5.0.0
+ helm.sh/chart: zitadel-7.1.0
...
- app.kubernetes.io/version: "v2.28.1"
+ app.kubernetes.io/version: "v2.35.0" <--
....
- image: "ghcr.io/zitadel/zitadel:v2.30.0"
+ image: "ghcr.io/zitadel/zitadel:v2.40.4" <--
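A possible fix (a sketch, not necessarily how the maintainers will solve it) is to prefer the overridden image tag over the chart's appVersion when rendering the version label:
# sketch for the chart's labels helper:
app.kubernetes.io/version: {{ .Values.image.tag | default .Chart.AppVersion | quote }}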
Use commit messages to determine the next version
CockroachDB's helm chart is being deprecated (date currently unknown), see cockroachdb/helm-charts#230 and the docs here.
It would be nice to see some instructions for PostgreSQL. I can help with adding a subchart for Bitnami's Postgres helm chart if you'd like. We use it over at the community-run Nextcloud helm chart, and it's been pretty smooth sailing so far.
The other issue is that CockroachDB doesn't let you pass in passwords from an existing secret or env var to their init job's provisioning container. See cockroachdb/helm-charts#242, which has not been responded to by anyone from the project in its year of being open. This means you have to pass in a plain-text password if you want to use passwords, which is a non-starter for certain companies and orgs whose security compliance forbids plain-text passwords, even in private repos.
I changed the original title, because it's not yet clear when the helm chart will be deprecated. CockroachDB also has an Operator, which I believe they want you to use instead, but as of now it is only tested on GKE and does not have a helm chart at the time of writing.
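For reference, the subchart wiring could look roughly like this (a sketch following the Nextcloud chart's approach; the version pin is only an example):
# Chart.yaml (sketch)
dependencies:
  - name: postgresql
    version: "12.x.x"   # example pin; pick a maintained release
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled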
As a developer, I want to check against ready-for-review expectations, so that I am sure that my PR is easily reviewable and merged fast.
Acceptance Criteria
It might be a good idea to use a Helm documentation generator to render the values.yaml file into an easy-to-read table in the README.
Two tools could be used to achieve this goal. Both tools offer the possibility to provide a README template file into which the table containing the values is rendered.
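One such generator is helm-docs, which renders the table from values.yaml comments through a template file; a minimal sketch, assuming helm-docs is the tool chosen:
{{/* README.md.gotmpl (sketch for helm-docs) */}}
{{ template "chart.header" . }}
{{ template "chart.description" . }}
{{ template "chart.valuesSection" . }}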
I tried to take a look at Zitadel, which looks promising, but the repo's hostname doesn't resolve.
Relates to #29
As users, we wanted to set up ZITADEL with only one service account, because we want to provision all data with Terraform alone. However, this does not seem to work: the setup pod says it is waiting for the container to be terminated, which unfortunately never happens, until Helm or Flux runs into a timeout.
If we take out the part of the values.yaml that says only the service account should be created, the installation works right away.
The MachineKeyPath was not configured, because it was removed in the current version. Unfortunately, it is still in the docs: https://zitadel.com/docs/self-hosting/deploy/kubernetes#setup-zitadel-and-a-service-account-admin.
We use the Helm chart in version 5.0.0 and CockroachDB 11.0.3.
For now, we are setting up ZITADEL with a human user and creating a service account at the beginning, so we can use the instance with Terraform as much as possible.
When using the given Helm values, helm template gives an error with the newest version of the chart:
$ helm template my-zitadel zitadel/zitadel \
--set zitadel.masterkey="MasterkeyNeedsToHave32Characters" \
--set zitadel.configmapConfig.ExternalSecure=false \
--set zitadel.configmapConfig.TLS.Enabled=false \
--set zitadel.secretConfig.Database.cockroach.User.Password="a-zitadel-db-user-password" \
--set replicaCount=1 \
--set cockroachdb.single-node=true \
--set cockroachdb.statefulset.replicas=1
Error: template: zitadel/templates/setupjob.yaml:82:74: executing "zitadel/templates/setupjob.yaml" at <.Values.zitadel.configmapConfig.FirstInstance.Org>: nil pointer evaluating interface {}.Org
Use --debug flag to render out invalid YAML
Interestingly, installing works.
As a developer, I want to have a test that tries to connect to ZITADEL with the created key, so that I can be sure the created key is also usable to connect directly to the API, for example with Terraform.
Acceptance criteria
Hi,
the helm chart focuses on CockroachDB deployments, which may be fine for a cloud-based setup.
We want to use ZITADEL on-premise with a bare-metal PostgreSQL cluster. Unfortunately, the chart requires a number of CockroachDB-related secrets, certificates, etc.
Do you have instructions on how to use the chart with PostgreSQL? Is it sufficient to fake the content of the required secrets?
To be able to confidently and automatically update the charts, we could create a stable release channel.
Please give this a thumbs-up 👍 to vote for it.
It's good practice to specify resource requests and limits for all workloads. As such, we are using an automated policy check (Datree) to ensure workloads have configured resources.
It is already possible to set resources for the setup job (see charts/zitadel/values.yaml, line 150 at 2e2f920).
Unfortunately, this applies only to container 0 in the setup job template, and not to container 1 (the "machinekey" container). In other words, the following section has no way to define resources (see charts/zitadel/templates/setupjob.yaml, lines 93 to 107 at 2e2f920).
This leads to Datree reporting the following 2 errors:
❌ Ensure each container has a configured memory limit [1 occurrence]
- metadata.name: release-name-zitadel-setup (kind: Job)
> key: spec.template.spec.containers.1 (line: 565:11)
💡 Missing property object `limits.memory` - value should be within the accepted boundaries recommended by the organization
❌ Ensure each container has a configured memory request [1 occurrence]
- metadata.name: release-name-zitadel-setup (kind: Job)
> key: spec.template.spec.containers.1 (line: 565:11)
💡 Missing property object `requests.memory` - value should be within the accepted boundaries recommended by the organization
I think it would be acceptable to re-use the setupJob.resources for the machinekey container. Even better would be the ability to specify its resources separately (e.g., setupJob.machinekeyWriter.resources), with a fallback to setupJob.resources.
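In values.yaml terms, the proposal could look like this (a sketch; setupJob.machinekeyWriter.resources is the key suggested above, not an existing option, and the numbers are placeholders):
setupJob:
  resources:
    requests:
      memory: 256Mi
    limits:
      memory: 512Mi
  machinekeyWriter:
    resources:   # proposed key for the machinekey container, falling back to setupJob.resources
      requests:
        memory: 64Mi
      limits:
        memory: 128Mi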
When installing the chart without defining FirstInstance, the template fails with:
Error: INSTALLATION FAILED: template: zitadel/templates/setupjob.yaml:82:74: executing "zitadel/templates/setupjob.yaml" at <.Values.zitadel.configmapConfig.FirstInstance.Org>: nil pointer evaluating interface {}.Org
Hi,
now the following error appears; what other configs are required?
Error: cannot start client for projection: ID=DATAB-0pIWD Message=Errors.Database.Connection.Failed Parent=(cannot parse host=xxxxxx.xxxxxx.svc.cluster.local port=5432 user= application_name=zitadel sslmode= dbname= sslrootcert=
: failed to configure TLS (sslmode is invalid))
Originally posted by @siddharthgaur2590 in #71 (comment)
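The empty user=, sslmode= and dbname= fields in the parsed connection string suggest the PostgreSQL user section of the config is missing. A sketch of the relevant values, following the structure from zitadel's defaults.yaml (host and names are placeholders):
zitadel:
  configmapConfig:
    Database:
      postgres:
        Host: xxxxxx.xxxxxx.svc.cluster.local
        Port: 5432
        Database: zitadel
        User:
          Username: zitadel
          SSL:
            Mode: disable   # must be a valid value (disable/require/verify-ca/verify-full), not empty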
I want to install ZITADEL in a fresh Kubernetes cluster, following the documentation:
https://zitadel.com/docs/self-hosting/deploy/kubernetes
Installing CRDB works, but the ZITADEL helm command throws an error:
Error: INSTALLATION FAILED: template: zitadel/templates/setupjob.yaml:82:74: executing "zitadel/templates/setupjob.yaml" at <.Values.zitadel.configmapConfig.FirstInstance.Org>: nil pointer evaluating interface {}.Org
CRDB:
helm upgrade --install crdb cockroachdb/cockroachdb \
  --set fullnameOverride=crdb \
  --set single-node=true \
  --set statefulset.replicas=1 \
  --set tls.enabled=true
Zitadel:
helm install my-zitadel zitadel/zitadel \
  --set zitadel.masterkey="MasterkeyNeedsToHave32Characters" \
  --set zitadel.configmapConfig.ExternalSecure=false \
  --set zitadel.configmapConfig.TLS.Enabled=false \
  --set zitadel.secretConfig.Database.cockroach.User.Password="a-zitadel-db-user-password" \
  --set replicaCount=1
WARNING: "kubernetes-charts.storage.googleapis.com" is deprecated for "stable" and will be deleted Nov. 13, 2020.
WARNING: You should switch to "https://charts.helm.sh/stable" via:
WARNING: helm repo add "stable" "https://charts.helm.sh/stable" --force-update
Error: INSTALLATION FAILED: template: zitadel/templates/setupjob.yaml:82:74: executing "zitadel/templates/setupjob.yaml" at <.Values.zitadel.configmapConfig.FirstInstance.Org>: nil pointer evaluating interface {}.Org
I also tested the second option, which worked without placing it in a dedicated namespace. However, I want to be able to log in via the UI, not just with a service account.
Thanks in advance for any help :)
Currently, the Helm chart utilizes {{ .Values.setupJob.machinekeyWriterImage.tag | default ( trimPrefix "v" .Capabilities.KubeVersion.Version ) }} to select the version of the alpine/k8s image when no tag is specified in the values.yaml file.
However, this approach may not work reliably, since alpine/k8s does not release a Docker image for every patch version of Kubernetes. For instance, if you're using Kubernetes 1.27.0, no corresponding alpine/k8s image is available for that particular version.
I'm unsure how this problem could be solved. Setting a static version in the Chart would restrict the Kubernetes version with which the Chart could be used.
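As a workaround, the tag can be pinned explicitly to a version that alpine/k8s actually publishes (the tag below is only an example; check the image's published tags):
setupJob:
  machinekeyWriterImage:
    tag: "1.27.4"   # example; pick a tag that actually exists for alpine/k8s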
Hello.
I have found that deploying ZITADEL is difficult when the database (and its permissions) is managed outside of ZITADEL. Consider the case where database credentials for ZITADEL already exist and we don't want the corresponding init steps to be run by ZITADEL (because we don't want to give ZITADEL admin access to the database). In that case we want ZITADEL to perform only init zitadel, not the general init.
To make this case manageable for users, the init job should run the init tasks as separate sequential steps: init database, init user, init grant, init zitadel. Or probably only two cases make sense: run the full init, or run only init zitadel (see the sketch below).
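Expressed as a hypothetical values knob (the initJob.command key below is a suggestion for illustration, not an existing chart option):
initJob:
  # hypothetical: run only ZITADEL's internal setup,
  # skipping "init database", "init user" and "init grant"
  command: "zitadel"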
If you enable TLS and leave the readiness, liveness and startup probes enabled (they are by default), then the zitadel pod(s) will not become healthy, because the scheme on the probes defaults to HTTP.
To fix this, either check for TLS.Enabled: true and set the scheme to HTTPS, or allow the scheme to be configured in the values (see the sketch below).
I can create a PR to address this.
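A sketch of the first option in the deployment template (the probe path and port name are illustrative):
livenessProbe:
  httpGet:
    path: /debug/healthz   # illustrative path
    port: http             # illustrative port name
    scheme: {{ if .Values.zitadel.configmapConfig.TLS.Enabled }}HTTPS{{ else }}HTTP{{ end }}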
As a user, I want to have an example for my use case, so that I can create a fast proof of concept.
Acceptance Criteria
Actually, probably not all the configurations listed in the AC are working already, and it's possible that we have to introduce breaking changes and release a new major version.
Related issues:
Hi Team, cc: @mheers , @fforootd, @ansarhun @mffap
I'm self-hosting ZITADEL on a K8s cluster using plain YAMLs (deployment, pv, pvc, service, ingress, etc.), but it gives an error when I connect it to the self-hosted Postgres DB in the same cluster.
Error: cannot start client for projection: ID=DATAB-0pIWD Message=Errors.Database.Connection.Failed Parent=(failed to connect to host=localhost user=zitadel database=zitadel
: dial error (dial tcp [::1]:26257: connect: connection refused))
It seems to be due to some issue in connecting to Postgres; it's trying to connect to the default CockroachDB, which I don't want.
Please help with a resolution as soon as possible.
So far I've tried the options below:
Using a ConfigMap and referencing it from the deployment.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: zitadel-config
  namespace: default
data:
  ZITADEL_DB_TYPE: "postgresql"
  ZITADEL_DB_HOST: "lkjfkjsfkljskldfjs"
  ZITADEL_DB_PORT: "9897"
  ZITADEL_DB_NAME: "ffwefewfwef"
  ZITADEL_DB_USERNAME: "dsfdsf"
  ZITADEL_DB_PASSWORD: "r32re3"
  ZITADEL_DB_SSL_MODE: "disable"
Using an env variable in the deployment.yaml:
env:
  - name: ZITADEL_DB_CONNECTION_URI
    value: postgresql://:@hostname:/
Also, another thing: I've used args like the below in deployment.yaml; is anything else required to make the ZITADEL instance up and running?
args:
  - start
  - "--masterkey"
  - "custommasterkey"
At https://github.com/zitadel/zitadel-charts/blob/main/charts/zitadel/templates/setupjob.yaml#L90 it tries to pull alpine/k8s with the kubernetesVersion.
Unfortunately, on rke2 (and possibly k3s etc. too?) it tries to fetch alpine/k8s:1.24.8+rke2r1 instead of alpine/k8s:1.24.8.
My workaround: in charts/zitadel/templates/setupjob.yaml, replace alpine/k8s:{{ include "zitadel.kubernetesVersion" . }} with alpine/k8s:1.24.8 (where 1.24.8 is your k8s version), then install with helm install -f values.yaml my-zitadel ./zitadel-charts/charts/zitadel
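A more general fix could strip the build-metadata suffix in the helper itself, for example (a sketch using Sprig's regexFind):
{{- define "zitadel.kubernetesVersion" -}}
{{- /* keep only major.minor.patch, dropping suffixes like +rke2r1 or +k3s1 */ -}}
{{- regexFind "^v?[0-9]+\\.[0-9]+\\.[0-9]+" .Capabilities.KubeVersion.Version | trimPrefix "v" -}}
{{- end -}}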
We have deployed zitadel with cockroachdb and fluxcd and did an update today. I don't know exactly if all the error messages were there before. But since then I have noticed that
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.org.idp.oauth.added\" not found in language \"en\"" id=EventTypes.org.idp.oauth.added
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.org.idp.oauth.changed\" not found in language \"en\"" id=EventTypes.org.idp.oauth.changed
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.org.idp.oidc.added\" not found in language \"en\"" id=EventTypes.org.idp.oidc.added
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.org.idp.oidc.changed\" not found in language \"en\"" id=EventTypes.org.idp.oidc.changed
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.org.idp.removed\" not found in language \"en\"" id=EventTypes.org.idp.removed
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.quota.added\" not found in language \"en\"" id=EventTypes.quota.added
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.quota.notificationdue\" not found in language \"en\"" id=EventTypes.quota.notificationdue
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.quota.notified\" not found in language \"en\"" id=EventTypes.quota.notified
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.quota.removed\" not found in language \"en\"" id=EventTypes.quota.removed
zitadel time="2023-06-13T11:16:50Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"EventTypes.user.human.password.change.sent\" not found in language \"en\"" id=EventTypes.user.human.password.change.sent
zitadel time="2023-06-13T11:17:33Z" level=error msg="trigger failed" caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/handler_projection.go:263" error="ID=QUERY-Dgqd2 Message=Errors.User.NotFound Parent=(sql: no rows in result set)" instanceIDs="[215627884880986413]" projection=projections.noti
fications
zitadel time="2023-06-13T11:18:33Z" level=error msg="trigger failed" caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/handler_projection.go:263" error="ID=QUERY-Dgqd2 Message=Errors.User.NotFound Parent=(sql: no rows in result set)" instanceIDs="[215627884880986413]" projection=projections.noti
fications
zitadel time="2023-06-13T11:19:34Z" level=error msg="trigger failed" caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/handler_projection.go:263" error="ID=QUERY-Dgqd2 Message=Errors.User.NotFound Parent=(sql: no rows in result set)" instanceIDs="[215627884880986413]" projection=projections.noti
fications
zitadel time="2023-06-13T11:19:45Z" level=warning msg="sequences do not match" aggregateType=user caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/crdb/handler_stmt.go:235" currentSeq=127 prevSeq=791 projection=projections.notifications sequence=792
zitadel time="2023-06-13T11:19:46Z" level=warning msg="sequences do not match" aggregateType=user caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/crdb/handler_stmt.go:235" currentSeq=127 prevSeq=791 projection=projections.notifications sequence=792
zitadel time="2023-06-13T11:19:47Z" level=warning msg="sequences do not match" aggregateType=user caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/crdb/handler_stmt.go:235" currentSeq=127 prevSeq=791 projection=projections.notifications sequence=792
zitadel time="2023-06-13T11:19:48Z" level=warning msg="sequences do not match" aggregateType=user caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/crdb/handler_stmt.go:235" currentSeq=127 prevSeq=791 projection=projections.notifications sequence=792
zitadel time="2023-06-13T11:19:49Z" level=warning msg="sequences do not match" aggregateType=user caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/crdb/handler_stmt.go:235" currentSeq=127 prevSeq=791 projection=projections.notifications sequence=792
zitadel time="2023-06-13T11:19:50Z" level=warning msg="sequences do not match" aggregateType=user caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/crdb/handler_stmt.go:235" currentSeq=127 prevSeq=791 projection=projections.notifications sequence=792
zitadel time="2023-06-13T11:19:51Z" level=warning msg="unable to process all events from subscription" caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/handler_projection.go:187" error="some statements failed" projection=projections.notifications
zitadel time="2023-06-13T11:20:34Z" level=error msg="trigger failed" caller="/home/runner/work/zitadel/zitadel/internal/eventstore/handler/handler_projection.go:263" error="ID=QUERY-Dgqd2 Message=Errors.User.NotFound Parent=(sql: no rows in result set)" instanceIDs="[215627884880986413]" projection=projections.noti
fications
appears relatively often in the logs.
The interface sometimes doesn't load at all and I have to reload the page several times. Then the error message Http response at 400 or 500 level, http status code: 0 also appears there quite often.
CRDB and Zitadel have 3 replicas each. The cluster also has more than enough resources available.
Do you have any idea what this could be due to or how to fix this behavior?
Thanks a lot in advance.
Expected Behavior
The masterkey value should propagate from my-Service to Zitadel and resolve the error.
Actual Behavior
The ZITADEL chart is not picking up the masterkey value from the dependency and fails on render.
Configuration
my-Service/Chart.yaml
dependencies:
  - name: zitadel
    version: 5.0.0
    repository: https://charts.zitadel.com
my-Service/values.yaml
zitadel:
  enabled: true
  masterkey: <masterkey>
Debug Details
Let me know if any other details are needed to help troubleshoot this further. Looking forward to your insights on resolving this issue. Thanks!
While trying to install ZITADEL with my own values, I noticed that the docs for the configmapConfig differ between the tutorial, the values file in this repo, and the config file in the zitadel repo.
My Values:
replicaCount: 1
zitadel:
  masterkey: "cfMe9lPkty3qFWvX49XgHocLzmStGtyE"
  configmapConfig:
    ExternalPort: 443
    ExternalDomain: id.hobyte.de
    ExternalSecure: true
    TLS:
      Enabled: false
    #FirstInstance:
    DefaultInstance:
      DefaultLanguage: de
      Org:
        Name: cloud
        Human:
          UserName: zitadel-admin
          FirstName: ZITADEL
          LastName: Admin
          Email:
            Address: [email protected]
            Verified: true
          PreferredLanguage: de
      LoginPolicy:
        AllowRegister: false
  secretConfig:
    Database:
      cockroach:
        User:
          Password: "a-zitadel-db-user-password"
Two things confuse me:
In the tutorial, values for the secretConfig are set, but not in the default values, and in the zitadel config file they don't exist. Can they be removed, or are these values still needed?
As described in #66, when using the command and the values from the tutorial, installing is possible, but in any other case it fails with this error: Error: template: zitadel/templates/setupjob.yaml:82:74: executing "zitadel/templates/setupjob.yaml" at <.Values.zitadel.configmapConfig.FirstInstance.Org>: nil pointer evaluating interface {}.Org
Is it DefaultInstance or FirstInstance?
Currently the values.yaml contains the following (charts/zitadel/values.yaml, lines 24 to 29 at a8abf2d).
But if you uncomment that line, you get an error, which is triggered by the setupjob.yaml template here (charts/zitadel/templates/setupjob.yaml, lines 1 to 3 at a8abf2d).
It's not specified in your web docs here:
https://zitadel.com/docs/self-hosting/deploy/kubernetes#setup-zitadel-and-a-service-account-admin
If that's removed, we'll still have the zitadel-admin-sa secret, correct? So perhaps it needs to be removed from the values.yaml to reduce confusion? I can submit a PR to remove that bit from the values.yaml if you'd like.
Thanks!
As an admin, I want to have a guarantee that, for all examples, all API protocols work as expected, so I can confidently set up my environment using the examples.
Acceptance Criteria
Only untested cases are listed.
As an admin installing zitadel in my own infrastructure, I want to disable the test-connection pod, as it always fails.
In my environment, installing zitadel takes some minutes, so the test-connection pod always fails. Because the deployment has both a readinessProbe and a startupProbe, this pod is redundant for me (see the sketch below).
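A hypothetical values toggle for this (the testConnection key is a suggestion for illustration; the chart may not expose it):
testConnection:
  enabled: false   # hypothetical key: skip rendering the helm-test pod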
The chart configures CockroachDB by default. Because the cockroach config is built into the chart, the app will always choose cockroach; there's no way that I can see to unconfigure cockroach so that it will pick postgres.
To have more control over how pods are spread across failure domains, we need support for https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/ in the helm chart.
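A sketch of what the chart could pass through to the deployment's pod spec (the constraint itself is standard Kubernetes; the label selector assumes the chart's usual labels):
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: zitadel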
I've installed the chart as described in the guide but it is stuck at PodInitializing:
❯ k get -n zitadel all
NAME READY STATUS RESTARTS AGE
pod/crdb-0 1/1 Running 0 2m35s
pod/zitadel-init-nrfnq 0/1 Init:0/1 0 116s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/crdb-public ClusterIP 10.43.27.85 <none> 26257/TCP,8080/TCP 2m35s
service/crdb ClusterIP None <none> 26257/TCP,8080/TCP 2m35s
NAME READY AGE
statefulset.apps/crdb 1/1 2m35s
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob.batch/crdb-rotate-self-signer-client 0 0 */26 * * False 0 <none> 2m35s
NAME COMPLETIONS DURATION AGE
job.batch/zitadel-init 0/1 116s 116s
❯ k logs -n zitadel pod/zitadel-init-nrfnq
Defaulted container "zitadel-init" out of: zitadel-init, chown (init)
Error from server (BadRequest): container "zitadel-init" in pod "zitadel-init-nrfnq" is waiting to start: PodInitializing
I had a successful test with the docker-compose setup, but now with the K8s charts I have no idea what is going wrong.
As a developer, I don't want Helm to throw nil pointers, even when I use an old Helm version.
Details:
#53 (comment)
Container chown outputs: chown: /chowned-secrets/*: No such file or directory
zitadel:
  # https://github.com/zitadel/zitadel/blob/main/cmd/defaults.yaml
  configmapConfig:
    ExternalPort: 443
    ExternalDomain: ${local.domain}
    ExternalSecure: true
    TLS:
      Enabled: false
    Database:
      postgres:
        Host: postgres-postgresql
        Port: 5432
        Database: zitadel
        User:
          Username: zitadel
          Password: "${random_password.zitadelPassword.result}"
          SSL:
            Mode: disable
        Admin:
          Username: postgres
          Password: "${random_password.postgresPassword.result}"
          SSL:
            Mode: disable
    Machine:
      Identification:
        Hostname:
          Enabled: true
        Webhook:
          Enabled: false
    Metrics:
      Type: none
  secretConfig: {}
  masterkey: "${random_password.masterkey.result}"
  dbSslRootCrtSecret: null
  dbSslClientCrtSecret: null
In some scenarios, customers want to configure zitadel to send outbound traffic through a proxy server.
We already support defining HTTP_PROXY settings, but there is no way to mount a CA file.
The conditionals at https://github.com/zitadel/zitadel-charts/blob/main/charts/zitadel/templates/initjob.yaml#L136 and https://github.com/zitadel/zitadel-charts/blob/main/charts/zitadel/templates/initjob.yaml#L141, combined with the current default values (https://github.com/zitadel/zitadel-charts/blob/main/charts/zitadel/values.yaml#L60-L63), result in the init job always trying to mount those secrets.
This makes the application impossible to install (see the guard sketch below).
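A sketch of a guard that mounts the secret only when it is actually set (the volume name and structure are illustrative, not the chart's actual template):
{{- if .Values.zitadel.dbSslRootCrtSecret }}
- name: db-ssl-root-crt   # illustrative volume entry
  secret:
    secretName: {{ .Values.zitadel.dbSslRootCrtSecret }}
{{- end }}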
We upgraded our chart version from 7.1.0 to 7.3.0 this morning, and we get the following error message when attempting to log into the console:
Errors.Org.PolicyNotExisting (Internal)
The logs show these messages:
zitadel-6d8dc4c9f5-8gww2 zitadel time="2023-11-16T12:27:58Z" level=error caller="/home/runner/work/zitadel/zitadel/internal/api/ui/login/renderer.go:342" error="ID=QUERY-bJEsm Message=Errors.Org.PolicyNotExisting Parent=(sql: no rows in result set)"
zitadel-6d8dc4c9f5-8gww2 zitadel time="2023-11-16T12:27:58Z" level=warning msg="missing translation" args="map[]" caller="/home/runner/work/zitadel/zitadel/internal/i18n/i18n.go:210" error="message \"Errors.Org.PolicyNotExisting\" not found in language \"en\"" id=Errors.Org.PolicyNotExisting
This is a relatively fresh installation (just set up yesterday). I'm not quite sure how much has been set up within ZITADEL, as another engineer has been focusing on that.
Nothing else about the deployment configuration changed. Perhaps this should be an issue in the https://github.com/zitadel/zitadel repo?
As an admin, I want to have examples for ingress configurations, so that I can easily transition from a PoC to production.
Acceptance Criteria
To be discussed
I have installed zitadel using the following steps/configuration in Kubernetes:
kubectl create ns zitadel
helm install crdb cockroachdb/cockroachdb --namespace zitadel --values ./cockroach-values.yaml
fullnameOverride: crdb
statefulset:
  replicas: 2
helm install --namespace zitadel my-zitadel zitadel/zitadel --values ./zitadel-values.yaml
replicaCount: 3
ingress:
  enabled: true
  className: "nginx"
  annotations:
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
  hosts:
    - host: my-domain.org
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: k8s-ingress
      hosts:
        - my-domain.org
zitadel:
  configmapConfig:
    Log:
      Level: 'info'
    ExternalSecure: true
    ExternalDomain: my-domain.org
    ExternalPort: 443
    TLS:
      Enabled: false
    FirstInstance:
      Org:
        Human:
          Username: 'root'
          Password: 'RootPassword1!'
  masterkey: 'MasterkeyNeedsToHave32Characters'
  secretConfig:
    Database:
      cockroach:
        User:
          Username: 'zitadel_user'
          Password: 'a-zitadel-db-user-password'
I have the nginx ingress controller set up and I can log in, but once logged in, when I reset my password I see a message and then it just spins forever; nothing appears.
Not sure what to check.
Describe the bug
A change in the Helm chart's configuration followed by an update does not restart the pod.
To Reproduce
Steps to reproduce the behavior:
change the ZITADEL configuration and run helm upgrade
Expected behavior
the config shall be updated and the pod shall be restarted to reflect the new configuration
Additional context
see solution here
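That solution is presumably the common Helm pattern of annotating the pod template with a checksum of the config, so that a change in the rendered ConfigMap changes the pod spec and triggers a rollout; a sketch:
# deployment template (sketch):
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}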
Can we update the chart to the latest stable version, i.e. v2.27.2, and not stay on 2.23.1 for a long time? I don't think it makes sense to stay on old releases.
The version bump in the workflow doesn't seem to function properly.
I've set the latest stable version in the deployment on k8s, but after that the dashboard and the cluster weren't reachable anymore. I think the chart itself needs some adjustments to work properly with any version higher than 2.23.1.
As an administrator, I want to be informed about the difference and impact of DefaultInstance vs FirstInstance.
I have now done approx. 1h of research and another 1h of trying without succeeding. What is the difference between DefaultInstance and FirstInstance? 😭
The start of the problem is this section of the values, or here respectively: https://zitadel.com/docs/self-hosting/manage/configure#whats-next.
DefaultInstance:
  InstanceName:
  DefaultLanguage: en
  Org:
    Name:
    Human:
      ...
      UserName: zitadel-admin-1
      ...
What does that do? I could not make the login work on a fresh installation. I expect that the DefaultInstance is also used for the system's own first instance if no FirstInstance is given.
FirstInstance:
  InstanceName:
  DefaultLanguage: en
  Org:
    Name:
    Human:
      ...
      UserName: zitadel-adm1n
      ...
What is now done? Who wins? What is the impact?
Another thing that I observe is that e.g. DefaultInstance.InstanceName does not even work 😢
I see I am not the only one:
Could you either point me to the right docs (and I'll order the glasses) or provide them here or as docs? ❤️
Thank you
Discord username: m4mbax
The question could also be asked in the official zitadel repo; please feel free to suggest a move or move it.
For reference my current chart looks like this (I removed all user customisations for now as it was so confusing).
zitadel:
  # The chart: https://github.com/zitadel/zitadel-charts/blob/main/charts/zitadel/values.yaml
  masterkey: {{ .Values.zitadel.mainKey | fetchSecretValue | quote }}
  configmapConfig:
    # All values: https://github.com/zitadel/zitadel/blob/main/cmd/defaults.yaml
    ExternalDomain: {{ .Values.zitadel.hostName }} # ! Changing this breaks the system
    ExternalPort: 443 # ! Changing this breaks the system
    ExternalSecure: true # ! Changing this breaks the system
    LogStore:
      Access:
        Stdout:
          Enabled: true
    TLS:
      Enabled: false # Application Gateway from Azure does this
    DefaultInstance:
      InstanceName: {{ .Values.zitadel.defaultInstanceName }}
  secretConfig:
    Database:
      cockroach:
        User:
          Password: {{ .Values.zitadel.password | fetchSecretValue | quote }}
Example values.yaml
zitadel:
  configmapConfig:
    Log:
      Level: 'debug'
      Formatter:
        Format: json
    ExternalSecure: false
    ExternalDomain: localhost
    FirstInstance:
      Org:
        Human:
          Username: 'root'
          Password: 'RootPassword1!'
    TLS:
      Enabled: false
  masterkey: '.....'
  secretConfig:
    Database:
      cockroach:
        User:
          Username: 'zitadel_user'
          Password: '....'
        Admin:
          Username: 'root'
          Password: '....'
Error
Values.zitadel.secretConfig.Database.User.Password is mandatory for tls enabled cockroach
When I try to install the chart as described on https://zitadel.com/docs/guides/deploy/kubernetes I get a nil pointer error:
helm install my-zitadel zitadel/zitadel \
--set zitadel.masterkey="my-sophisticated-password" \
--set zitadel.configmapConfig.ExternalSecure=false \
--set zitadel.configmapConfig.TLS.Enabled=false \
--set zitadel.secretConfig.Database.cockroach.User.Password="my-other-sophisticated-password" \
--set replicaCount=1 \
--set cockroachdb.single-node=true \
--set cockroachdb.statefulset.replicas=1
Error: INSTALLATION FAILED: template: zitadel/templates/setupjob.yaml:82:74: executing "zitadel/templates/setupjob.yaml" at <.Values.zitadel.configmapConfig.FirstInstance.Org>: nil pointer evaluating interface {}.Org
I have not dug into the logic here, but it seems you have to apply a check on those values or use a nested if construction.
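One way such a guard could look in setupjob.yaml (a sketch; nested with blocks skip silently when the value is unset):
{{- with .Values.zitadel.configmapConfig.FirstInstance }}
{{- with .Org }}
{{- /* render the Org-dependent parts here */ -}}
{{- end }}
{{- end }}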
Thanks for continuing to work on zitadel!
I use a lot of other apps that use PostgreSQL, and zitadel is the only one that requires database admin access, which seems like a security issue. I generally don't give applications admin access on clusters, as it opens another hole in my infra. What do you need the admin access for? If it's setting permissions on specific tables, this should be something we can set up ourselves ahead of time.
Describe the bug:
If no configuration is provided via secretConfig, the secret gets created anyway, but it contains null as a value.
Expected behaviour:
The secret should not be created.
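A sketch of how the secret template could guard against this (the name and structure are illustrative, not the chart's actual template):
{{- if .Values.zitadel.secretConfig }}
apiVersion: v1
kind: Secret
metadata:
  name: zitadel-secret-config   # illustrative name
stringData:
  config.yaml: |
    {{- .Values.zitadel.secretConfig | toYaml | nindent 4 }}
{{- end }}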