
helm-charts's People

Contributors

allenx2018, anna-geller, daniellavoie, darkedges, dependabot[bot], hegerdes, jeremycopinlm, kaliazur, koorikla, loicmathieu, max-r-clever, maximizerr, pdemagny, skraye, tchiotludo, thierrygirod, wrussell1999


helm-charts's Issues

Expose Metric Service

Feature description

Hi,
I want to export metrics using an OpenTelemetry service in my Kubernetes cluster,
but the kestra-standalone deployment does not expose a Service for the metrics endpoint on :8081/prometheus.
I found a workaround using kubectl expose deployment ... to make that happen, but it is not very clean and is difficult to automate.
Could you implement a service: section in the values.yaml (or something like that) to create a Service that achieves this purpose?
Thank you
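For illustration, the requested values section might look like the following sketch; the `service` key and its sub-fields are hypothetical, not existing chart options:

```yaml
# Hypothetical values.yaml section; the chart does not currently support this.
service:
  metrics:
    enabled: true
    type: ClusterIP
    port: 8081  # Kestra's management port, which serves /prometheus
```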

Kubernetes probes should use /health/ready and /health/live

Feature description

Both the Kubernetes readiness and liveness probes currently use the /health endpoint.

It is good practice in a Kubernetes environment to distinguish between the liveness probe (which defines whether the application is alive, restarting it if not) and the readiness probe (which defines whether the application is ready to accept requests, adding it to the endpoints list of a Service).

So we should change the liveness probe to use /health/live and the readiness probe to use /health/ready.

See https://micronaut-projects.github.io/micronaut-docs-mn2/2.1.4/guide/#healthEndpoint
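The proposed change can be sketched as the following probe configuration, assuming Kestra's Micronaut health endpoints are served on port 8081:

```yaml
# Sketch only: liveness restarts a dead app, readiness gates Service traffic.
livenessProbe:
  httpGet:
    path: /health/live
    port: 8081
readinessProbe:
  httpGet:
    path: /health/ready
    port: 8081
```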

Add a startup probe

Feature description

This can be useful, for example, for the Executor in EE, which can take a long time to start.
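A startup probe along these lines would give a slow-starting Executor time to boot before liveness checks begin; the path, port, and thresholds below are assumptions:

```yaml
startupProbe:
  httpGet:
    path: /health
    port: 8081
  periodSeconds: 10
  failureThreshold: 30  # tolerate up to ~5 minutes of startup time
```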

DIND not working in standard GKE

Expected Behavior

Installing the chart in a standard GKE cluster (Autopilot is out because of DIND, and potentially Elasticsearch, which requires a privileged pod) with basically all default values should work out of the box.

Actual Behaviour

However, the worker pod is stuck in a crash loop.

Screenshot from 2024-01-09 16-54-20

I deactivated DIND for now so that I can test Kestra, but I can reactivate it to provide logs in a better format if needed.

Steps To Reproduce

  • Create a standard GKE cluster with default configuration
  • Install the kestra helm chart with the values below

Environment Information

  • Kestra Version: 0.13.8
  • Helm Charts version: 0.13.0
  • Docker Image version: latest-full

values.yaml

deployments:
  webserver:
    enabled: true
  executor:
    enabled: true
  indexer:
    enabled: false
  scheduler:
    enabled: true
  worker:
    enabled: true
  standalone:
    enabled: false

Kestra standalone startup fails due to issue with DIND on Windows

Expected Behavior

Pod should start up with no issues

Actual Behaviour

Pod fails to start with

Device "ip_tables" does not exist.
modprobe: can't change directory to '/lib/modules': No such file or directory
iptables v1.8.10 (nf_tables)
[WARN  tini (99)] Tini is not running as PID 1 and isn't registered as a child subreaper.
Zombie processes will not be re-parented to Tini, so zombie reaping won't work.
To fix the problem, use the -s option or set the environment variable TINI_SUBREAPER to register Tini as a child subreaper, or run Tini as PID 1.
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to register "bridge" driver: unable to add return rule in DOCKER-ISOLATION-STAGE-1 chain:  (iptables failed: iptables --wait -A DOCKER-ISOLATION-STAGE-1 -j RETURN: iptables v1.8.10 (nf_tables):  RULE_APPEND failed (No such file or directory): rule in chain DOCKER-ISOLATION-STAGE-1
 (exit status 4))
[rootlesskit:child ] error: command [docker-init -- dockerd --host=unix:///dind//docker.sock --host=tcp://0.0.0.0:2376 --tlsverify --tlscacert /certs/server/ca.pem --tlscert /certs/server/cert.pem --tlskey /certs/server/key.pem --log-level=fatal --group=1000] exited: exit status 1
[rootlesskit:parent] error: child exited: exit status 1

Steps To Reproduce

  1. create values.yaml containing

    deployments:
      standalone:
        enabled: true
    
  2. execute using helm install kestra kestra/kestra -n kestra --create-namespace -f .\values.yaml

  3. See

    NAME                                 READY   STATUS    RESTARTS      AGE
    kestra-minio-7fdfd75b8c-27f2f        1/1     Running   0             62m
    kestra-postgresql-0                  1/1     Running   0             62m
    kestra-standalone-59b5f7bbb8-25bnc   1/2     CrashLoopBackOff   4 (19s ago)   2m15s
    

    See that the pod has failed to start with a CrashLoopBackOff. The log contains:

    Device "ip_tables" does not exist.
    modprobe: can't change directory to '/lib/modules': No such file or directory
    iptables v1.8.10 (nf_tables)
    [WARN  tini (99)] Tini is not running as PID 1 and isn't registered as a child subreaper.
    Zombie processes will not be re-parented to Tini, so zombie reaping won't work.
    To fix the problem, use the -s option or set the environment variable TINI_SUBREAPER to register Tini as a child subreaper, or run Tini as PID 1.
    failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to register "bridge" driver: unable to add return rule in DOCKER-ISOLATION-STAGE-1 chain:  (iptables failed: iptables --wait -A DOCKER-ISOLATION-STAGE-1 -j RETURN: iptables v1.8.10 (nf_tables):  RULE_APPEND failed (No such file or directory): rule in chain DOCKER-ISOLATION-STAGE-1
    (exit status 4))
    [rootlesskit:child ] error: command [docker-init -- dockerd --host=unix:///dind//docker.sock --host=tcp://0.0.0.0:2376 --tlsverify --tlscacert /certs/server/ca.pem --tlscert /certs/server/cert.pem --tlskey /certs/server/key.pem --log-level=fatal --group=1000] exited: exit status 1
    [rootlesskit:parent] error: child exited: exit status 1
    
  4. update values.yaml to include

    dind:
      image:
        tag: dind
      args:
        - --log-level=fatal
      securityContext:
        runAsUser: 0
        runAsGroup: 0

    securityContext:
      runAsUser: 0
      runAsGroup: 0
    
  5. upgrade using helm upgrade kestra kestra/kestra -n kestra --create-namespace -f .\values.yaml

  6. Check logs.

    Certificate request self-signature ok
    subject=CN = docker:dind server
    /certs/server/cert.pem: OK
    Certificate request self-signature ok
    subject=CN = docker:dind client
    /certs/client/cert.pem: OK
    iptables v1.8.10 (nf_tables)
    failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to register "bridge" driver: 
    unable to add return rule in DOCKER-ISOLATION-STAGE-1 chain:  (iptables failed: iptables --wait -A DOCKER-ISOLATION-STAGE-1 -j RETURN: iptables v1.8.10 (nf_tables):  RULE_APPEND failed (No such file or directory): rule in chain DOCKER-ISOLATION-STAGE-1
     (exit status 4))
    
  7. update values.yaml to include

    dind:
      image:
        tag: stable-dind
      args:
        - --log-level=fatal
      securityContext:
        runAsUser: 0
        runAsGroup: 0

    securityContext:
      runAsUser: 0
      runAsGroup: 0
    
  8. pod starts without any issues

    NAME                                 READY   STATUS    RESTARTS   AGE
    kestra-minio-7fdfd75b8c-27f2f        1/1     Running   0          59m
    kestra-postgresql-0                  1/1     Running   0          59m
    kestra-standalone-7b798467d7-m44g2   2/2     Running   0          45s
    

Environment Information

  • Kestra Version:
    latest
  • Helm Charts version:
    latest
  • Docker Image version:
    latest

Windows 11 - Docker Desktop

Embedded cluster role to make podCreate working out of the box

Feature description

Currently, one needs to set up cluster roles for Kestra in order to enable it to create Pods and stream execution logs.

We do it like this:

resource "kubernetes_cluster_role" "pod_creator" {
  metadata {
    name = "pod-creator"
  }

  rule {
    api_groups = [""]
    resources  = ["namespaces", "pods"]
    verbs      = ["get", "list", "watch", "create", "delete"]
  }
}

resource "kubernetes_cluster_role" "pod_log_reader" {
  metadata {
    name = "pod-log-reader"
  }

  rule {
    api_groups = [""]
    resources  = ["pods/log"]
    verbs      = ["get", "list"]
  }
}

resource "kubernetes_cluster_role" "pod_executor" {
  metadata {
    name = "pod-executor"
  }

  rule {
    api_groups = [""]
    resources  = ["pods/exec"]
    verbs      = ["get", "post"]
  }
}

resource "kubernetes_cluster_role_binding" "kestra_pod_creator" {
  metadata {
    name = "kestra-pod-creator"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "pod-creator"
  }
  subject {
    kind      = "User"
    name      = "system:serviceaccount:kestra:default"
    namespace = "kestra"
  }
}

resource "kubernetes_cluster_role_binding" "kestra_pod_log_reader" {
  metadata {
    name = "kestra-pod-log-reader"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "pod-log-reader"
  }
  subject {
    kind      = "User"
    name      = "system:serviceaccount:kestra:default"
    namespace = "kestra"
  }
}

resource "kubernetes_cluster_role_binding" "kestra_pod_executor" {
  metadata {
    name = "kestra-pod-executor"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "pod-executor"
  }
  subject {
    kind      = "User"
    name      = "system:serviceaccount:kestra:default"
    namespace = "kestra"
  }
}

Instead, it could be embedded directly in the Kestra helm chart using templating, as Airbyte does here:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ include "airbyte.serviceAccountName" . }}-role
rules:
  - apiGroups: ["*"]
    resources: ["jobs", "pods", "pods/log", "pods/exec", "pods/attach"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] # over-permission for now
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ include "airbyte.serviceAccountName" . }}-binding
roleRef:
  apiGroup: ""
  kind: Role
  name: {{ include "airbyte.serviceAccountName" . }}-role
subjects:
  - kind: ServiceAccount
    name: {{ include "airbyte.serviceAccountName" . }}
{{- end }}

Error when trying to execute a sample flow

Expected Behavior

The flow should save successfully.

Actual Behaviour


When clicking the "Save" button, an error message with no description appears.

Steps To Reproduce

  1. AWS EKS cluster: v1.21.14-eks-6d3986b
  2. nodeInfo:
    architecture: amd64
    containerRuntimeVersion: docker://20.10.7
    kernelVersion: 5.4.156-83.273.amzn2.x86_64
    kubeProxyVersion: v1.21.5-eks-bc4871b
    kubeletVersion: v1.21.5-eks-bc4871b
    operatingSystem: linux
    osImage: Amazon Linux 2
  3. Deployed via kestra-helm-chart:v0.5.0
  4. Made the following changes to the values.yaml:
    deployments.executor.enabled: true,
    deployments.webserver.enabled: true,
    deployments.indexer.enabled: true,
    deployments.scheduler.enabled: true,
    deployments.worker.enabled: true,
    kafka.enabled: true,
    elasticsearch.enabled: true,
    
  5. When successfully deployed, try to run the sample flow as on: https://kestra.io/docs/developer-guide/flow/
  6. Try to save the flow
  7. While investigating, I found the error in:
    • deployment.apps/kep8ya-kestra-scheduler:
        at io.kestra.core.runners.RunContext.<init>(RunContext.java:94)
        at io.kestra.core.runners.RunContextFactory.of(RunContextFactory.java:30)
        at io.kestra.core.schedulers.AbstractScheduler.lambda$computeSchedulable$4(AbstractScheduler.java:137)
        at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown Source)
        at java.base/java.util.stream.ReferencePipeline$2$1.accept(Unknown Source)
        at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source)
        at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
        at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
        at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(Unknown Source)
        at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(Unknown Source)
        at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source)
        at java.base/java.util.stream.ReferencePipeline.forEach(Unknown Source)
        at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source)
        at java.base/java.util.stream.ReferencePipeline$2$1.accept(Unknown Source)
        at java.base/java.util.stream.ReferencePipeline$2$1.accept(Unknown Source)
        at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source)
        at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
        at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
        at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source)
        at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source)
        at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source)
        at io.kestra.core.schedulers.AbstractScheduler.computeSchedulable(AbstractScheduler.java:148)
        at io.kestra.core.runners.FlowListeners.listen(FlowListeners.java:99)
        at io.kestra.core.schedulers.AbstractScheduler.run(AbstractScheduler.java:103)
        at io.kestra.jdbc.runner.JdbcScheduler.run(JdbcScheduler.java:55)
        at io.kestra.cli.commands.servers.SchedulerCommand.call(SchedulerCommand.java:35)
        at io.kestra.cli.commands.servers.SchedulerCommand.call(SchedulerCommand.java:14)
        at picocli.CommandLine.executeUserObject(CommandLine.java:1953)
        at picocli.CommandLine.access$1300(CommandLine.java:145)
        at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2358)
        at picocli.CommandLine$RunLast.handle(CommandLine.java:2352)
        at picocli.CommandLine$RunLast.handle(CommandLine.java:2314)
        at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2179)
        at picocli.CommandLine$RunLast.execute(CommandLine.java:2316)
        at picocli.CommandLine.execute(CommandLine.java:2078)
        at io.kestra.cli.App.execute(App.java:67)
        at io.kestra.cli.App.main(App.java:47)
- pod/kep8ya-kestra-standalone-78bd4bff5d-t9qm9 kestra-standalone:
  2022-10-27 14:43:14,097 ERROR standalone-runner_2 .c.u.ThreadUncaughtExceptionHandlers Caught an exception 
  in Thread[standalone-runner_2,5,main]. Shutting down.
  java.lang.NullPointerException: null
  at io.kestra.core.runners.RunContext.<init>(RunContext.java:94)
  at io.kestra.core.runners.RunContextFactory.of(RunContextFactory.java:30)
  at io.kestra.core.schedulers.AbstractScheduler.lambda$computeSchedulable$4(AbstractScheduler.java:137)
  at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown Source)
  at java.base/java.util.stream.ReferencePipeline$2$1.accept(Unknown Source)
  at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source)
  at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
  at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
  at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(Unknown Source)
  at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(Unknown Source)
  at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source)
  at java.base/java.util.stream.ReferencePipeline.forEach(Unknown Source)
  at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source)
  at java.base/java.util.stream.ReferencePipeline$2$1.accept(Unknown Source)
  at java.base/java.util.stream.ReferencePipeline$2$1.accept(Unknown Source)
  at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source)
  at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
  at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
  at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source)
  at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source)
  at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source)
  at io.kestra.core.schedulers.AbstractScheduler.computeSchedulable(AbstractScheduler.java:148)
  at io.kestra.core.runners.FlowListeners.listen(FlowListeners.java:99)
  at io.kestra.core.schedulers.AbstractScheduler.run(AbstractScheduler.java:103)
  at io.kestra.jdbc.runner.JdbcScheduler.run(JdbcScheduler.java:55)
  at io.micrometer.core.instrument.internal.TimedRunnable.run(TimedRunnable.java:49)
  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.base/java.lang.Thread.run(Unknown Source)
  2022-10-27 14:43:14,159 WARN  command-shutdown io.kestra.cli.AbstractCommand Receiving shutdown ! Try to graceful 
  exit

Environment Information

  • Kestra Version: v0.5.0
  • Helm Charts version: v0.5.0

Provide support for 2 ingress: public webhook endpoint & private UI

Feature description

As Kestra has these two types of entities accessing the Webserver:

  • users for UI
  • webhooks for triggers

They serve distinct purposes, so it makes sense to separate their access.

When going to a production environment, we might want to allow public access to the webhook endpoint /api/v1/executions/webhook but restrict all the other paths (the UI).

Currently, a single ingress is provided, allowing us to define a specific annotation for the webhook endpoint within the Helm chart:

ingress:
  enabled: true
  className: ""
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: kestra.webhook.${load-balancer-ip}.nip.io
      paths:
        - path: /api/v1/executions/webhook
          pathType: Prefix

It could be interesting to support another ingress for general UI access, so that it can be restricted using IAP or a VPN.

This would simplify the deployment process.
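As a sketch, a second ingress section in values.yaml might look like the following; the `extraIngress` key is hypothetical, not an existing chart value, and the whitelist range is an example:

```yaml
# Hypothetical additional ingress for the UI, access-restricted via annotations.
extraIngress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8"  # example VPN range
  hosts:
    - host: kestra.ui.internal.example.com
      paths:
        - path: /
          pathType: Prefix
```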

Init flows via helm ?

Feature description

Hello,

I am currently using Kestra on Kubernetes, deployed with Helm. I would like to keep the installation fairly stateless, in the sense that my requirements are currently limited (i.e. without storage). Is there a way, during Helm deployment, to create flows using the values.yaml file?

Thx
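As a sketch, such a feature might accept flow definitions directly in values.yaml and render them into a ConfigMap mounted by Kestra; the `flows` key and the task type below are purely illustrative, not existing chart options:

```yaml
# Hypothetical values structure; not an existing chart option.
flows:
  - id: hello-world
    namespace: company.team
    tasks:
      - id: log
        type: io.kestra.plugin.core.log.Log  # illustrative task type
        message: Hello from Helm!
```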

Notes on using the Helm Chart

Issue description

  • The image's default tag is latest-full and the default pullPolicy is IfNotPresent. As the default tag is not latest, the image will never be updated. I recommend switching the default pullPolicy to Always.
  • The entrypoint is overridden by an executable property; this seems unnecessary and is moreover an issue on EE, as the name of the executable differs from the OSS one.
  • There is no easy way to create an admin user on the EE
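The recommended change from the first point can be sketched in values.yaml as:

```yaml
image:
  tag: latest-full
  pullPolicy: Always  # ensures the mutable latest-full tag is re-pulled on each pod start
```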

Helm Lint failed with errors

Expected Behavior

Ideally, there shouldn't be any errors after executing helm lint.

Actual Behaviour

helm lint .\charts\kestra\
==> Linting .\charts\kestra\
[ERROR] Chart.yaml: chart type is not valid in apiVersion 'v1'. It is valid in apiVersion 'v2'
[ERROR] templates/: template: kestra/templates/secret.yaml:1:27: executing "kestra/templates/secret.yaml" at <include "kestra.k8s-config" $>: error calling include: template: kestra/templates/_helpers.tpl:95:30: executing "kestra.k8s-config" at <include "kestra.postgres.url" .>: error calling include: template: kestra/templates/_helpers.tpl:84:14: executing "kestra.postgres.url" at <$.Values.postgresql.primary.service.ports.postgresql>: nil pointer evaluating interface {}.ports

Steps To Reproduce

I downloaded the chart to my local system and then ran the helm lint command on the chart.

helm version:

helm version
version.BuildInfo{Version:"v3.9.4", GitCommit:"dbc6d8e20fe1d58d50e6ed30f09a04a77e4c68db", GitTreeState:"clean", GoVersion:"go1.17.13"}

I am using powershell.

Environment Information

  • Kestra Version: appVersion: "0.13.0"
  • Helm Charts version: version: 0.13.0
  • Chart's source downloaded from: master branch at commit: 92ed39e
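Assuming the nil-pointer error comes from the postgresql subchart values not being available during lint, a possible workaround (an untested assumption) is to supply the missing value explicitly:

```shell
# Provide the port the template dereferences so lint can render it.
helm lint ./charts/kestra --set postgresql.primary.service.ports.postgresql=5432
```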

Update Postgres Version

Feature description

I have an issue deploying your latest kestra chart (0.16.0) to my ARM cluster.
Screenshot from 2024-04-24 14-37-06

This is because the postgres container used in your chart, bitnami/postgresql:14.5.0-debian-11-r35, is not built for ARM.
See on Docker Hub.
Screenshot from 2024-04-24 14-38-54

Could you update the chart to a version supporting a complete ARM setup? That would be awesome.

Regards
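As a workaround sketch, the Bitnami subchart image can usually be overridden from values.yaml; the tag below is an assumption, so verify a multi-arch tag on Docker Hub before using it:

```yaml
postgresql:
  image:
    repository: bitnami/postgresql
    tag: 16.2.0-debian-12-r0  # assumed multi-arch tag; verify on Docker Hub
```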

Allow to set JAVA_OPTS on specific deployment

Hi !

Description

We aim to configure -XX:MaxRAMPercentage=50.0 specifically on the worker deployments in our environment. However, we face a challenge: JAVA_OPTS is set globally across all deployments through the extraEnv configuration, which includes a specific setting for the logback configuration file path.

Current Configuration

Currently, JAVA_OPTS are globally set for all deployments as follows:

extraEnv:
  - name: JAVA_OPTS
    value: "-Dlogback.configurationFile=file:/app/log-format/logback.xml"

Issue

We need to apply -XX:MaxRAMPercentage=50.0 to the workers to better manage their JVM memory usage, but we want to avoid altering the existing JAVA_OPTS of the other deployments, which could lead to unintended changes in their configurations, especially around logging.

Requested Feature/Enhancement

The ability to append or set specific JAVA_OPTS for individual deployments without impacting the global settings defined in extraEnv. This feature would allow us to:

  • Set -XX:MaxRAMPercentage=50.0 exclusively on the worker deployments.
  • Preserve the global JAVA_OPTS settings for other deployments, ensuring no disruption in their configurations, particularly regarding logging.
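As a sketch, a per-deployment override might look like this; whether deployments.worker.extraEnv is (or would be) supported is an assumption about the chart, not a documented value:

```yaml
extraEnv:
  - name: JAVA_OPTS
    value: "-Dlogback.configurationFile=file:/app/log-format/logback.xml"

deployments:
  worker:
    # Hypothetical: a deployment-level extraEnv taking precedence over the global one.
    extraEnv:
      - name: JAVA_OPTS
        value: "-Dlogback.configurationFile=file:/app/log-format/logback.xml -XX:MaxRAMPercentage=50.0"
```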

Kestra pods crash one time when releasing it with Helm

Expected Behavior

No restart of the Kestra pod occurs

Actual Behaviour

When you deploy Kestra for the first time with helm install kestra kestra/kestra, the kestra-standalone pod crashes once with the following exception:

org.postgresql.util.PSQLException: Connection to kestra-postgresql:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
	at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:319)
	at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
	at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:247)
	at org.postgresql.Driver.makeConnection(Driver.java:434)
	at org.postgresql.Driver.connect(Driver.java:291)
	at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
	at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:364)
	at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
	at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:476)
	at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561)
	at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115)
	at com.zaxxer.hikari.HikariDataSource.<init>(HikariDataSource.java:81)
	at io.micronaut.configuration.jdbc.hikari.HikariUrlDataSource.<init>(HikariUrlDataSource.java:35)
	at io.micronaut.configuration.jdbc.hikari.DatasourceFactory.dataSource(DatasourceFactory.java:66)
	at io.micronaut.configuration.jdbc.hikari.$DatasourceFactory$DataSource0$Definition.build(Unknown Source)
	at io.micronaut.context.BeanDefinitionDelegate.build(BeanDefinitionDelegate.java:161)
	at io.micronaut.context.DefaultBeanContext.resolveByBeanFactory(DefaultBeanContext.java:2354)
	at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:2305)
	at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:2251)
	at io.micronaut.context.DefaultBeanContext.createRegistration(DefaultBeanContext.java:3016)
	at io.micronaut.context.SingletonScope.getOrCreate(SingletonScope.java:80)
	at io.micronaut.context.DefaultBeanContext.findOrCreateSingletonBeanRegistration(DefaultBeanContext.java:2918)
	at io.micronaut.context.DefaultBeanContext.loadContextScopeBean(DefaultBeanContext.java:2737)
	at io.micronaut.context.DefaultBeanContext.initializeContext(DefaultBeanContext.java:1915)
	at io.micronaut.context.DefaultApplicationContext.initializeContext(DefaultApplicationContext.java:249)
	at io.micronaut.context.DefaultBeanContext.readAllBeanDefinitionClasses(DefaultBeanContext.java:3326)
	at io.micronaut.context.DefaultBeanContext.finalizeConfiguration(DefaultBeanContext.java:3684)
	at io.micronaut.context.DefaultBeanContext.start(DefaultBeanContext.java:341)
	at io.micronaut.context.DefaultApplicationContext.start(DefaultApplicationContext.java:194)
	at io.micronaut.configuration.picocli.MicronautFactory.<init>(MicronautFactory.java:59)
	at io.kestra.cli.App.execute(App.java:67)
	at io.kestra.cli.App.main(App.java:47)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
	at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.base/java.net.AbstractPlainSocketImpl.doConnect(Unknown Source)
	at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source)
	at java.base/java.net.AbstractPlainSocketImpl.connect(Unknown Source)
	at java.base/java.net.SocksSocketImpl.connect(Unknown Source)
	at java.base/java.net.Socket.connect(Unknown Source)
	at org.postgresql.core.PGStream.createSocket(PGStream.java:241)
	at org.postgresql.core.PGStream.<init>(PGStream.java:98)
	at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:109)
	at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:235)
	... 31 common frames omitted
Exception in thread "main" io.micronaut.context.exceptions.BeanInstantiationException: Bean definition [javax.sql.DataSource] could not be loaded: Error instantiating bean of type  [javax.sql.DataSource]

Message: Failed to initialize pool: Connection to kestra-postgresql:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Path Taken: DataSource.dataSource(DatasourceConfiguration datasourceConfiguration)
	at io.micronaut.context.DefaultBeanContext.initializeContext(DefaultBeanContext.java:1921)
	at io.micronaut.context.DefaultApplicationContext.initializeContext(DefaultApplicationContext.java:249)
	at io.micronaut.context.DefaultBeanContext.readAllBeanDefinitionClasses(DefaultBeanContext.java:3326)
	at io.micronaut.context.DefaultBeanContext.finalizeConfiguration(DefaultBeanContext.java:3684)
	at io.micronaut.context.DefaultBeanContext.start(DefaultBeanContext.java:341)
	at io.micronaut.context.DefaultApplicationContext.start(DefaultApplicationContext.java:194)
	at io.micronaut.configuration.picocli.MicronautFactory.<init>(MicronautFactory.java:59)
	at io.kestra.cli.App.execute(App.java:67)
	at io.kestra.cli.App.main(App.java:47)
Caused by: io.micronaut.context.exceptions.BeanInstantiationException: Error instantiating bean of type  [javax.sql.DataSource]

Message: Failed to initialize pool: Connection to kestra-postgresql:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Path Taken: DataSource.dataSource(DatasourceConfiguration datasourceConfiguration)
	at io.micronaut.context.DefaultBeanContext.resolveByBeanFactory(DefaultBeanContext.java:2367)
	at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:2305)
	at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:2251)
	at io.micronaut.context.DefaultBeanContext.createRegistration(DefaultBeanContext.java:3016)
	at io.micronaut.context.SingletonScope.getOrCreate(SingletonScope.java:80)
	at io.micronaut.context.DefaultBeanContext.findOrCreateSingletonBeanRegistration(DefaultBeanContext.java:2918)
	at io.micronaut.context.DefaultBeanContext.loadContextScopeBean(DefaultBeanContext.java:2737)
	at io.micronaut.context.DefaultBeanContext.initializeContext(DefaultBeanContext.java:1915)
	... 8 more
Caused by: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: Connection to kestra-postgresql:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
	at com.zaxxer.hikari.pool.HikariPool.throwPoolInitializationException(HikariPool.java:596)
	at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:582)
	at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115)
	at com.zaxxer.hikari.HikariDataSource.<init>(HikariDataSource.java:81)
	at io.micronaut.configuration.jdbc.hikari.HikariUrlDataSource.<init>(HikariUrlDataSource.java:35)
	at io.micronaut.configuration.jdbc.hikari.DatasourceFactory.dataSource(DatasourceFactory.java:66)
	at io.micronaut.configuration.jdbc.hikari.$DatasourceFactory$DataSource0$Definition.build(Unknown Source)
	at io.micronaut.context.BeanDefinitionDelegate.build(BeanDefinitionDelegate.java:161)
	at io.micronaut.context.DefaultBeanContext.resolveByBeanFactory(DefaultBeanContext.java:2354)
	... 15 more
Caused by: org.postgresql.util.PSQLException: Connection to kestra-postgresql:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
	at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:319)
	at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
	at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:247)
	at org.postgresql.Driver.makeConnection(Driver.java:434)
	at org.postgresql.Driver.connect(Driver.java:291)
	at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
	at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:364)
	at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
	at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:476)
	at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561)
	... 22 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
	at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.base/java.net.AbstractPlainSocketImpl.doConnect(Unknown Source)
	at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source)
	at java.base/java.net.AbstractPlainSocketImpl.connect(Unknown Source)
	at java.base/java.net.SocksSocketImpl.connect(Unknown Source)
	at java.base/java.net.Socket.connect(Unknown Source)
	at org.postgresql.core.PGStream.createSocket(PGStream.java:241)
	at org.postgresql.core.PGStream.<init>(PGStream.java:98)
	at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:109)
	at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:235)
	... 31 more

The pod starts successfully after one restart.
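A common mitigation, assuming the chart allowed injecting extra init containers (it may not), is to wait for Postgres before starting Kestra:

```yaml
# Hypothetical init container; the chart would need to support injecting it.
initContainers:
  - name: wait-for-postgres
    image: busybox:1.36
    command: ["sh", "-c", "until nc -z kestra-postgresql 5432; do echo waiting for postgres; sleep 2; done"]
```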

Steps To Reproduce

No response

Environment Information

  • Kestra Version: 0.5.3
  • Helm Charts version:
  • Docker Image version:

Allow providing Ephemeral PVCs on Worker to avoid space limitation of the node disk

Feature description

Hi !

Description

Currently, the worker deployment uses an emptyDir volume, which stores temporary data directly on the node's filesystem. This can lead to a task filling up the node's disk and the worker getting killed.

Requested Feature/Enhancement

The ability to set a PVC for the temporary data directory of the worker deployment.
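A sketch of what such an override could look like, using Kubernetes generic ephemeral volumes. The `deployments.worker.extraVolumes` / `extraVolumeMounts` keys and the mount path are illustrative, not the chart's current API:

```yaml
deployments:
  worker:
    extraVolumes:
      - name: worker-tmp
        ephemeral:
          volumeClaimTemplate:
            spec:
              accessModes: ["ReadWriteOnce"]
              storageClassName: standard   # illustrative class name
              resources:
                requests:
                  storage: 100Gi
    extraVolumeMounts:
      - name: worker-tmp
        mountPath: /tmp                    # Kestra's working-dir location may differ
```

The ephemeral PVC follows the pod's lifecycle, so node disk pressure would no longer limit task storage.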

Error "installation failed" on helm install

Expected Behavior

Following the documentation should lead to a working instance.

Actual Behaviour

An error message is written to the console; see Steps To Reproduce.

Steps To Reproduce

➜  ~ helm install kestra
Error: INSTALLATION FAILED: must either provide a name or specify --generate-name
➜  ~ helm install kestra kestra
Error: INSTALLATION FAILED: non-absolute URLs should be in form of repo_name/path_to_chart, got: kestra
➜  ~ helm install --generate-name kestra
Error: INSTALLATION FAILED: non-absolute URLs should be in form of repo_name/path_to_chart, got: kestra

The repo had previously been added to helm successfully:

➜  ~ helm repo list
NAME            	URL
....
kestra          	https://helm.kestra.io/

Environment Information

  • ~ helm version
    version.BuildInfo{Version:"v3.10.2", GitCommit:"50f003e5ee8704ec937a756c646870227d7c8b58", GitTreeState:"clean", GoVersion:"go1.19.3"}

crash on postgres replica

Expected Behavior

postgresql:
  enabled: true
  primary:    
    extendedConfiguration: |
        max_connections = 2048
    persistence:
      size: 1024Gi
  readReplicas:
    replicaCount: 2
    extendedConfiguration: |
        max_connections = 2048
    persistence:
      size: 1024Gi  

Should work well.

Actual Behaviour

The kestra-postgresql DNS name cannot be resolved.

Looking into the chart's template:

{{- define "kestra.postgres.url" }}
{{- $port := $.Values.postgresql.primary.service.ports.postgresql | toString }}
{{- printf "%s-%s:%s" .Release.Name "postgresql" $port -}}

It looks like this helper never takes the read-replica settings into account.
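A replica-aware helper could take the subchart's service naming into account. This is a sketch only, assuming the Bitnami PostgreSQL subchart's convention of naming the primary service `<release>-postgresql-primary` when `architecture: replication` is set; it is not the chart's current code:

```yaml
{{/*
  Sketch only: assumes the Bitnami PostgreSQL subchart names the primary
  service <release>-postgresql-primary when architecture=replication.
*/}}
{{- define "kestra.postgres.url" }}
{{- $port := $.Values.postgresql.primary.service.ports.postgresql | toString }}
{{- if eq ($.Values.postgresql.architecture | default "standalone") "replication" }}
{{- printf "%s-postgresql-primary:%s" .Release.Name $port -}}
{{- else }}
{{- printf "%s-%s:%s" .Release.Name "postgresql" $port -}}
{{- end }}
{{- end }}
```

Kestra needs a writable connection, so the URL should always target the primary service, never the `-read` service, even when replicas are enabled.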

Steps To Reproduce

No response

Environment Information

  • Kestra Version: 0.9.1
  • Helm Charts version: 0.5.1
  • Docker Image version: 0.9.1

Allow passing extra resources in the chart

Feature description

It would be really helpful if we can include additional resources in the chart so that they become associated with it and managed together as one helm release.

The current situation we're facing right now is that we have:

  1. a SealedSecret object that contains our kestra config
  2. the kestra release
    However, the sealed secret is not associated with the release and therefore argocd cannot manage them as one unit.
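One common pattern for this is an `extraManifests` value rendered through `tpl` in a template. A sketch of what the values side could look like — the `extraManifests` key is illustrative and not currently supported by the chart, and the ciphertext is a truncated placeholder:

```yaml
extraManifests:
  - apiVersion: bitnami.com/v1alpha1
    kind: SealedSecret
    metadata:
      name: kestra-config
    spec:
      encryptedData:
        application-secrets.yaml: AgB3...   # illustrative, truncated ciphertext
```

With this, the SealedSecret is rendered as part of the release and tools like Argo CD manage both objects as one unit.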

Helm Release auto-updates outside of given appVersion

Expected Behavior

If the helm chart for a given version (i.e. chart 0.16.0 with appVersion v0.16.0) is pulled, it should not update to another version (v0.17.0).

Actual Behaviour

On pod restart, Kestra gets updated because values.yaml uses the latest-full image tag with pullPolicy: Always. The chart's appVersion does not seem to be applied.
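As a workaround, the image can be pinned explicitly in values. The tag shown here is an example; the keys follow the chart's image section:

```yaml
image:
  repository: kestra/kestra
  tag: v0.16.0-full        # pin a concrete release instead of latest-full
  pullPolicy: IfNotPresent # avoid re-pulling the tag on every pod restart
```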

Steps To Reproduce

  1. Deploy the Helm chart at version 0.16.0.
  2. Kestra ends up deployed as 0.17.0.

Environment Information

  • Kestra Version: 0.17.0
  • Helm Charts version: 0.16.0
  • Docker Image version: 0.17.0.

Upgrade postgresql engine version

Feature description

Are there any plans to upgrade the bundled PostgreSQL version? The image uses 11.9.2, which is only published for amd64 and does not support ARM.

ingress: Required value: pathType must be specified

Expected Behavior

With Kubernetes v1.23.5, which requires pathType on Ingress paths, the template should render a valid Ingress:

        {{- range .paths }}
          - path: {{ . }}
            backend:
              service:
                name: {{ $fullName }}
                port:
                  name: http
        {{- end }}
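On networking.k8s.io/v1, every path entry must carry a pathType, so the fragment above would need to render something like the following. This is a sketch that keeps the chart's existing range structure; Prefix is one reasonable default and could be made configurable per path:

```yaml
        {{- range .paths }}
          - path: {{ . }}
            pathType: Prefix
            backend:
              service:
                name: {{ $fullName }}
                port:
                  name: http
        {{- end }}
```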

Actual Behaviour

The upgrade fails with Required value: pathType must be specified.

Steps To Reproduce

No response

Environment Information

  • Kestra Version: master
  • Helm Charts version: 3.8.1
  • Docker Image version: 1.23.5

Error: INSTALLATION FAILED: failed post-install: timed out waiting for the condition

Expected Behavior

Expected it to install and be accessible publicly

Actual Behaviour

Running helm install kestra kestra/kestra fails with Error: INSTALLATION FAILED: failed post-install: timed out waiting for the condition. Increasing the timeout with helm install kestra kestra/kestra --set startupapicheck.timeout=5m produces the same error.

Running kubectl get pods -w shows:

NAME                                   READY   STATUS             RESTARTS       AGE
kestra-minio-686d88c7bb-vdkz5          0/1     Pending            0              7m21s
kestra-minio-make-bucket-job-d9d6l     0/1     CrashLoopBackOff   4 (48s ago)    7m21s
kestra-postgresql-0                    1/1     Running            0              7m21s
kestra-standalone-77884789b7-r7sqf     2/2     Running            2 (7m3s ago)   7m21s

There is also no load balancer exposing the service publicly; kubectl get service shows:

NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
kestra-minio           ClusterIP      10.0.5.205     <none>          9000/TCP                     13m
kestra-minio-console   ClusterIP      10.0.141.35    <none>          9001/TCP                     13m
kestra-postgresql      ClusterIP      10.0.20.52     <none>          5432/TCP                     13m
kestra-postgresql-hl   ClusterIP      None           <none>          5432/TCP                     13m
kestra-service         ClusterIP      10.0.14.78     <none>          8080/TCP                     13m
kubernetes             ClusterIP      10.0.0.1       <none>          443/TCP                      69m

Steps To Reproduce

No response

Environment Information

  • Kestra Version:
  • Helm Charts version: Helm v3.10.3
  • Docker Image version:
  • Azure AKS

Override default minio resources.requests.memory

Issue description

For local Kubernetes setups (e.g. minikube, k3d), MinIO's defaults need to be overridden. By default, MinIO sets
resources.requests.memory=16Gi,
which leaves the helm install command stuck waiting for schedulable resources:

Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  10s   default-scheduler  0/2 nodes are available: 2 Insufficient memory.

As a workaround, this can be set explicitly with
helm install my-release kestra/kestra --set minio.resources.requests.memory=1Gi
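The same override can also live in a values file (keys follow the bundled MinIO subchart's resources block):

```yaml
minio:
  resources:
    requests:
      memory: 1Gi
```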
