helm-charts's Issues

Tracetest postgres user error

Describe the bug
The Tracetest default helm chart values result in the following error:

could not execute migrations: could not get driver from postgres connection: pq: password authentication failed for user "tracetest"

To Reproduce
Steps to reproduce the behavior:

  1. Run helm install tracetest kubeshop/tracetest --values values.yaml (the only changes are the postgres password and the collector configs)
  2. See error

Expected behavior
Tracetest runs properly

Version / Cluster

  • Tracetest helm chart version 0.2.47
  • Kubernetes v1.25.4 (AKS)

Screenshots
Screenshot 2023-04-14 at 10 53 10

Additional context

My values.yaml

# Section for configuring the postgres database that will be used by Tracetest
postgresql:
  # For now, this is required to be enabled, otherwise tracetest will not
  # be able to function properly.
  enabled: true
  architecture: standalone
  image:
    tag: 14.6.0-debian-11-r13

  # credentials for accessing the database
  auth:
    database: "tracetest"
    username: "tracetest"
    password: test123
    existingSecret: ""

# Provisioning allows you to define initial settings to be loaded into the database on first run.
# These will only be applied when running tracetest against an empty database. If tracetest has already
# been configured, the provisioning settings are ignored.
provisioning: |
  ---
  # Datastore is where your application stores its traces. You can define several different datastores with
  # different names, however, only one is used by Tracetest.
  # You can see all available datastore configurations at https://kubeshop.github.io/tracetest/supported-backends/
  type: DataStore
  spec:
    name: otlp
    # Indicates that this datastore is a jaeger instance
    type: otlp
    # Configures how tracetest connects to jaeger.
    otlp:
      type: otlp
  ---
  type: Config
  spec:
    analyticsEnabled: false
  ---
  type: PollingProfile
  spec:
    name: Custom Profile
    strategy: periodic
    default: true
    periodic:
      timeout: 2m
      retryDelay: 3s



# This section configures the strategy for pooling traces from the trace backend
poolingConfig:
  # How long tracetest will wait for a complete trace before timing out
  # If you have long-running operations that will generate part of your trace, you have
  # to change this attribute to be greater than the execution time of your operation.
  maxWaitTimeForTrace: 30s

  # How long tracetest will wait to retry fetching the trace. If you define it as 5s it means that
  # tracetest will retrieve the operation trace every 5 seconds and check if it's complete. It will
  # do that until the operation times out.
  retryDelay: 5s

# Section for anonymous analytics. If it's enabled, tracetest will collect anonymous analytics data
# to help us improve our project. If you don't want to send analytics data, set enabled to false.
analytics:
  enabled: true

# Section to configure how telemetry works in Tracetest.
telemetry:
  # exporters are opentelemetry exporters. They configure how tracetest generates and sends telemetry to
  # a datastore.
  exporters:
    # This is an exporter called collector, but it could be named anything else.
    collector:
      # configures the service.name attribute in all trace spans generated by tracetest
      serviceName: tracetest
      # indicates the percentage of traces that would be sent to the datastore. 100 = 100%
      sampling: 100
      # configures the exporter
      exporter:
        # Tracetest sends data using the opentelemetry collector. For now there is no other
        # alternative. But the collector is flexible enough to send traces to any other tracing backend
        # you might need.
        type: collector
        collector:
          # endpoint to send traces to the collector
          endpoint: otel-collector.open-telemetry.svc.cluster.local:4317

# Configures the server
server:
  # Indicates the port that Tracetest will run on
  httpPort: 11633
  # Indicates which telemetry components will be used by Tracetest
  telemetry:
    # The exporter that tracetest will use to send telemetry
    # exporter: collector
    # The exporter that tracetest will send the triggering transaction span. This is optional. If you
    # want to have the trigger transaction span in your trace, you have to configure this field to
    # send the span to the same place your application sends its telemetry.
    # applicationExporter: collector

replicaCount: 1

# You don't have to change anything below this line.

image:
  repository: kubeshop/tracetest
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 11633
  annotations: {}

ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}


A way to add custom labels into preUpgrade hook

Why should it be implemented?
In some cases there is a need to add custom labels to a resource such as the pre-upgrade job without using global values.

Describe the improvement
The user could customize chart behavior more than is currently possible.

Additional context
For the webhook patch job there is already an option to add custom labels; see the sketch below.
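
A minimal sketch of what the requested values could look like, assuming a hypothetical preUpgradeHook.labels map (not present in the chart today) that the job template would render into the job's metadata:

preUpgradeHook:
  labels:
    team: qa
    sidecar.istio.io/inject: "false"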

Testkube API endpoint configuration

Describe the bug
I deployed the helm chart version 1.9.4 and apparently everything is fine, but when I try to change the Testkube API endpoint from the GUI option I receive "Could not receive data from the specified api endpoint". I tried with http and https, with /results/v1 and without it, with "service_name + "." + service_namespace + ".svc.cluster.local", etc.

To Reproduce
Steps to reproduce the behavior:

  1. helm install testkube testkube/testkube -f my-values.yaml --create-namespace --namespace testkube --version 1.9.4
  2. All pods are up except the two test-connection pods: one fails with wget: bad address 'testkube-operator:80' in its log, and the other with:
Connecting to testkube-api-server:8088 (172.20.201.170:8088)
wget: can't connect to remote host (172.20.201.170): Connection refused

Expected behavior
I want to see my test results

Version / Cluster

  • Which testkube version? helm chart version 1.9.4
  • What Kubernetes cluster? (e.g. GKE, EKS, Openshift etc, local KinD, local Minikube) EKS (AWS)
  • What Kubernetes version? 1.23

Screenshots
If applicable, add CLI commands/output to help explain your problem.
(screenshot attached)

Additional context
I use the default testkube namespace and an internal nginx load balancer.

my-values.yaml in chart:

testkube-dashboard:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: my-nginx
    hosts:
    - testkube.miurl.com
  apiServerEndpoint: https://testkube.myurl.com/results/v1 #not working ## tried with http://testkube-api-server.testkube.svc.cluster.local/results/v1 without success

testkube-api:
  uiIngress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: my-nginx
    hosts:
    - testkube.myurl.com
  cliIngress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: my-nginx
    hosts:
    - testkube.myurl.com

TestTriggers webhook fails on testkube helm chart update with x509 certificate error

Describe the bug
The TestTriggers validating webhook (vtesttrigger.kb.io) throws an error on Testkube Helm chart update.

To Reproduce
Steps to reproduce the behavior:

  1. Setup "labels to labels" testkube TestTrigger
  2. Update testkube helm release with a newest version

Expected behavior
Testkube should be updated accordingly, all dependent resources (tests, test suites, triggers, executors) should stay as is

Version / Cluster

  • testkube helm release version 1.10.1
  • Kubernetes cluster in Microsoft Azure
  • Kubernetes version 1.23.5

Additional context
The error from Flux:

TestTrigger/{env}/{testTriggerName} dry-run failed, reason: InternalError, error: Internal error occurred: failed calling webhook "vtesttrigger.kb.io": failed to call webhook: Post "https://testkube-operator-webhook-service.{env}.svc/validate-tests-testkube-io-v1-testtrigger?timeout=10s": x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "nil1")

Error: INSTALLATION FAILED

Any fresh installation of testkube in my local K8s fails due to a helm nil pointer issue.
I think it is due to the new PR: https://github.com/kubeshop/helm-charts/pull/258/files#diff-b3d5a53daf2423ea63a61fbf1d77611d46673954cfe7cfcf6e8dd059204708e3

โฏ helm install --create-namespace my-testkube testkube/testkube
Error: INSTALLATION FAILED: template: testkube/charts/testkube-api/templates/deployment.yaml:36:26: executing "testkube/charts/testkube-api/templates/deployment.yaml" at <.Values.nats.secretName>: nil pointer evaluating interface {}.secretName
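
A possible workaround sketch, assuming the failing template only dereferences .Values.nats.secretName inside the testkube-api sub-chart, would be to define the key (even empty) in the parent values; the exact key path is a guess, not a confirmed fix:

testkube-api:
  nats:
    secretName: ""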

testkube regression: helm chart fails to install with flux due to duplicate yaml keys

Describe the bug

#449 introduced a regression with duplicate yaml key between the template and default values

          readinessProbe:
            {{- toYaml .Values.readinessProbe | nindent 12 }}
            httpGet:
              path: /health
              port: {{ .Values.service.port }}
          resources:

          ## Testkube API Liveness probe
          livenessProbe:
            httpGet:
              path: /health
              scheme: HTTP

As a result, installation from fluxcd fails with message

      Helm upgrade failed: error while running post render on files: map[string[]interface {}(nil): yaml: unmarshal errors:
        line 107: mapping key "httpGet" already defined at line 100
        line 118: mapping key "httpGet" already defined at line 112

See related fluxcd/helm-controller#283 (comment)

While duplicate keys are ignored by some tools but not others, this problem is unfortunately not detected by helm lint (even once #452 gets included):

$ helm lint
==> Linting .
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed

More background explaining why fluxcd fails with duplicated yaml keys can be found at kubernetes-sigs/kustomize#3480

I searched for a while for a way to automatically detect such regressions in the future in the kubeshop/helm-charts github workflows:

  • some python libraries can detect duplicated yaml keys
  • the sigs.k8s.io/kustomize/kyaml cli is likely to detect such errors

https://github.com/kubernetes-sigs/kustomize/blob/2fa944b1cdc1f9ad4f72b3df8e67553e0b97d37d/cmd/kyaml/README.md

This package exists to expose kyaml filters directly as cli commands for the purposes of development of the kyaml package and as a reference implementation for using the libraries.

For now I failed to install kyaml (likely due to a lack of familiarity with the Go build environment):

go install -v sigs.k8s.io/kustomize/cmd/kyaml@latest
go: sigs.k8s.io/kustomize/cmd/kyaml@latest: module sigs.k8s.io/kustomize@latest found (v2.0.3+incompatible), but does not contain package sigs.k8s.io/kustomize/cmd/kyaml

To Reproduce

Steps to reproduce the behavior:

helm install --dry-run testkube-manual kubeshop/testkube --version 1.10.272  | tail -n +7 ../testkube.yaml | head -n -3 > testkube.yaml

Find duplicate keys

          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /health
            initialDelaySeconds: 30
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 10
            httpGet:
              path: /health
              port: 8088

Workaround

Revert to 1.10.256

Expected behavior
A clear and concise description of what you expected to happen.

Version / Cluster

  • Which testkube version? helm chart 1.10.257 to 1.10.272
  • What Kubernetes cluster? (e.g. GKE, EKS, Openshift etc, local KinD, local Minikube)
  • What Kubernetes version?

Screenshots
If applicable, add CLI commands/output to help explain your problem.

Additional context
Add any other context about the problem here.

Pre-upgrade job image is not configurable

Describe the bug
On environments where we use a docker proxy, testkube can't be deployed at all.

To Reproduce
Steps to reproduce the behavior:

  1. Try to install testkube on env where there is no direct internet connection
  2. It fails on the pre-upgrade hook job due to an ImagePullBackOff error

Expected behavior
I can configure the image, or the imageRepository (which is already defined in global) is used.

Version / Cluster

  • Which testkube version? 1.10.400
  • What Kubernetes cluster? (e.g. GKE, EKS, Openshift etc, local KinD, local Minikube) local cloud

Load OAuth2 CLIENT_SECRET and COOKIE_SECRET from secret provider

Describe the enhancement you'd like to see
Currently, there is no way to provide the OAuth2 CLIENT_SECRET and COOKIE_SECRET from a Secret. I use Argo CD to manage the Testkube installation and store the manifest in a Git repo. Anyone with access to the repo will be able to see the values.

It'd be great if the helm chart loaded the values from a secret if provided; see the sketch below.

I can take a stab and submit a pull request.

value: "{{ .Values.oauth2.env.clientSecret }}"

value: "{{ .Values.oauth2.env.cookieSecret }}"
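
A minimal sketch of the requested change, assuming hypothetical values keys such as oauth2.env.clientSecretRef (not in the chart today) that would let the deployment template read the value from a Secret via secretKeyRef instead of a literal value:

            {{- if .Values.oauth2.env.clientSecretRef }}
            - name: CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.oauth2.env.clientSecretRef.name }}
                  key: {{ .Values.oauth2.env.clientSecretRef.key }}
            {{- else }}
            - name: CLIENT_SECRET
              value: "{{ .Values.oauth2.env.clientSecret }}"
            {{- end }}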

Sample demo/staging values.yaml results in a CORS issue when hitting the test "run now" button

Describe the bug

Given testkube is deployed with an ingress for the dashboard on a distinct domain from the ingress for the api ui,
When the user hits "run now" on a test page
(screenshot attached)
Then an error is displayed: TypeError: failed to fetch resource
(screenshot attached)

Javascript console includes

   Access to fetch at 'https://testkube-api.domain.org/results/v1/tests/02-testkube-sample-tests-test-kuttl-data/executions' from origin 'https://testkube-ui.domain.org' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.

This seems to be rooted in the CORS preflight request being rejected with a 401 by the Nginx ingress.

(screenshot attached)

Preflight request recorded from edge as curl

curl 'https://testkube-api.domain.org/results/v1/tests/02-testkube-sample-tests-test-kuttl-data/executions' \
-X 'OPTIONS' \
-H 'Accept: */*' \
-H 'Accept-Language: en,fr;q=0.9,fr-FR;q=0.8,en-GB;q=0.7,en-US;q=0.6' \
-H 'Access-Control-Request-Headers: content-type' \
-H 'Access-Control-Request-Method: POST' \
-H 'Connection: keep-alive' \
-H 'Origin: https://testkube-ui.domain.org' \
-H 'Referer: https://testkube-ui.domain.org/' \
-H 'Sec-Fetch-Dest: empty' \
-H 'Sec-Fetch-Mode: cors' \
-H 'Sec-Fetch-Site: same-site' \
-H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69' \
--compressed
HTTP/1.1 401 Unauthorized
Date: Wed, 05 Apr 2023 08:52:15 GMT
Content-Type: text/html
Content-Length: 574
Connection: keep-alive

Root cause: the server snippet in values.yaml does not accept the OPTIONS HTTP method used by the CORS preflight, which is triggered before the POST (a sketch of a possible fix follows the snippet below):

nginx.ingress.kubernetes.io/server-snippet: |
  set $methodallowed "";
  set $pathallowed "";
  if ( $request_method = GET ){
    set $methodallowed "true";
    set $pathallowed "true";
  }
  if ( $request_method = POST ){
    set $methodallowed "true";
  }
  if ( $request_method = PATCH ){
    set $methodallowed "true";
  }
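
A sketch of one possible fix, assuming the rest of the snippet stays as shown above: add an OPTIONS branch before the existing checks so the CORS preflight is marked as allowed instead of being rejected with a 401:

  # allow the CORS preflight issued before the actual POST
  if ( $request_method = OPTIONS ){
    set $methodallowed "true";
    set $pathallowed "true";
  }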

To Reproduce
Steps to reproduce the behavior:

  1. Run '...'
  2. Specify '...'
  3. See error

Expected behavior
A clear and concise description of what you expected to happen.

Version / Cluster

  • Which testkube version? chart 1.10.257
  • What Kubernetes cluster? (e.g. GKE, EKS, Openshift etc, local KinD, local Minikube)
  • What Kubernetes version?

Screenshots
If applicable, add CLI commands/output to help explain your problem.

Additional context
Add any other context about the problem here.

Executors json for priv registries.

Describe the enhancement you'd like to see
The operator needs imagePullSecret definitions in values.

The API chart needs a method of 'bringing your own' executors.json for private image definitions.

Additional context

          - name: TESTKUBE_DEFAULT_EXECUTORS
            value: "{{ .Files.Get "executors.json" | b64enc }}"

This can be modified to pull from values so that users of private registries can still use the tool; see the sketch below.
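
A rough sketch of that modification, assuming a hypothetical executors value holding the JSON content; when set it would take precedence over the bundled executors.json:

          - name: TESTKUBE_DEFAULT_EXECUTORS
            {{- if .Values.executors }}
            value: "{{ .Values.executors | b64enc }}"
            {{- else }}
            value: "{{ .Files.Get "executors.json" | b64enc }}"
            {{- end }}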

Helm Chart not compatible with kustomize from version >1.7.63

Describe the bug
The Helm chart cannot be rendered with kustomize build for helm chart versions greater than 1.7.63.

To Reproduce
Steps to reproduce the behavior:

  1. Create kustomization.yaml file, e.g.
kind: Kustomization
apiVersion: kustomize.config.k8s.io/v1beta1

helmCharts:
- name: testkube
  releaseName: testkube
  repo: https://kubeshop.github.io/helm-charts
  2. In the directory of the kustomization.yaml, run kustomize build . --enable-helm
  3. See the error:
Error: map[string]interface {}(nil): yaml: unmarshal errors:
  line 12: mapping key "helm.sh/chart" already defined at line 8
  line 13: mapping key "app.kubernetes.io/managed-by" already defined at line 9

Expected behavior
Build goes through without errors

Version / Cluster

  • Which testkube version? Helm Chart >1.7.63
  • Version:kustomize/v4.5.7

Additional context
It is likely that labels of testkube-operator are responsible for this behaviour: a399e4e
Add version: 1.7.63 under helmCharts: to verify that this worked correctly in earlier versions (see the sketch below).
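
For reference, a kustomization.yaml pinning the last known-good version, as suggested above, would look roughly like this:

kind: Kustomization
apiVersion: kustomize.config.k8s.io/v1beta1

helmCharts:
- name: testkube
  releaseName: testkube
  repo: https://kubeshop.github.io/helm-charts
  version: 1.7.63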

Support private Slack channels

I configured Slack notifications for a private channel but I get no messages. It works for public channels.

Describe the solution you'd like
Whichever channel I select, messages should be sent.

Describe alternatives you've considered
None -- using public channels is an acceptable workaround, but it should work on private channels as well.

Hardcoded ingress annotations

In our cluster, ingress is supported only through the ingressClassName field, so we used the className option in values.yaml to configure the ingress. However, because the nginx kubernetes.io/ingress.class annotation is hardcoded in your chart, a deployment error occurs.

Config in my values.yaml:

testkube-api:
  uiIngress:
    enabled: true
    className: nginx
    hosts:
      - testkube-example.com

  cliIngress:
    enabled: true
    className: nginx
    hosts:
      - testkube-example.com

testkube-dashboard:
  enabled: true

  ingress:
    enabled: true
    className: nginx
    hosts:
    - testkube-example.com

After helm install:

$ kubectl -n testkube get ingress
No resources found in testkube namespace.

Errors from argocd logs:

time="2022-11-03T00:18:50Z" level=info msg="Apply failed" application=infra-digital-addons-testkube dryRun=false message="Ingress.extensions \"cli-testkube-api-server-testkube\" is invalid: annotations.kubernetes.io/ingress.class: Invalid value: \"nginx\": can not be set when the class field is also set" syncId=00410-kpgCE task="Sync/0 resource networking.k8s.io/Ingress:testkube/cli-testkube-api-server-testkube nil->obj (,,)"
time="2022-11-03T00:18:50Z" level=info msg="Adding resource result, status: 'SyncFailed', phase: 'Failed', message: 'Ingress.extensions \"cli-testkube-api-server-testkube\" is invalid: annotations.kubernetes.io/ingress.class: Invalid value: \"nginx\": can not be set when the class field is also set'" application=infra-digital-addons-testkube kind=Ingress name=cli-testkube-api-server-testkube namespace=testkube phase=Sync syncId=00410-kpgCE
time="2022-11-03T00:18:50Z" level=info msg="Apply failed" application=infra-digital-addons-testkube dryRun=false message="Ingress.extensions \"ui-testkube-api-server-testkube\" is invalid: annotations.kubernetes.io/ingress.class: Invalid value: \"nginx\": can not be set when the class field is also set" syncId=00410-kpgCE task="Sync/0 resource networking.k8s.io/Ingress:testkube/ui-testkube-api-server-testkube nil->obj (,,)"
time="2022-11-03T00:18:50Z" level=info msg="Adding resource result, status: 'SyncFailed', phase: 'Failed', message: 'Ingress.extensions \"ui-testkube-api-server-testkube\" is invalid: annotations.kubernetes.io/ingress.class: Invalid value: \"nginx\": can not be set when the class field is also set'" application=infra-digital-addons-testkube kind=Ingress name=ui-testkube-api-server-testkube namespace=testkube phase=Sync syncId=00410-kpgCE
time="2022-11-03T00:18:50Z" level=info msg="Apply failed" application=infra-digital-addons-testkube dryRun=false message="Ingress.extensions \"testkube-dashboard-testkube\" is invalid: annotations.kubernetes.io/ingress.class: Invalid value: \"nginx\": can not be set when the class field is also set" syncId=00410-kpgCE task="Sync/0 resource networking.k8s.io/Ingress:testkube/testkube-dashboard-testkube nil->obj (,,)"
time="2022-11-03T00:18:50Z" level=info msg="Adding resource result, status: 'SyncFailed', phase: 'Failed', message: 'Ingress.extensions \"testkube-dashboard-testkube\" is invalid: annotations.kubernetes.io/ingress.class: Invalid value: \"nginx\": can not be set when the class field is also set'" application=infra-digital-addons-testkube kind=Ingress name=testkube-dashboard-testkube namespace=testkube phase=Sync syncId=00410-kpgCE
time="2022-11-03T00:18:50Z" level=info msg="Updating operation state. phase: Running -> Failed, message: 'one or more tasks are running' -> 'one or more objects failed to apply, reason: Ingress.extensions \"cli-testkube-api-server-testkube\" is invalid: annotations.kubernetes.io/ingress.class: Invalid value: \"nginx\": can not be set when the class field is also set,Ingress.extensions \"testkube-dashboard-testkube\" is invalid: annotations.kubernetes.io/ingress.class: Invalid value: \"nginx\": can not be set when the class field is also set,Ingress.extensions \"ui-testkube-api-server-testkube\" is invalid: annotations.kubernetes.io/ingress.class: Invalid value: \"nginx\": can not be set when the class field is also set'" application=infra-digital-addons-testkube syncId=00410-kpgCE
time="2022-11-03T00:18:50Z" level=info msg="Partial sync operation to 22788023a5bbd37e62c672926a9f84c8eecefb00 failed: one or more objects failed to apply, reason: Ingress.extensions \"cli-testkube-api-server-testkube\" is invalid: annotations.kubernetes.io/ingress.class: Invalid value: \"nginx\": can not be set when the class field is also set,Ingress.extensions \"testkube-dashboard-testkube\" is invalid: annotations.kubernetes.io/ingress.class: Invalid value: \"nginx\": can not be set when the class field is also set,Ingress.extensions \"ui-testkube-api-server-testkube\" is invalid: annotations.kubernetes.io/ingress.class: Invalid value: \"nginx\": can not be set when the class field is also set" application=infra-digital-addons-testkube dest-namespace=testkube dest-server="https://kubernetes.default.svc" reason=OperationCompleted type=Warning

testkube-operator-manager-role not found if testkube is installed with release name different than "testkube"

Describe the bug
When I install testkube using helm and the release name is different from testkube (e.g. my-testkube), the clusterrolebinding my-testkube-operator-manager-rolebinding points to the wrong cluster role (the chart creates my-testkube-operator-manager-role, but the clusterrolebinding points to ClusterRole/testkube-operator-manager-role).

To Reproduce
Steps to reproduce the behavior:

  1. Install testkube using helm with different release name than "testkube"
  2. Check logs of testkube-operator-controller-manager pod
  3. See error
E0310 06:29:11.857408       1 leaderelection.go:330] error retrieving resource lock my/47f0dfc1.testkube.io: leases.coordination.k8s.io "47f0dfc1.testkube.io" is forbidden: User "system:serviceaccount:my:testkube-operator-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "my": RBAC: [clusterrole.rbac.authorization.k8s.io "testkube-operator-manager-role" not found, clusterrole.rbac.authorization.k8s.io "testkube-operator-proxy-role" not found, role.rbac.authorization.k8s.io "my-operator-leader-election-role" not found]

Expected behavior
Every clusterrolebinding is created with expected values

Version / Cluster

  • Which testkube version? -- 1.9.98
  • What Kubernetes cluster? (e.g. GKE, EKS, Openshift etc, local KinD, local Minikube) -- local
  • What Kubernetes version? -- GitVersion:"v1.23.16"

Additional context
Role template -- https://github.com/kubeshop/helm-charts/blob/main/charts/testkube-operator/templates/role.yaml#L7
Role binding template -- https://github.com/kubeshop/helm-charts/blob/main/charts/testkube-operator/templates/rolebinding.yaml#L18
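
A sketch of the kind of fix implied by the linked templates, assuming the rolebinding's roleRef should reference the release-prefixed role name via the same naming helper used to create it (the helper name here is illustrative):

roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ include "testkube-operator.fullname" . }}-manager-role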

Container port on testkube-dashboard pods not configurable

The container port on the testkube-dashboard pods cannot be changed, as the readiness/liveness probes will then fail.

The testkube-dashboard service is configured to use the http(80) as the target port for testkube-dashboard pods.

Thus I updated the values file to make the testkube-dashboard pods expose port 80, as given below.

testkube-dashboard:
  enabled: true
  service:
    type: ClusterIP
    port: 80

But the testkube-dashboard pods are not reaching a running state, as the liveness/readiness probes are failing. The liveness/readiness probe ports are not configurable, as shown here.

Is this a missing feature or am I missing something here?

Version / Cluster

  • testkube version : v1.8.95
  • Kubernetes cluster: EKS
  • Kubernetes version: v1.23

Screenshots
(screenshot attached)

webhook-cert-patch lacks node selector configurations

Describe the enhancement you'd like to see
"dpejcev/kube-webhook-certgen:1.0.6" does not support arm64, and in clusters with mixed architectures it is sometimes scheduled on arm64 nodes, where it fails with "exec /kube-webhook-certgen: exec format error".
Either upgrading kube-webhook-certgen to a more recent version that has multi-platform images or supporting a node selector would be nice :)
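
Until the chart exposes this, a sketch of the kind of node selector that would keep the certgen job off arm64 nodes, assuming the chart gained a way to pass it through to the webhook-cert-patch job (the exact values path does not exist today):

nodeSelector:
  kubernetes.io/arch: amd64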

Testkube operator's cronjob template labels/annotations cannot be modified

Is your feature request related to a problem? Please describe.
We are using testkube on a cluster where a service mesh is enabled, so an istio-proxy container is added to each pod. Currently there is no option to disable creation of the istio-proxy on the cronjob (without editing the cronjob template and releasing a new helm chart) by adding a specific label to its template based on a flag in values.

Describe the solution you'd like
I'd like to modify the cronjob's template labels based on a value in the helm chart; see the sketch below.

Describe alternatives you've considered
If we were able to overwrite the cronjob somehow (from the helm chart level, not by editing the operator deployment) so that our own cronjob template could be used -- that could also be a solution.
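
A sketch of the values shape being asked for, assuming a hypothetical cronJobTemplateLabels key that the operator would render into the cronjob's pod template:

testkube-operator:
  cronJobTemplateLabels:
    sidecar.istio.io/inject: "false"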

Testkube dashboard can't get the testkube api properly

Describe the bug
The ingress is not picking up the correct URL for the testkube api when the default values are changed.

To Reproduce

  1. Downloaded the values.yml from https://github.com/kubeshop/helm-charts/blob/main/charts/testkube/values.yaml
  2. Updated and added the following fields:
testkube-api:
#...
  uiIngress:
    enabled: true
    className: "nginx"
    path: /results/(v\d/.*)
    hosts:
      - api.myhost.com
#..
testkube-dashboard:
  ingress:
    enabled: true
    className: "nginx"
    hosts:
      - testkube.myhost.com
  apiServerEndpoint: "api.myhost.com"

And upgraded the helm release: helm upgrade testkube kubeshop/testkube -f .\values-prod.yml
When I visit testkube.myhost.com I get a prompt to enter the testkube api url:
(screenshot attached)

Expected behavior
Expecting the configuration to work in the first place, without entering the api url in the prompt.

Version / Cluster

  • Which testkube version?
    Client Version 1.9.24
    Server Version v1.9.32
  • What Kubernetes cluster? openshift
  • What Kubernetes version?1.24.8

Additional context
But when I change uiIngress.path from /results/(v\d/.*) to the value /, it works!

Is the default configuration in values.yml correct? I'm trying to understand what's happening under the hood.

Thank you.

MongoDB PreUpgradeHook does not take values override

Describe the bug
When we want to override the image registry for the mongodb upgrade "preUpgradeHook" using a value other than the global one, the value is not taken into consideration when deploying the helm chart.

To Reproduce
Steps to reproduce the behavior:

  1. Create a values.yaml file to override registry values
  2. Set preUpgradeHook with a different value than [global][imageRegistry]
  3. Deploy the Helm Chart
  4. See that the Job is taking the value from the global imageRegistry

Expected behavior
The expectation is that the value set under preUpgradeHook is taken into consideration, in the same way as it happens for the "testkube-operator" preUpgrade.

Version / Cluster

  • Which testkube version? 1.11.15
  • What Kubernetes cluster? EKS
  • What Kubernetes version? 1.23

Screenshots
If applicable, add CLI commands/output to help explain your problem.

Additional context

values-extra.yaml:

global:
  imageRegistry: remote-docker.artifactory.xxxxxxxxx.com
  annotations:
    app: testkube

preUpgradeHook:
  image:
    registry: k8s-gcr-docker-remote.artifactory.xxxxxxxxxx.com

testkube-operator:
  preUpgrade:
    image:
      registry: k8s-gcr-docker-remote.artifactory.xxxxxxxxxx.com

By changing this line of code under "/testkube/templates/pre-upgrade" it picks up the value from the override (would be the same as under the testkube operator pre-upgrade):

from:
image: {{ include "global.images.image" (dict "imageRoot" .Values.preUpgradeHook.image "global" .Values.global) }}

to:
image: {{ include "global.images.image" (dict "imageRoot" .Values.preUpgradeHook.image) }}

Local values are not taken at all if global is set

Describe the bug
With commit 6f69e00 I observed a regression in our testkube instance. We use only local registries on our clusters (as we don't have internet access there), and for each kind of registry (docker.io, gcr.io) we have a separate URL. As docker.io is the most common one, we used that value as the global one, and for every other registry we overrode it using "local" variables. As far as I understand helm values, local values should override global ones, but right now that is not the case. We observed this on the operator-pre-upgrade job.

To Reproduce
Steps to reproduce the behavior:

  1. Run helm template . --set="global.imageRegistry=test"
  2. Check that all images use the "test" registry

Expected behavior
If local variables are set, the default/global variable should be overridden (see the sketch below).

Version / Cluster

  • Which testkube version? 1.12.9
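
As a hedged sketch of the expected precedence (key names follow the common global/local image convention and may differ from the actual chart):

global:
  imageRegistry: docker-local.example.com     # fallback used by most images

testkube-operator:
  preUpgrade:
    image:
      registry: gcr-local.example.com         # local value that should win over the global one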

Manage job-template.yaml from values.yaml

Describe the bug
It seems the .Values.configValues parameter breaks job-template.yaml. Tests can't be run after the update.

To Reproduce
Steps to reproduce the behavior:

  1. Copy content from job-template.yaml and paste to .Values.configValues like:
    .Values.configValues: | <here content from the file>
    Or use example from values.yaml
  2. Try to run test

Expected behavior
Ability to configure job-template.yaml from values.yaml for adding any job or job container spec. Or the ability to somehow load one's own job-template from one's own chart using testkube as a dependency chart.

Version / Cluster

  • Version 1.13.11
  • GKE
  • 1.24

Related to: https://discord.com/channels/884464549347074049/885185660808474664/1130494644573175858

monitoringLabels do not get set for service monitor

Describe the bug
Custom labels set via testkube chart values cannot be found in the service monitor.

To Reproduce
Steps to reproduce the behavior:

  1. Set custom values for testkube chart:
testkube-api:
  prometheus:
    enabled: true
  monitoringLabels:
    - release: "my-release"
  2. Install testkube

Expected behavior
Service monitor contains label metadata.labels.release = "my-release"

Version / Cluster

  • Which testkube version? 1.8.13

Pods with error status after deploying testkube with kustomize

Describe the bug
After deploying the helm chart with kustomize, multiple pods fail. This does not seem to impact the functionality of testkube, but the failing pods appear to include test pods, and the failures are irritating. This might not be limited to deploying with kustomize.

To Reproduce
Steps to reproduce the behavior:

  1. Create a kustomization yaml file:
---
kind: Kustomization
apiVersion: kustomize.config.k8s.io/v1beta1
namespace: default

helmCharts:
- name: testkube
  namespace: default
  releaseName: testkube
  repo: https://kubeshop.github.io/helm-charts
  version: 1.11.220
  2. In the directory of the yaml file, run kustomize build --enable-helm . | kubectl apply -f -
  3. Multiple pods throw errors:
  • testkube-api-server-test-connection: wget: can't connect to remote host (xx.xx.xxx.xx): Connection refused
  • testkube-operator-test-connection: wget: bad address 'testkube-operator:80'
  • webhook-cert-patch: Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. Error: no secret with 'webhook-server-cert' in 'default' {"err":"secrets \"webhook-server-cert\" not found","level":"info","msg":"secret default/webhook-server-cert does not exist","source":"k8s/k8s.go:348","time":"2023-05-26T08:44:22Z"}

Expected behavior
No failing pods on first install

Version / Cluster

  • Which testkube version? 1.11.220
  • What Kubernetes cluster? colima
  • What Kubernetes version? v1.27.1+k3s1

Add labels from values.yaml global.labels to job template

We were able to deploy testkube successfully, but we cannot run any tests due to a label admission hook that requires everything running inside the cluster to have labels added. Can we add labels to the testkube job execution? Right now I see errors like:

failed to execute test: execution failed: test execution failed: retry count exceeeded, there are no active pods with given id=6490134ee57f8ed2b1050030","status":500

My guess would be that no labels are added to the k8s job. As a hint, I think labels need to be added in the template files here: https://github.com/kubeshop/helm-charts/tree/develop/charts/testkube-api (see the sketch below).
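
For illustration, the kind of values that would need to reach the execution job, assuming global.labels is the intended mechanism (whether the job templates actually render it is exactly what this issue asks for); the label key is a placeholder for whatever the admission hook requires:

global:
  labels:
    my-admission-policy/team: qa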

testkube-api-server cannot find host testkube-nats

Describe the bug
The helm chart does not run properly when installed; there is an error in the testkube-api-server pod.

To Reproduce
Steps to reproduce the behavior:

  1. Install helm chart (in this was done with argoCD)
  2. See the error: "caller":"bus/nats.go:29","msg":"error connecting to nats","error":"dial tcp: lookup testkube-nats on [redacted]: no such host","stacktrace":"github.com/kubeshop/testkube/pkg/event/bus.NewNATSConnection\n\t/build/pkg/event/bus/nats.go:29\nmain.main\n\t/build/cmd/api-server/main.go:198\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}
  3. Name of nats resources is release-name-nats

Expected behavior
All containers run properly, with no placeholder nats name.

Version / Cluster

  • Which testkube version? latest
  • What Kubernetes cluster? (e.g. GKE, EKS, Openshift etc, local KinD, local Minikube)
  • What Kubernetes version?

Additional context
There is a possible workaround: set testkube-api-nats-uri="nats://release-name-nats"
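
Expressed as chart values, the workaround mentioned above would look roughly like this; the exact key path under the testkube-api sub-chart is an assumption:

testkube-api:
  nats:
    uri: "nats://release-name-nats"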

SecurityContext of testkube dependencies

Describe the enhancement you'd like to see
We use Kyverno for policy enforcement. When deploying testkube we see that the pods from the dependencies run without setting allowPrivilegeEscalation and without providing an override path.

It would be great to either offer a way to override these or set allowPrivilegeEscalation: false as the default option (unless there is a valid use case otherwise); see the sketch after the list below.

Dependencies that were flagged:

  • testkube-operator-controller-manager
  • testkube-mongodb
  • testkube-minio-testkube
  • testkube-api-server
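
A sketch of the kind of override being requested, assuming each flagged dependency exposed a container security context value (which is not guaranteed today):

containerSecurityContext:
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  capabilities:
    drop:
      - ALL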

Readiness probe failed for the testkube-api-server pods

What is going wrong
When trying to install the testkube helm chart v1.8.95 in my EKS cluster, the testkube-api-server pods do not start, as the readiness and liveness probes are failing.

To Reproduce
I installed the helm chart by the following commands.

helm dependency build
helm install -f values-sreiously.yaml testkube ./charts/testkube-1.8.95.tgz --debug -n testkube-ashish

(Attachment: Charts.zip -- https://github.com/kubeshop/helm-charts/files/10570105/Charts.zip)

Expected behavior
I expect that these commands should bring the necessary pods to a running state.

Version / Cluster

  • testkube version: v1.8.95
  • EKS cluster
  • EKS kubernetes version: 1.23

Screenshots

(screenshot attached)

Additional context
I even tried to edit the deployment that creates these pods and still the pods are not coming up into a running state.

The files that I used are attached.

Image for pre-upgrade Job can't be overwritten

Image for pre-upgrade Job can't be overwritten using --set Helm flag (or any other way)

To Reproduce
Steps to reproduce the behavior:

  1. Run helm install in an environment where the Docker registry is not available
  2. The mongodb-upgrade job can't complete due to an ImagePullBackOff error

Expected behavior
I can overwrite that value using the helm --set flag.

Version / Cluster

  • 1.9.92
  • Local kubernetes

Additional context
https://github.com/kubeshop/helm-charts/blob/main/charts/testkube/templates/pre-upgrade.yaml#L26

What version are we installing?

It would be nice to have here in this repo, for every release of the Testkube Helm chart, the versions of the other helm charts it also installs.

I'd like to see easily (it could be a description on the release) the versions of the dashboard, operator-controller-manager and api-server I'm installing when I pick testkube 1.9, for instance.

MultiNamespace

How can I select a namespace for each test/testsuite from the dashboard, or even create tests via CLI tools (kubectl / testkube)?

testkube-operator v1.13.0 artifacts are not the same between release bundles.

Describe the bug
Installing testkube-operator v1.13.0 from helm does not include the up to date testsuite resource.

To Reproduce
Steps to reproduce the behavior:

  1. Fetch the current releases via curl
curl -L https://github.com/kubeshop/helm-charts/releases/download/testkube-1.13.15/testkube-1.13.15.tgz -o testkube-1.13.15.tgz
tar xvfz testkube-1.13.15.tgz
curl -L https://github.com/kubeshop/helm-charts/releases/download/testkube-operator-1.13.0/testkube-operator-1.13.0.tgz -o testkube-operator-1.13.0.tgz
tar xvfz testkube-operator-1.13.0.tgz
  2. Compare the testkube-operator manifests
diff testkube/charts/testkube-operator/Chart.yaml testkube-operator/Chart.yaml

everything looks good, right?

  3. Compare the testkube-operator artifacts
diff -r testkube-operator testkube/charts/testkube-operator

and here we are explaining to our bosses where all the time went today.

Expected behavior
A manifest / Chart.yaml is a BOM / bill of materials; we would expect the artifacts to be identical whether or not they are sub-charts.

Screenshots

diff -r testkube-operator/Chart.lock testkube/charts/testkube-operator/Chart.lock
6c6
< generated: "2023-07-07T09:45:22.604201587Z"
---
> generated: "2023-07-25T13:52:21.978016984Z"
diff -r testkube-operator/templates/deployment.yaml testkube/charts/testkube-operator/templates/deployment.yaml
25a26,32
>         {{- if .Values.global.labels }}
>         {{- include "global.tplvalues.render" ( dict "value" .Values.global.labels "context" $ ) | nindent 8 }}
>         {{- end }}
>         {{- with .Values.podAnnotations }}
>       annotations:
>         {{- toYaml . | nindent 8 }}
>       {{- end }}
diff -r testkube-operator/templates/executor.testkube.io_webhooks.yaml testkube/charts/testkube-operator/templates/executor.testkube.io_webhooks.yaml
53a54,58
>               headers:
>                 additionalProperties:
>                   type: string
>                 description: webhook headers
>                 type: object
56a62,64
>                 type: string
>               payloadTemplate:
>                 description: golang based template for notification payload
diff -r testkube-operator/templates/pre-upgrade-sa.yaml testkube/charts/testkube-operator/templates/pre-upgrade-sa.yaml
13d12
< ---
14a14
> ---
29d28
< ---
30a30
> ---
diff -r testkube-operator/templates/role.yaml testkube/charts/testkube-operator/templates/role.yaml
324,325d323
< ---
< 
326a325
> ---
373c372
< {{- end -}}
\ No newline at end of file
---
> {{- end -}}
diff -r testkube-operator/templates/rolebinding.yaml testkube/charts/testkube-operator/templates/rolebinding.yaml
94,95d93
< ---
< 
96a95
> ---
124,125c123
<   {{- end -}}
< 
---
> {{- end -}}
diff -r testkube-operator/templates/serviceaccount.yaml testkube/charts/testkube-operator/templates/serviceaccount.yaml
20d19
< ---
22a22
> ---
44c44
<     {{- end }}
---
> {{- end }}
diff -r testkube-operator/templates/tests.testkube.io_tests.yaml testkube/charts/testkube-operator/templates/tests.testkube.io_tests.yaml
391,393d390
<                             namespace:
<                               description: object kubernetes namespace
<                               type: string
418,420d414
<                             namespace:
<                               description: object kubernetes namespace
<                               type: string
431c425,431
<                       description: test type
---
>                       description: |
>                         type of sources a runner can get data from.
>                             string: String content (e.g. Postman JSON file).
>                             file-uri: content stored on the webserver.
>                             git-file: the file stored in the Git repo in the given repository.path field.
>                             git-dir: the entire git repo or git subdirectory depending on the  repository.path field (Testkube does a shadow clone and sparse checkout to limit IOs in the case of monorepos)
>                             git: automatically provisions either a file, directory or whole git repository depending on the repository.path field
592a593,595
>                     postRunScript:
>                       description: script to run after test execution
>                       type: string
595c598
<                       type: string 
---
>                       type: string
diff -r testkube-operator/templates/tests.testkube.io_testsources.yaml testkube/charts/testkube-operator/templates/tests.testkube.io_testsources.yaml
70,72d69
<                       namespace:
<                         description: object kubernetes namespace
<                         type: string
96,98d92
<                         type: string
<                       namespace:
<                         description: object kubernetes namespace
diff -r testkube-operator/templates/tests.testkube.io_testsuites.yaml testkube/charts/testkube-operator/templates/tests.testkube.io_testsuites.yaml
544a545,814
>       storage: false
>       subresources:
>         status: {}
>     - name: v3
>       schema:
>         openAPIV3Schema:
>           description: TestSuite is the Schema for the testsuites API
>           properties:
>             apiVersion:
>               description: 'APIVersion defines the versioned schema of this representation
>                 of an object. Servers should convert recognized schemas to the latest
>                 internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
>               type: string
>             kind:
>               description: 'Kind is a string value representing the REST resource this
>                 object represents. Servers may infer this from the endpoint the client
>                 submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
>               type: string
>             metadata:
>               type: object
>             spec:
>               description: TestSuiteSpec defines the desired state of TestSuite
>               properties:
>                 after:
>                   description: After batch steps is list of batch tests which will be
>                     sequentially orchestrated for parallel tests in each batch
>                   items:
>                     description: set of steps run in parallel
>                     properties:
>                       execute:
>                         items:
>                           description: TestSuiteStepSpec for particular type will have
>                             config for possible step types
>                           properties:
>                             delay:
>                               description: delay duration in time units
>                               format: duration
>                               type: string
>                             test:
>                               description: object name
>                               type: string
>                           type: object
>                         type: array
>                       stopOnFailure:
>                         type: boolean
>                     required:
>                     - stopOnFailure
>                     type: object
>                   type: array
>                 before:
>                   description: Before batch steps is list of batch tests which will
>                     be sequentially orchestrated for parallel tests in each batch
>                   items:
>                     description: set of steps run in parallel
>                     properties:
>                       execute:
>                         items:
>                           description: TestSuiteStepSpec for particular type will have
>                             config for possible step types
>                           properties:
>                             delay:
>                               description: delay duration in time units
>                               format: duration
>                               type: string
>                             test:
>                               description: object name
>                               type: string
>                           type: object
>                         type: array
>                       stopOnFailure:
>                         type: boolean
>                     required:
>                     - stopOnFailure
>                     type: object
>                   type: array
>                 description:
>                   type: string
>                 executionRequest:
>                   description: test suite execution request body
>                   properties:
>                     cronJobTemplate:
>                       description: cron job template extensions
>                       type: string
>                     executionLabels:
>                       additionalProperties:
>                         type: string
>                       description: execution labels
>                       type: object
>                     httpProxy:
>                       description: http proxy for executor containers
>                       type: string
>                     httpsProxy:
>                       description: https proxy for executor containers
>                       type: string
>                     labels:
>                       additionalProperties:
>                         type: string
>                       description: test suite labels
>                       type: object
>                     name:
>                       description: test execution custom name
>                       type: string
>                     namespace:
>                       description: test kubernetes namespace (\"testkube\" when not
>                         set)
>                       type: string
>                     secretUUID:
>                       description: secret uuid
>                       type: string
>                     sync:
>                       description: whether to start execution sync or async
>                       type: boolean
>                     timeout:
>                       description: timeout for test suite execution
>                       format: int32
>                       type: integer
>                     variables:
>                       additionalProperties:
>                         properties:
>                           name:
>                             description: variable name
>                             type: string
>                           type:
>                             description: variable type
>                             type: string
>                           value:
>                             description: variable string value
>                             type: string
>                           valueFrom:
>                             description: or load it from var source
>                             properties:
>                               configMapKeyRef:
>                                 description: Selects a key of a ConfigMap.
>                                 properties:
>                                   key:
>                                     description: The key to select.
>                                     type: string
>                                   name:
>                                     description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
>                                       TODO: Add other useful fields. apiVersion, kind,
>                                       uid?'
>                                     type: string
>                                   optional:
>                                     description: Specify whether the ConfigMap or its
>                                       key must be defined
>                                     type: boolean
>                                 required:
>                                 - key
>                                 type: object
>                               fieldRef:
>                                 description: 'Selects a field of the pod: supports metadata.name,
>                                   metadata.namespace, `metadata.labels[''<KEY>'']`,
>                                   `metadata.annotations[''<KEY>'']`, spec.nodeName,
>                                   spec.serviceAccountName, status.hostIP, status.podIP,
>                                   status.podIPs.'
>                                 properties:
>                                   apiVersion:
>                                     description: Version of the schema the FieldPath
>                                       is written in terms of, defaults to "v1".
>                                     type: string
>                                   fieldPath:
>                                     description: Path of the field to select in the
>                                       specified API version.
>                                     type: string
>                                 required:
>                                 - fieldPath
>                                 type: object
>                               resourceFieldRef:
>                                 description: 'Selects a resource of the container: only
>                                   resources limits and requests (limits.cpu, limits.memory,
>                                   limits.ephemeral-storage, requests.cpu, requests.memory
>                                   and requests.ephemeral-storage) are currently supported.'
>                                 properties:
>                                   containerName:
>                                     description: 'Container name: required for volumes,
>                                       optional for env vars'
>                                     type: string
>                                   divisor:
>                                     anyOf:
>                                     - type: integer
>                                     - type: string
>                                     description: Specifies the output format of the
>                                       exposed resources, defaults to "1"
>                                     pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
>                                     x-kubernetes-int-or-string: true
>                                   resource:
>                                     description: 'Required: resource to select'
>                                     type: string
>                                 required:
>                                 - resource
>                                 type: object
>                               secretKeyRef:
>                                 description: Selects a key of a secret in the pod's
>                                   namespace
>                                 properties:
>                                   key:
>                                     description: The key of the secret to select from.  Must
>                                       be a valid secret key.
>                                     type: string
>                                   name:
>                                     description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
>                                       TODO: Add other useful fields. apiVersion, kind,
>                                       uid?'
>                                     type: string
>                                   optional:
>                                     description: Specify whether the Secret or its key
>                                       must be defined
>                                     type: boolean
>                                 required:
>                                 - key
>                                 type: object
>                             type: object
>                         type: object
>                       type: object
>                   type: object
>                 repeats:
>                   type: integer
>                 schedule:
>                   description: schedule in cron job format for scheduled test execution
>                   type: string
>                 steps:
>                   description: Batch steps is list of batch tests which will be sequentially
>                     orchestrated for parallel tests in each batch
>                   items:
>                     description: set of steps run in parallel
>                     properties:
>                       execute:
>                         items:
>                           description: TestSuiteStepSpec for particular type will have
>                             config for possible step types
>                           properties:
>                             delay:
>                               description: delay duration in time units
>                               format: duration
>                               type: string
>                             test:
>                               description: object name
>                               type: string
>                           type: object
>                         type: array
>                       stopOnFailure:
>                         type: boolean
>                     required:
>                     - stopOnFailure
>                     type: object
>                   type: array
>               type: object
>             status:
>               description: TestSuiteStatus defines the observed state of TestSuite
>               properties:
>                 latestExecution:
>                   description: latest execution result
>                   properties:
>                     endTime:
>                       description: test suite execution end time
>                       format: date-time
>                       type: string
>                     id:
>                       description: execution id
>                       type: string
>                     startTime:
>                       description: test suite execution start time
>                       format: date-time
>                       type: string
>                     status:
>                       type: string
>                   type: object
>               type: object
>           type: object
>       served: true
diff -r testkube-operator/templates/tests.testkube.io_testtriggers.yaml testkube/charts/testkube-operator/templates/tests.testkube.io_testtriggers.yaml
89a90,94
>                     delay:
>                       description: duration in seconds the test trigger waits between
>                         condition check
>                       format: int32
>                       type: integer
118a124,166
>                 probeSpec:
>                   description: What resource probes should be matched
>                   properties:
>                     delay:
>                       description: duration in seconds the test trigger waits between
>                         probes
>                       format: int32
>                       type: integer
>                     probes:
>                       description: list of test trigger probes
>                       items:
>                         description: TestTriggerProbe is used for definition of the
>                           probe for test triggers
>                         properties:
>                           headers:
>                             additionalProperties:
>                               type: string
>                             description: test trigger condition probe headers to submit
>                             type: object
>                           host:
>                             description: test trigger condition probe host, default
>                               is pod ip or service name
>                             type: string
>                           path:
>                             description: test trigger condition probe path to check,
>                               default is /
>                             type: string
>                           port:
>                             description: test trigger condition probe port to connect
>                             format: int32
>                             type: integer
>                           scheme:
>                             description: test trigger condition probe scheme to connect
>                               to host, default is http
>                             type: string
>                         type: object
>                       type: array
>                     timeout:
>                       description: duration in seconds the test trigger waits for probes,
>                         until its stopped
>                       format: int32
>                       type: integer
>                   type: object                  
diff -r testkube-operator/values.yaml testkube/charts/testkube-operator/values.yaml
5a6
> ## Important! Please, note that this will override sub-chart image parameters.
26a28,30
> ## Additional pod annotations to Testkube Operator pod
> podAnnotations: {}
> 
290c294
<     registry: k8s.gcr.io
---
>     registry: registry.k8s.io

Additional context
Test your tests, and please run your release script against the testkube-operator so I don't have to waste another day fixing this.

STORAGE_ACCESSKEYID and STORAGE_SECRETACCESSKEY from Secret

Describe the enhancement you'd like to see
Currently, there is no way to provide the storage access-key-id and secret-access-key from the secret kind. Having these values not come from a secret introduces a security challenge in my workflow. I use Flux Helm Release to manage Testkube installation and store the manifest in a Git repo. Anyone having access to the repo will be able to see the values.

It'd be great if the helm chart loaded these values from a secret when one is provided.

I can take a stab and submit a pull request.

- name: "STORAGE_ACCESSKEYID"
value: "{{ .Values.storage.accessKeyId }}"

- name: "STORAGE_SECRETACCESSKEY"
value: "{{ .Values.storage.accessKey }}"

Setting "allowTLS: true" fails the helm install

Describe the bug
Using Amazon DocumentDB and setting allowTLS: true results in the following error

Helm install failed: Deployment in version "v1" cannot be handled as a Deployment: json: cannot unmarshal bool into Go struct field EnvVar.spec.template.spec.containers.env.value of type string Last Helm

To Reproduce
Set values

testkube-api:
  mongodb:
    dsn: "<dsn>"
    allowDiskUse: false
    dbType: docdb
    allowTLS: true

Expected behavior
The helm chart should install without any errors

Version / Cluster

  • Which testkube version? 1.12.5
  • Which Helm Chart version? 1.12.1
  • What Kubernetes cluster? AWS EKS
  • What Kubernetes version? 1.23

Additional context
The helm chart installs if allowTLS is not set.

testkube-api:
  mongodb:
    dsn: "<dsn>"
    allowDiskUse: false
    dbType: docdb
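
A likely cause is that the boolean is rendered directly into an env value, which Kubernetes requires to be a string. A minimal sketch of a template-side fix (an assumption about the chart internals, not a verified patch; the env var name below is illustrative):

- name: "ALLOW_TLS"
  value: "{{ .Values.mongodb.allowTLS }}"

As a user-side workaround, passing the value as a quoted string in values.yaml (allowTLS: "true") may also avoid the unmarshal error, depending on how the template consumes it.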

Service Monitor validation fails

Describe the bug
The built-in ServiceMonitor YAML is incorrect.

To Reproduce
Steps to reproduce the behavior:

  1. Enable built in service monitor
  2. Try to deploy
  3. See the error: error validating data: ValidationError(ServiceMonitor.spec): unknown field "port" in com.coreos.monitoring.v1.ServiceMonitor.spec

Expected behavior
Deployment works

Additional context
See the ServiceMonitor API: port belongs under endpoints, so more indentation is needed.
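
For reference, a minimal sketch of a valid spec with port nested under endpoints, as the ServiceMonitor API expects (metadata and selector labels are illustrative):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: testkube-api-server
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: api-server
  endpoints:
    - port: http
      path: /metrics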

HorizontalPodAutoscaler (HPA) resource does not work for newer kubernetes versions

Describe the bug
When attempting to apply the helm chart with the following configurations:

autoscaling:
    enabled: true
    minReplicas: ${apiserver_configs.min_replicas}
    maxReplicas: ${apiserver_configs.max_replicas}
    targetCPUUtilizationPercentage: ${apiserver_configs.target_cpu_utilization}
    targetCPU: ${apiserver_configs.target_cpu_utilization}
    targetMemoryUtilizationPercentage: ${apiserver_configs.target_memory_utilization}

for both testkube-api or testkube-dashboard, the apply will fail with the following error:

Error: error validating "": error validating data: [ValidationError(HorizontalPodAutoscaler.spec.metrics[1].resource): unknown
field "targetAverageUtilization" in io.k8s.api.autoscaling.v2.ResourceMetricSource,
ValidationError(HorizontalPodAutoscaler.spec.metrics[1].resource): missing required field "target" in
io.k8s.api.autoscaling.v2.ResourceMetricSource]

I'm running Kubernetes 1.23. I believe the problem is that Kubernetes deprecated the autoscaling/v2beta1 apiVersion for HPA as of 1.22, so newer versions need autoscaling/v2 instead, which has slightly different syntax (see the example PR in the Additional Context section).

To Reproduce
Steps to reproduce the behavior:

  1. Have a cluster with server version >= 1.23
  2. Turn autoscaling configuration on
  3. install/upgrade helm with the autoscaling configuration
  4. see the error

Expected behavior
The apply should succeed with the HPA resources created for any kubernetes server version

Version / Cluster

  • Which testkube version? latest helm chart version 1.11.126
  • What Kubernetes cluster? EKS
  • What Kubernetes version? 1.23

Additional Context
Found an example PR with similar issue and fix: https://github.com/bitnami/charts/pull/10061/files
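
For comparison, a minimal sketch of the autoscaling/v2 syntax the template would need to emit (names and values are illustrative):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: testkube-api-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: testkube-api-server
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80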

testkube oauth2 proxy support for custom tls certs

Describe the enhancement you'd like to see

As a Testkube operator
in order to use oauth2 proxy with a self-signed OIDC endpoint
I need the testkube oauth2 helm chart to support mounting such custom certs through an extra volume + volume mount, similar to https://github.com/bitnami/charts/blob/4f55b58df012d9bbad764182a6f3c415b36b9767/bitnami/oauth2-proxy/values.yaml#L568-L571

workaround

Post-processing the helm chart rendering: with Flux, my team uses the following to mount custom TLS certs from the host

  postRenderers:
    # Mount host ssl certs for the corporate FQDN used
    - kustomize:
        patchesJson6902:
          - target:
              kind: Deployment
              name: oauth2-proxy
            patch:
              - op: add
                path: /spec/template/spec/volumes
                value: []
              - op: add
                path: /spec/template/spec/volumes/-
                value:
                  name: cert-volume
                  hostPath:
                    path: /etc/ssl/certs
                    type: Directory
              - op: add
                path: /spec/template/spec/containers/0/volumeMounts
                value: []
              - op: add
                path: /spec/template/spec/containers/0/volumeMounts/-
                value:
                  mountPath: /etc/ssl/certs
                  name: cert-volume
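
A sketch of what first-class chart support could look like, mirroring the bitnami oauth2-proxy values linked above (the extraVolumes / extraVolumeMounts keys are a proposal, not options the chart currently exposes):

oauth2:
  extraVolumes:
    - name: cert-volume
      hostPath:
        path: /etc/ssl/certs
        type: Directory
  extraVolumeMounts:
    - name: cert-volume
      mountPath: /etc/ssl/certs
      readOnly: true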

Additional context
Add any other context, CLI input/output, etc. about the enhancement here.

Allow image Docker registry redefining for the entire package

Why should it be implemented?

In many enterprise environments, image downloads are allowed from internal registries only, due to security measures. Currently it's impossible to redefine image registries for all the charts; for example, the nats-io chart lacks this option (nats-io/k8s#378).

Describe the improvement

Ideally there should be a global.imageRegistry parameter to define the registry across all the charts, but at least a working image.registry parameter for every chart would help. Alternatively, all external charts could be vendored and installed from local folders rather than from remote repositories.
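
A sketch of the desired values layout (the keys below follow the common bitnami-style convention and are a proposal, not options the chart currently exposes):

global:
  imageRegistry: registry.internal.example.com

testkube-api:
  image:
    registry: registry.internal.example.com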

Additional context
Add any other context about the problem here if needed.

Testkube-api-server Liveness/Readiness Fails

NAME                                                    READY   STATUS    RESTARTS        AGE
testkube-api-server-86dc8dc8b4-97v2b                    0/1     Running   7 (5m57s ago)   17m
testkube-dashboard-59497c8f8d-9l92c                     1/1     Running   0               17m
testkube-minio-testkube-7dddff9494-9qfh4                0/1     Pending   0               17m
testkube-mongodb-6475c59c5f-l9f8j                       0/1     Pending   0               17m
testkube-nats-0                                         3/3     Running   0               17m
testkube-nats-box-5789c4b55f-vxl7h                      1/1     Running   0               17m
testkube-operator-controller-manager-5fff8d9888-vv8s2   2/2     Running   0               17m

When I run kubectl describe pod testkube-api-server, it shows liveness/readiness issues.
Will minio and mongodb both being in Pending status cause the testkube-api-server /health endpoint to malfunction?

No options for adding extra env variables

I think an option is needed to add extra environment variables, for example when you need to add proxy env vars.

Example from bitnami charts:

## @param extraEnvVars Extra environment variables to add
## E.g:
## extraEnvVars:
##   - name: FOO
##     value: BAR
##
extraEnvVars: []
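
A minimal sketch of how the deployment template could consume such a value (a standard Helm pattern, shown here as a proposal; the nindent depth depends on where the block sits in the template):

        env:
          {{- with .Values.extraEnvVars }}
          {{- toYaml . | nindent 10 }}
          {{- end }}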

Allow modification of readiness and liveness probe parameters of the testkube-api deployment

Is your feature request related to a problem? Please describe.
I tried installing testkube using the helm charts. When starting the application, I see that requests towards the Kubernetes control plane in my cluster are being throttled. After a while, the testkube-api app starts running, but is then terminated externally with a termination signal.

Describe the solution you'd like
I suspect that the readiness and liveness probes are set too aggressively. To test this I would like to change the readiness and liveness probes for the testkube-api deployment. The helm chart currently does not offer any option to do that.
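
A sketch of values that could make the probes tunable (hypothetical keys, mirroring what other charts commonly expose, not current chart options):

testkube-api:
  livenessProbe:
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 10
  readinessProbe:
    initialDelaySeconds: 15
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 10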

Describe alternatives you've considered
You can change the deployment manually after installing it through helm, but that's not a good way.

Additional context
(screenshot: testkube-api-startup-error)

minio component in chart does not have storageclass value

Describe the bug
It is not possible to specify the storage class for the minio component.

To Reproduce
Steps to reproduce the behavior:

  1. helm install

Expected behavior
Being able to specify storage class
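
A sketch of the desired values (the storageClassName key is a proposal for illustration, not a verified chart option):

testkube-api:
  minio:
    storageClassName: gp2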

Version / Cluster
NA

Screenshots
NA

Additional context
NA

Namespace preference not working for testkube

Describe the bug
When I set a namespace preference with kubectl config, testkube requests made without specifying --namespace do not work.

To Reproduce
Steps to reproduce the behavior:

  1. Install helm chart with kustomize values (https://json.schemastore.org/kustomization.json), use a custom namespace:
helmCharts:
  - name: testkube
    namespace: <insert-namespace-name-here>
    releaseName: testkube
    repo: https://kubeshop.github.io/helm-charts
  2. Run kubectl config set-context --current --namespace=<insert-namespace-name-here>
  3. Run kubectl config view --minify | grep namespace: to verify namespace was set
  4. Run kubectl testkube get tests --verbose
  5. See error
api/GET-testkube.ServerInfo returned error: api server response: '{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"services \"testkube-api-server\" not found","reason":"NotFound","details":{"name":"testkube-api-server","kind":"services"},"code":404}
'
error: services "testkube-api-server" not found

getting all test with execution summaries in namespace testkube (error: api/GET-[]testkube.TestWithExecutionSummary returned error: api server response: '{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"services \"testkube-api-server\" not found","reason":"NotFound","details":{"name":"testkube-api-server","kind":"services"},"code":404}
'
error: services "testkube-api-server" not found)

Expected behavior
Same output as when using kubectl testkube get tests --namespace <insert-namespace-name-here>

Version / Cluster

  • Which testkube version? 1.7.17

Slack config causes template error

Describe the bug
Defining slackConfig causes the testkube-api-server deployment to fail to deploy due to trying to parse the value of the config as base64.

To Reproduce
Steps to reproduce the behavior:

Set values file for testkube-api-server to:

slackConfig:
    - ChannelID: ""
      selector: {}
      testName: []
      testSuiteName: []
      events:
        - "start-test"
        - "end-test-success"
        - "end-test-failed"
        - "end-test-aborted"
        - "end-test-timeout"
        - "start-testsuite"
        - "end-testsuite-success"
        - "end-testsuite-failed"
        - "end-testsuite-aborted"
        - "end-testsuite-timeout"

New TestKube api server fails to deploy with the following message:

Creating slack loader (error: error decoding slack config from base64: illegal base64 data at input byte 0)

Expected behavior
Setting slackConfig should override the value in the ConfigMap, but not be passed in as an environment variable value here.

Version / Cluster

  • Helm chart version 1.11.206

Screenshots

Additional context
This snippet indicates that if the env var is passed in, it is read as base64. The Helm chart, however, suggests that slackConfig should only be used to template the ConfigMap, so the environment variable should either be omitted completely or driven by a separate Helm value such as slackConfigString that takes a base64-encoded string.
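
A sketch of the proposed split, with the base64 value produced out-of-band (the slackConfigString key is the proposal from this issue, not an existing chart option; the base64 value is truncated for illustration):

testkube-api:
  # templates the ConfigMap only
  slackConfig:
    - ChannelID: ""
      events: ["start-test", "end-test-failed"]
  # optional; passed to the api server as a base64-encoded string
  slackConfigString: "W3siQ2hhbm5lbElEIjoi..."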

Use higher than default value for nats.nats.maxPayload due to error with Slack messages sending

Why should it be implemented?
It's related to the kubeshop/testkube#3640 issue -- Slack messages are not sent because maxPayload is exceeded. That message is Testkube's default message, which means the NATS default maxPayload value (1MB -- source) is too low. I tested with 8MB and Slack messages were sent successfully.

Describe the improvement
The default NATS maxPayload value is not enough for Slack messages related to test suite failures.
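
A sketch of the override in the parent chart's values, assuming the key path from the issue title (nats.nats.maxPayload):

nats:
  nats:
    maxPayload: 8388608  # 8MB; NATS defaults to 1MB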

Additional context
Add any other context about the problem here if needed.
