
helm-charts's People

Contributors

a-imal, afalhambra-hivemq, angeloskaltsikis, ankociemba, cosmolev, dc2-danielkrueger, donnerbart, florian-limpoeck, gitseti, hivemq-jenkins, hlohse, hurtadosanti, mario-schwede-hivemq, mchernyakov, mhofsche, patrickjahns, pawellabaj, remit, renovate[bot], sbaier1, schaebo, sgtsilvio, yannickweber, ymengesha


helm-charts's Issues

PodSecurityPolicy was deprecated in Kubernetes v1.21, and removed from Kubernetes in v1.25

When running helm upgrade --install hivemq hivemq/hivemq-operator on Kubernetes 1.25.4, the following error occurs:

Release "hivemq" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: resource mapping not found for name: "hivemq-hivemq-operator" namespace: "default" from "": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
ensure CRDs are installed first

See:
https://kubernetes.io/docs/concepts/security/pod-security-policy/

Specifically:

PodSecurityPolicy was deprecated in Kubernetes v1.21, and removed from Kubernetes in v1.25.
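One workaround, referenced later in this issue list, is to disable PodSecurityPolicy creation through the chart values; a minimal sketch, assuming the chart exposes the pspEnabled toggle shown in the Helm-dependency issue further below:

global:
  rbac:
    pspEnabled: false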

unable to create a cluster

I followed https://www.hivemq.com/docs/operator/4.6/kubernetes-operator/operator-intro.html

After issuing the following command (I needed to reduce the node count because my cluster has only one worker node):
helm upgrade --install hivemq hivemq/hivemq-operator --set hivemq.nodeCount=1

I see that the status is always Creating and only the operator pod is running:

$ kubectl get hivemq-clusters
NAME     SIZE   IMAGE            VERSION     STATUS     ENDPOINT   MESSAGE
hivemq   1      hivemq/hivemq4   k8s-4.6.0   Creating              Initial status

$ kubectl get po
NAME                                               READY   STATUS    RESTARTS   AGE
hivemq-hivemq-operator-operator-84c5b5bd47-4vpvt   1/1     Running   0          13m

How can I find out what is wrong?
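A hedged first step (resource names taken from the output above) is to inspect the operator logs and describe the cluster custom resource for status details:

kubectl logs hivemq-hivemq-operator-operator-84c5b5bd47-4vpvt
kubectl describe hivemq-clusters hivemq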

Community Edition operator support

Hello,

When running without a license, the broker logs:

No valid license file found. Using trial license, restricted to 25 connections.

Is it possible to use the operator to install the Community Edition (CE)?

Auto scale on k8s env

Could HiveMQ support autoscaling based on CPU and memory usage in a Kubernetes environment?
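For reference, CPU/memory-based scaling in Kubernetes is normally expressed with a HorizontalPodAutoscaler. A minimal sketch, assuming (hypothetically) that the operator exposed the broker pods through a scalable Deployment named hivemq:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hivemq
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hivemq           # hypothetical target; the operator manages this resource itself
  minReplicas: 3
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80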

HiveMQ deployment not working on local Kubernetes cluster

I'm trying to test a HiveMQ deployment using Helm on a local Kubernetes cluster, but it gets stuck at "Syncing state for cluster hivemq":

14:45:51.370 [main] INFO  io.micronaut.runtime.Micronaut - Startup completed in 32167ms. Server Running: https://hivemq-hivemq-operator-operator-8464f5b995-8s7mj:8443
14:46:01.374 [main] INFO  com.hivemq.Operator - Operating from namespace 'default'
14:46:01.377 [main] INFO  com.hivemq.Operator - Initializing HiveMQ operator
14:46:13.690 [pool-1-thread-1] INFO  com.hivemq.AbstractWatcher - CustomResource watcher running for kinds HiveMQCluster
14:46:13.761 [main] INFO  com.hivemq.Operator - Operator started in 12385ms
14:46:24.062 [pool-1-thread-2] INFO  com.hivemq.Operator - Syncing state for cluster hivemq

I tried to test it by using port-forwarding but it's also timing out:

$ kubectl port-forward svc/hivemq-hivemq-mqtt 1883:1883
error: timed out waiting for the condition

I've tried the following frameworks for local Kubernetes deployment:

  • K3s
  • Kind
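A hedged way to narrow this down (service name taken from the port-forward command above) is to check whether the MQTT service has any ready endpoints before port-forwarding:

kubectl get svc hivemq-hivemq-mqtt
kubectl get endpoints hivemq-hivemq-mqtt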

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

These problems occurred while renovating this repository. View logs.

  • WARN: Base branch does not exist - skipping

Ignored or Blocked

These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.

Detected dependencies

dockerfile
tests-hivemq-operator/src/integrationTest/resources/k3s.dockerfile
  • ubuntu noble-20240429@sha256:3f85b7caad41a95462cf5b787d8a04604c8262cdcdf9a472b8c52ef83375fe15
  • rancher/k3s v1.30.1-k3s1@sha256:09e019280cdc89d038644f1656ac7f2aed98807bd97c20e2dc1b5b9f534a0718
tests-hivemq-platform-operator/src/integrationTest/resources/helm.dockerfile
  • ubuntu noble-20240429@sha256:3f85b7caad41a95462cf5b787d8a04604c8262cdcdf9a472b8c52ef83375fe15
  • rancher/k3s v1.30.1-k3s1@sha256:09e019280cdc89d038644f1656ac7f2aed98807bd97c20e2dc1b5b9f534a0718
github-actions
.github/workflows/hivemq-operator-integration-test.yml
  • actions/checkout v4@a5ac7e51b41094c92402da3b24376905380afc29
  • docker/setup-qemu-action v3@68827325e0b33c7199eb31dd4e31fbe9023e06e3
  • docker/login-action v3@0d4c9c5ea7693da7b068278f7b52bda2a190a446
  • actions/checkout v4@a5ac7e51b41094c92402da3b24376905380afc29
  • actions/setup-java v4@99b8673ff64fbf99d8d325f52d9a5bdedb8483e9
  • gradle/actions v3@db19848a5fa7950289d3668fb053140cf3028d43
  • actions/upload-artifact v4@65462800fd760344b1a7b4382951275a0abb4808
.github/workflows/hivemq-platform-operator-integration-test.yml
  • actions/checkout v4@a5ac7e51b41094c92402da3b24376905380afc29
  • docker/setup-qemu-action v3@68827325e0b33c7199eb31dd4e31fbe9023e06e3
  • docker/login-action v3@0d4c9c5ea7693da7b068278f7b52bda2a190a446
  • actions/checkout v4@a5ac7e51b41094c92402da3b24376905380afc29
  • actions/setup-java v4@99b8673ff64fbf99d8d325f52d9a5bdedb8483e9
  • gradle/actions v3@db19848a5fa7950289d3668fb053140cf3028d43
  • actions/upload-artifact v4@65462800fd760344b1a7b4382951275a0abb4808
  • actions/checkout v4@a5ac7e51b41094c92402da3b24376905380afc29
  • docker/setup-qemu-action v3@68827325e0b33c7199eb31dd4e31fbe9023e06e3
  • docker/login-action v3@0d4c9c5ea7693da7b068278f7b52bda2a190a446
  • actions/checkout v4@a5ac7e51b41094c92402da3b24376905380afc29
  • actions/setup-java v4@99b8673ff64fbf99d8d325f52d9a5bdedb8483e9
  • gradle/actions v3@db19848a5fa7950289d3668fb053140cf3028d43
  • actions/upload-artifact v4@65462800fd760344b1a7b4382951275a0abb4808
.github/workflows/release.yml
  • actions/checkout v4@a5ac7e51b41094c92402da3b24376905380afc29
  • azure/setup-helm v4@fe7b79cd5ee1e45176fcad797de68ecaf3ca4814
  • helm/chart-releaser-action v1.6.0@a917fd15b20e8b64b94d9158ad54cd6345335584
.github/workflows/smoke-test.yml
  • docker/setup-qemu-action v3@68827325e0b33c7199eb31dd4e31fbe9023e06e3
  • docker/login-action v3@0d4c9c5ea7693da7b068278f7b52bda2a190a446
  • actions/checkout v4@a5ac7e51b41094c92402da3b24376905380afc29
  • actions/checkout v4@a5ac7e51b41094c92402da3b24376905380afc29
  • actions/setup-java v4@99b8673ff64fbf99d8d325f52d9a5bdedb8483e9
  • gradle/actions v3@db19848a5fa7950289d3668fb053140cf3028d43
  • actions/upload-artifact v4@65462800fd760344b1a7b4382951275a0abb4808
  • docker/setup-qemu-action v3@68827325e0b33c7199eb31dd4e31fbe9023e06e3
  • docker/login-action v3@0d4c9c5ea7693da7b068278f7b52bda2a190a446
  • actions/checkout v4@a5ac7e51b41094c92402da3b24376905380afc29
  • actions/checkout v4@a5ac7e51b41094c92402da3b24376905380afc29
  • actions/setup-java v4@99b8673ff64fbf99d8d325f52d9a5bdedb8483e9
  • gradle/actions v3@db19848a5fa7950289d3668fb053140cf3028d43
  • actions/upload-artifact v4@65462800fd760344b1a7b4382951275a0abb4808
.github/workflows/verify.yml
  • actions/checkout v4@a5ac7e51b41094c92402da3b24376905380afc29
  • azure/setup-helm v4@fe7b79cd5ee1e45176fcad797de68ecaf3ca4814
  • d3adb5/helm-unittest-action v2.4@66140cd099aa6c4f2ebc59735b8e421135a6d4e3
gradle
gradle.properties
settings.gradle.kts
build.gradle.kts
tests-hivemq-operator/gradle.properties
  • org.assertj:assertj-core 3.26.0
  • org.codehaus.groovy:groovy-all 3.0.21
  • org.junit.jupiter:junit-jupiter 5.10.2
  • org.junit.jupiter:junit-jupiter-api 5.10.2
  • org.testcontainers:testcontainers 1.19.8
  • org.testcontainers:k3s 1.19.8
  • org.testcontainers:junit-jupiter 1.19.8
  • org.slf4j:slf4j-api 2.0.13
  • org.slf4j:slf4j-simple 2.0.13
  • io.fabric8:kubernetes-client 6.13.0
  • org.bouncycastle:bcprov-jdk15on 1.70
  • org.bouncycastle:bcpkix-jdk15on 1.70
  • org.awaitility:awaitility 4.2.1
  • ch.qos.logback:logback-classic 1.5.6
  • ch.qos.logback:logback-core 1.5.6
  • org.testcontainers:hivemq 1.19.8
tests-hivemq-operator/settings.gradle.kts
tests-hivemq-operator/build.gradle.kts
tests-hivemq-platform-operator/gradle.properties
  • org.junit.jupiter:junit-jupiter 5.10.2
  • org.junit.jupiter:junit-jupiter-api 5.10.2
  • org.javassist:javassist 3.30.2-GA
  • org.jboss.shrinkwrap:shrinkwrap-api 1.2.6
  • org.jboss.shrinkwrap:shrinkwrap-impl-base 1.2.6
  • org.testcontainers:testcontainers 1.19.8
  • org.testcontainers:k3s 1.19.8
  • org.testcontainers:hivemq 1.19.8
  • org.testcontainers:junit-jupiter 1.19.8
  • org.testcontainers:selenium 1.19.8
  • org.bouncycastle:bcprov-jdk15on 1.70
  • org.bouncycastle:bcpkix-jdk15on 1.70
  • org.assertj:assertj-core 3.26.0
  • org.awaitility:awaitility 4.2.1
  • org.seleniumhq.selenium:selenium-remote-driver 4.20.0
  • org.seleniumhq.selenium:selenium-java 4.20.0
  • ch.qos.logback:logback-classic 1.5.6
  • ch.qos.logback:logback-core 1.5.6
  • org.slf4j:slf4j-api 2.0.13
tests-hivemq-platform-operator/settings.gradle.kts
tests-hivemq-platform-operator/build.gradle.kts


Quickstart doesn't start cluster / Stuck on PENDING

Hi,

I'm trying to start a test hivemq cluster on k8s using the quickstart guide (https://www.hivemq.com/docs/operator/4.8/kubernetes-operator/deploying.html#deploy-operator).

I've added the hivemq helm repo and ran the command helm upgrade --install hivemq hivemq/hivemq-operator.

During this deployment I see one error, but I'm not sure it is related:
W0926 14:47:02.100015 121077 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0926 14:47:02.879929 121077 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0926 14:47:03.103902 121077 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
I0926 14:47:34.943124 121077 trace.go:205] Trace[183458752]: "Reflector ListAndWatch" name:k8s.io/[email protected]/tools/cache/reflector.go:167 (26-Sep-2022 14:47:04.896) (total time: 30046ms):
Trace[183458752]: ---"Objects listed" error: 30046ms (14:47:34.943)
Trace[183458752]: [30.046624755s] [30.046624755s] END
W0926 14:47:35.028430 121077 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0926 14:47:35.304094 121077 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0926 14:48:05.788086 121077 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0926 14:48:05.953539 121077 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0926 14:48:10.756457 121077 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
NAME: hivemq
LAST DEPLOYED: Mon Sep 26 14:47:01 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1

The operator is started, and I can find no errors in the operator logs. But when I run kubectl get hivemq-clusters, the status stays on Pending:

NAME     SIZE   IMAGE            VERSION     STATUS    ENDPOINT   MESSAGE
hivemq   3      hivemq/hivemq4   k8s-4.8.4   Pending              Initial status

Any idea what this could be?
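A hedged first check for a cluster stuck in Pending is to look at recent events for scheduling or image-pull problems:

kubectl get events --sort-by=.metadata.creationTimestamp
kubectl get pods -o wide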

hivemqCluster.json not updated

The file hivemqCluster.json does not reflect the recent changes made to the CR, so it no longer matches the hivemq section in values.yaml.

Function error when deploying chart for hivemq platform

Following the tutorial on this link, when running
helm install <your-operator> hivemq/hivemq-platform-operator -n <namespace> --wait && helm install <your-platform> hivemq/hivemq-platform -n <namespace> --wait

It returns the following error:

Error: INSTALLATION FAILED: parse error at (hivemq-platform/templates/_helpers.tpl:127): function "break" not defined
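The break template action only works in Helm builds based on Go 1.18 or newer (roughly Helm v3.10+; the exact version boundary is my assumption, not from the chart docs), so checking the client version in Cloud Shell is a reasonable first step:

helm version --short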

STEPS TO REPRODUCE:

  • On Google Cloud Platform, create an Autopilot Private Cluster
  • Using Cloud Shell, run the following
    gcloud container clusters get-credentials <your-cluster-name> --zone <your-cluster-zone>
    kubectl create namespace hivemq
    helm repo add hivemq https://hivemq.github.io/helm-charts
    helm repo update
    helm install hivemq-operator hivemq/hivemq-platform-operator -n hivemq --wait
    helm install hivemq-platform hivemq/hivemq-platform -n hivemq --wait


EDIT: clarity on the code blocks

Wrong name when installing as a Helm dependency

I use the Helm chart as a dependency of another service I use, so I can ramp up everything at once.

In my Chart.yaml file I added this:

dependencies:
  - name: hivemq-operator
    version: 0.11.17
    repository: https://hivemq.github.io/helm-charts
    condition: hivemq-operator.enabled

Then in the values.yaml I added this:

hivemq-operator:
  enabled: true
  namespaceOverride: hivemq
  nameOverride: hivemq
  fullnameOverride: hivemq
  global:
    rbac:
      pspEnabled: false

When I run the command:

helm upgrade -i foo .

I expect to find, in the hivemq namespace, a deployment named hivemq for the cluster, but instead I find 3 pods named "foo".

I presume this happens because the cluster-deployment.yaml file uses "{{ spec.name }}" and does not allow a name override from the user.

Docs: Description for noConnectIdleTimeout says seconds instead of milliseconds

See:
https://www.hivemq.com/docs/operator/4.6/kubernetes-operator/configuration.html#basic

noConnectIdleTimeout 10000 The time in seconds that HiveMQ waits for the CONNECT message of a client before closing an open TCP socket

And:
https://www.hivemq.com/docs/hivemq/4.6/user-guide/restrictions.html#connection-timeouts

no-connect-idle-timeout 10000 The time in milliseconds that HiveMQ waits for a CONNECT message of a client before an open TCP socket is closed.

Missing support for user-defined annotations on various components

I am trying to deploy HiveMQ from the Helm charts using Argo CD. At the moment the charts heavily rely on Helm hooks that run at various moments of the release lifecycle.

Using the Helm chart with Argo CD means that the jobs running post-deployment to patch various fields will run based on the default helm-hook to Argo-hook mapping.

This mapping might not be adequate for all cases, which is why I propose allowing users to provide their own annotations on the Helm hooks, to fine-tune their behavior when deploying through Argo CD.
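As an illustration of the proposal, the values could expose an annotations map that gets rendered onto the hook resources. Both the values key and its plumbing are hypothetical; only the Argo CD annotation names themselves are standard:

operator:
  admissionWebhooks:
    jobAnnotations:                                        # hypothetical values key for this proposal
      argocd.argoproj.io/hook: Sync
      argocd.argoproj.io/hook-delete-policy: HookSucceeded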

Support deployment of hivemq-operator in host network mode

This issue asks for approval to open a PR that will allow running hivemq-operator in hostNetwork mode when configured explicitly.

Rationale:

The managed Kubernetes service EKS currently has the following limitation:

  • If the data plane uses a different CNI from the control plane (VPC CNI), then the control plane is unable to call admission webhooks (related issue).

The proposed solution is to run the webhook endpoints with hostNetwork: true. I have tested running hivemq-operator with hostNetwork: true, and it does indeed seem to resolve the issue.
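A sketch of what the proposed toggle could look like in the values (the key name is hypothetical; hostNetwork itself is a standard pod-spec field):

operator:
  hostNetwork: true   # hypothetical chart value; would be rendered into the operator pod spec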

Allow the user to set a custom name for the default hivemq cluster

When deploying the chart with helm upgrade --install hivemq hivemq/hivemq-operator, you end up with three pods like the below:

hivemq-hivemq-operator-operator-5f7449fff6-lxjjv
hivemq-b8c7c4c5f-pjfmw
hivemq-b8c7c4c5f-q9mnr

The name of the hivemq-operator pod is confusing to look at. It is fixed if you use a release name of hivemq-operator, but that results in pods like the below:

hivemq-operator-operator-5c79b9887c-xgvrx
hivemq-operator-8685f94f56-mdjqk
hivemq-operator-8685f94f56-tvw5n

Ideally, a combination would exist like the below desirable names:

hivemq-operator-5c79b9887c-xgvrx
hivemq-cluster-8685f94f56-mdjqk
hivemq-cluster-8685f94f56-tvw5n

I can see that the name of the HiveMQ cluster is currently hardcoded to the release name, but preferably a user could set this via a custom value with a sensible default instead (I am aware I could create my own definition of the cluster in a separate chart or manifest).
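A sketch of the proposed value (the key name is hypothetical; today the cluster name falls back to the release name):

hivemq:
  clusterName: hivemq-cluster   # hypothetical value; would default to the release name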

Error: unknown field restApiConfiguration in com.hivemq.v1.HiveMQCluster.spec

When installing the helm chart hivemq/hivemq-operator, we receive a validation error for the unknown field restApiConfiguration.

The operator chart version is 4.4.0.

Reproduction steps:

# Ensure your helm repo is up to date
helm repo update

# Check whether your repo is on version 4.4.0
helm search repo hivemq

# Install the hivemq operator
helm upgrade --install hivemq hivemq/hivemq-operator
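One plausible cause (an assumption, not confirmed in this issue): helm upgrade does not update CRDs shipped in a chart's crds/ directory, so an older HiveMQCluster CRD without the restApiConfiguration field may still be installed. A hedged check, assuming the CRD is named after the hivemq-clusters resource used elsewhere in these issues:

kubectl get crd hivemq-clusters.hivemq.com -o yaml | grep -c restApiConfiguration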

Decouple Operator and Cluster from each other in Helm Chart

Hi Guys,

Wouldn't it be sensible to decouple the installation of the operator and the cluster? Right now, both are in the same Helm chart. I know I can switch off the creation of a cluster (deployCr: false), but it seems I cannot do it the other way round.

I imagine a typical use case would be like:

  • Deploy the Operator once
  • Deploy/Update clusters multiple times

I do not see how I can do this conveniently with the Helm chart provided. I would expect one Helm chart for deploying and maintaining the operator (upgrades, ...), and a second Helm chart for deploying a CR, i.e. a cluster. The second chart would be installed several times under differing names (prod cluster, int cluster, ...), so that I can manage the lifecycle of the clusters independently of each other and of the operator.
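A sketch of that workflow with the pieces the chart offers today: install the operator with the bundled CR disabled, then manage each cluster as its own manifest. The deployCr toggle is cited above; the HiveMQCluster fields are assumptions based on the CRD references elsewhere in these issues:

# values.yaml for the operator-only install
deployCr: false             # exact values path may differ; the toggle itself is mentioned above

# prod-cluster.yaml, applied and versioned independently of the operator
apiVersion: hivemq.com/v1   # assumed from com.hivemq.v1.HiveMQCluster
kind: HiveMQCluster
metadata:
  name: prod-cluster
spec:
  nodeCount: 3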

Also, the documentation (https://www.hivemq.com/docs/operator/4.6/kubernetes-operator/configuration.html) is a bit fishy in my opinion: at first it deals with the values for the Helm chart, but at some point it switches to resource files of kind: HiveMQCluster.

Does this make sense to you or is there a misunderstanding on my side?

Thanks,
Marius
(Enterprise Customer via Daimler AG - VehicleDiagnosticsSystem)

Need to set the allowPrivilegeEscalation flag to 'false' on the security context of the container's spec in HiveMQ

There was a security remediation provided by Microsoft Defender for Cloud.

  1. From the Unhealthy resources tab, select the cluster. Defender for Cloud lists the pods running containers with privilege escalation to root in your Kubernetes cluster.
  2. For these pods, set the AllowPrivilegeEscalation flag to 'false' on the security context of the container's spec.
  3. After making your changes, redeploy the pod with the updated spec.

The remediation is shown for the HiveMQ pod as well. So we need to set the allowPrivilegeEscalation flag to 'false' in the values.yaml file.

We are using the Helm chart to deploy HiveMQ, and when I deploy with the value set to false, it is reflected in the pod's configuration. But the remediation is not removed from Microsoft Defender for Cloud.

Please let me know if any input is required.

Please assist me to solve this.
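For reference, the standard Kubernetes field Defender asks for looks like this on a container spec (a generic sketch; where exactly the chart surfaces it in values.yaml is not confirmed here):

containers:
  - name: hivemq
    securityContext:
      allowPrivilegeEscalation: false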

ImagePullSecrets missing for webhook and custom resource

containers:
  - name: patch
    {{- if .Values.operator.admissionWebhooks.patch.image.sha }}
    image: {{ .Values.operator.admissionWebhooks.patch.image.repository }}:{{ .Values.operator.admissionWebhooks.patch.image.tag }}@sha256:{{ .Values.operator.admissionWebhooks.patch.image.sha }}
    {{- else }}
    image: {{ .Values.operator.admissionWebhooks.patch.image.repository }}:{{ .Values.operator.admissionWebhooks.patch.image.tag }}
    {{- end }}
    imagePullPolicy: {{ .Values.operator.admissionWebhooks.patch.image.pullPolicy }}
    args:
      - patch
      - --webhook-name={{ template "hivemq.fullname" . }}-admission
      - --namespace={{ template "hivemq.namespace" . }}
      - --patch-mutating=false
      - --secret-name={{ template "hivemq.fullname" . }}-admission
      - --patch-failure-policy={{ .Values.operator.admissionWebhooks.failurePolicy }}
    resources:

The image for the webhook can be specified, but no imagePullSecrets can be given. Most customers who change the image/repository will also have to provide credentials for their company container registries.

For the HiveMQ K8s image, imagePullSecrets can be given, but there is no hint about that in values.yaml, only in hivemqCluster.json. It would be more user friendly if all possible settings were listed in values.yaml (just commented out).
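A sketch of what rendering such a value could look like in the patch job template (the values key is hypothetical; toYaml and nindent are standard Helm functions):

{{- with .Values.operator.admissionWebhooks.patch.imagePullSecrets }}   # hypothetical key
imagePullSecrets:
  {{- toYaml . | nindent 2 }}
{{- end }}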

@patrickjahns

Operator gets killed because it runs out of memory

When following these instructions, I noticed that the default settings for the operator do not provide enough memory to run it.

After running helm upgrade --install hivemq hivemq/hivemq-operator, the operator pod is created, but when monitoring with K9s you can see that it crashes pretty soon due to an OOM error.

The pod restarts pretty quickly; however, it does not seem to function properly after that.

I suggest increasing the default value for the operator memory limit.
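Until the default changes, a hedged workaround is to raise the operator's memory limit via the values (the exact key path is an assumption patterned after common charts; check the chart's values.yaml):

operator:
  resources:
    limits:
      memory: 1Gi
    requests:
      memory: 512Mi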

HiveMQ k8s-4.7.0 crashloops when upgrading from 4.6.4 to 4.7.0

Hey there,

I'm facing an issue when upgrading my HiveMQ cluster operator from v4.6.4 to v4.7.0. Please see the code block below for the full stack trace. The main exception is:

Picked up JAVA_TOOL_OPTIONS: -XX:+UnlockExperimentalVMOptions -XX:InitialRAMPercentage=30 -XX:MaxRAMPercentage=80 -XX:MinRAMPercentage=30
08:05:55.384 [main] INFO  com.hivemq.Application - Preparing SSL files
08:05:57.590 [main] INFO  com.hivemq.Application - Wrote converted key store to /tmp/store.p12
[Micronaut (v2.4.2) ASCII-art startup banner]
08:06:01.794 [main] ERROR io.micronaut.runtime.Micronaut - Error starting Micronaut server: Error instantiating bean of type [io.micronaut.http.server.netty.NettyHttpServer]: An error occurred configuring SSL
io.micronaut.context.exceptions.BeanInstantiationException: Error instantiating bean of type [io.micronaut.http.server.netty.NettyHttpServer]: An error occurred configuring SSL
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1972)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingletonInternal(DefaultBeanContext.java:2724)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingleton(DefaultBeanContext.java:2710)
at io.micronaut.context.DefaultBeanContext.getBeanForDefinition(DefaultBeanContext.java:2382)
at io.micronaut.context.DefaultBeanContext.getBeanInternal(DefaultBeanContext.java:2356)
at io.micronaut.context.DefaultBeanContext.findBean(DefaultBeanContext.java:1282)
at io.micronaut.context.DefaultBeanContext.findBean(DefaultBeanContext.java:752)
at io.micronaut.context.BeanLocator.findBean(BeanLocator.java:149)
at io.micronaut.runtime.Micronaut.start(Micronaut.java:73)
at io.micronaut.runtime.Micronaut.run(Micronaut.java:311)
at io.micronaut.runtime.Micronaut.run(Micronaut.java:297)
at com.hivemq.Application.main(Application.java:48)
Caused by: io.micronaut.http.ssl.SslConfigurationException: An error occurred configuring SSL
at io.micronaut.http.ssl.SslBuilder.getKeyManagerFactory(SslBuilder.java:109)
at io.micronaut.http.server.netty.ssl.CertificateProvidedSslBuilder.build(CertificateProvidedSslBuilder.java:85)
at io.micronaut.http.server.netty.ssl.CertificateProvidedSslBuilder.build(CertificateProvidedSslBuilder.java:79)
at io.micronaut.http.server.netty.ssl.CertificateProvidedSslBuilder.build(CertificateProvidedSslBuilder.java:72)
at io.micronaut.http.server.netty.NettyHttpServer.<init>(NettyHttpServer.java:211)
at io.micronaut.http.server.netty.$NettyHttpServerDefinition.build(Unknown Source)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1943)
... 11 common frames omitted
Caused by: java.io.IOException: keystore password was incorrect
at java.base/sun.security.pkcs12.PKCS12KeyStore.engineLoad(Unknown Source)
at java.base/sun.security.util.KeyStoreDelegator.engineLoad(Unknown Source)
at java.base/java.security.KeyStore.load(Unknown Source)
at io.micronaut.http.ssl.SslBuilder.load(SslBuilder.java:144)
at io.micronaut.http.ssl.SslBuilder.getKeyStore(SslBuilder.java:124)
at io.micronaut.http.server.netty.ssl.CertificateProvidedSslBuilder.getKeyStore(CertificateProvidedSslBuilder.java:135)
at io.micronaut.http.ssl.SslBuilder.getKeyManagerFactory(SslBuilder.java:98)
... 17 common frames omitted
Caused by: java.security.UnrecoverableKeyException: failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
... 24 common frames omitted

Are there any necessary configuration changes that need to take place before upgrading the operator? I was able to update the HiveMQ broker cluster without any issues from v4.6.4 to v4.7.0 by doing a calm rolling restart.

Any help would be highly appreciated.

Helm chart is outdated for k8s v1.25.x

When can we expect a patched Helm chart that supports k8s v1.25.x? According to the deprecation guide, PodSecurityPolicy is removed beginning with v1.25:

https://kubernetes.io/docs/reference/using-api/deprecation-guide/#psp-v125

and the current helm chart hivemq-operator-0.11.11 is outdated:

❯ helm upgrade --install hivemq hivemq/hivemq-operator --namespace helm-test
Release "hivemq" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: resource mapping not found for name: "hivemq-hivemq-operator" namespace: "helm-test" from "": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
ensure CRDs are installed first

Warning regarding usage of apiextensions.k8s.io/v1beta1 on K8S v1.19

A warning is generated when deploying under Kubernetes version 1.19:

W0129 19:17:03.764064 13720 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition

The helm chart's template probably needs to be updated to generate one or the other depending on the Kubernetes version used for the deployment.
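A common Helm pattern for this (a sketch, not the chart's actual template) is to branch on API availability:

{{- if .Capabilities.APIVersions.Has "apiextensions.k8s.io/v1/CustomResourceDefinition" }}
apiVersion: apiextensions.k8s.io/v1
{{- else }}
apiVersion: apiextensions.k8s.io/v1beta1
{{- end }}
kind: CustomResourceDefinition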

Cannot read environment variable from ConfigMap or Secret

The env section in the hivemqCluster CRD only allows for key and value:

"env": {
"description": "Additional environment variables for the cluster",
"type": "array",
"items": {
"type": "object",
"javaType": "com.hivemq.openapi.spec.Env",
"properties": {
"name": {
"type": "string"
},
"value": {
"type": "string"
}
},
"required": [
"name",
"value"
]
}
},

When I want to set the Control Center user and password from a Secret, this is obviously not possible. It is only possible to set them directly in the CR, which lacks flexibility and security (user/password hash ends up in Git).
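For reference, the standard Kubernetes construct the CRD would need to admit looks like this (a generic pod-spec sketch; the variable and Secret names are hypothetical):

env:
  - name: HIVEMQ_CONTROL_CENTER_USER   # hypothetical variable name
    valueFrom:
      secretKeyRef:
        name: cc-credentials           # hypothetical Secret
        key: username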

Facing issues with the secure WebSocket config

I am trying to enable a secure WebSocket listener on HiveMQ using the config below, mounting the appropriate keystore and truststore.
This is my values file:

hivemq:
  nodeCount: 3
  cpu: 200m
  memory: 3072M
  listenerConfiguration: |
    <tls-websocket-listener>
        <port>8001</port>
        <bind-address>0.0.0.0</bind-address>
        <path>/mqtt</path>
        <name>my-websocket-listener</name>
        <subprotocols>
            <subprotocol>mqttv3.1</subprotocol>
            <subprotocol>mqtt</subprotocol>
        </subprotocols>
        <allow-extensions>false</allow-extensions>
        <tls>
            <keystore>
                <path>/etc/hivemqstore/hivemq.jks</path>
                <password>123456</password>
                <private-key-password>123456</private-key-password>
            </keystore>
            <truststore>
                <path>/etc/hivemqstore/hivemqtrust.jks</path>
                <password>123456</password>
            </truststore>
            <client-authentication-mode>NONE</client-authentication-mode>
        </tls>
    </tls-websocket-listener>
  ports:
    - name: "mqtt"
      port: 8001
      expose: true
      patch:
        - '[{"op":"add","path":"/spec/selector/hivemq.com~1node-offline","value":"false"},{"op":"add","path":"/metadata/annotations","value":{"service.spec.externalTrafficPolicy":"Local"}}]'
        # If you want Kubernetes to expose the MQTT port
        - '[{"op":"add","path":"/spec/type","value":"LoadBalancer"}]'
    - name: "cc"
      port: 8080
      expose: true
      patch:
        - '[{"op":"add","path":"/spec/type","value":"LoadBalancer"}]'
        - '[{"op":"add","path":"/spec/sessionAffinity","value":"ClientIP"}]'
  configMaps:
    - name: "hivemqstore"
      path: "/etc/hivemqstore"
  #secrets: [ {"name": "hivemqstore", "path": "/opt/hivemq/hivemq.jks"}, ]

hivemq-operator-0.11.6 is not available

Description

The hivemq-operator-0.11.6 release cannot be installed.

Cause

The hivemq-operator-0.11.6 release job failed to complete with the following error:

Successfully packaged chart and saved it to: .cr-release-packages/hivemq-operator-0.11.6.tgz
Packaging chart 'charts/hivemq-swarm'...
Successfully packaged chart and saved it to: .cr-release-packages/hivemq-swarm-0.2.10.tgz
Releasing charts...
Error: error creating GitHub release: POST https://api.github.com/repos/hivemq/helm-charts/releases: 422 Validation Failed [{Resource:Release Field:tag_name Code:already_exists Message:}]
Usage:
  cr upload [flags]

Flags:
  -b, --git-base-url string     GitHub Base URL (only needed for private GitHub) (default "https://api.github.com/")
  -r, --git-repo string         GitHub repository
  -u, --git-upload-url string   GitHub Upload URL (only needed for private GitHub) (default "https://uploads.github.com/")
  -h, --help                    help for upload
  -o, --owner string            GitHub username or organization
  -p, --package-path string     Path to directory with chart packages (default ".cr-release-packages")
  -t, --token string            GitHub Auth Token

Global Flags:
      --config string   Config file (default is $HOME/.cr.yaml)

Reproduction Steps

Run helm upgrade --install --dry-run --debug --version 0.11.6 hivemq-test hivemq/hivemq-operator

$ helm upgrade --install --dry-run --debug --version 0.11.6 hivemq-test hivemq/hivemq-operator

history.go:56: [debug] getting history for release hivemq-test
Release "hivemq-test" does not exist. Installing it now.
install.go:178: [debug] Original chart version: "0.11.6"
Error: chart "hivemq-operator" matching 0.11.6 not found in hivemq index. (try 'helm repo update'): no chart version found for hivemq-operator-0.11.6
helm.go:84: [debug] no chart version found for hivemq-operator-0.11.6
helm.sh/helm/v3/pkg/repo.IndexFile.Get
	helm.sh/helm/v3/pkg/repo/index.go:218
helm.sh/helm/v3/pkg/downloader.(*ChartDownloader).ResolveChartVersion
	helm.sh/helm/v3/pkg/downloader/chart_downloader.go:287
helm.sh/helm/v3/pkg/downloader.(*ChartDownloader).DownloadTo
	helm.sh/helm/v3/pkg/downloader/chart_downloader.go:90
helm.sh/helm/v3/pkg/action.(*ChartPathOptions).LocateChart
	helm.sh/helm/v3/pkg/action/install.go:753
main.runInstall
	helm.sh/helm/v3/cmd/helm/install.go:190
main.newUpgradeCmd.func2
	helm.sh/helm/v3/cmd/helm/upgrade.go:121
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/[email protected]/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/[email protected]/command.go:974
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/[email protected]/command.go:902
main.main
	helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
	runtime/proc.go:250
runtime.goexit
	runtime/asm_amd64.s:1594
chart "hivemq-operator" matching 0.11.6 not found in hivemq index. (try 'helm repo update')
helm.sh/helm/v3/pkg/downloader.(*ChartDownloader).ResolveChartVersion
	helm.sh/helm/v3/pkg/downloader/chart_downloader.go:289
helm.sh/helm/v3/pkg/downloader.(*ChartDownloader).DownloadTo
	helm.sh/helm/v3/pkg/downloader/chart_downloader.go:90
helm.sh/helm/v3/pkg/action.(*ChartPathOptions).LocateChart
	helm.sh/helm/v3/pkg/action/install.go:753
main.runInstall
	helm.sh/helm/v3/cmd/helm/install.go:190
main.newUpgradeCmd.func2
	helm.sh/helm/v3/cmd/helm/upgrade.go:121
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/[email protected]/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/[email protected]/command.go:974
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/[email protected]/command.go:902
main.main
	helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
	runtime/proc.go:250
runtime.goexit
	runtime/asm_amd64.s:1594

Failed to create resource: Internal error occurred: failed calling webhook "hivemq-cluster-policy.hivemq.com"

Expected behavior

The Helm chart to be installed correctly.

Actual behavior

This error occurs without any special configuration or changes to the Helm chart:

failed to create resource: Internal error occurred: failed calling webhook "hivemq-cluster-policy.hivemq.com": Post "https://hivemq-operator-0-1649064099-operator.hivemq.svc:443/api/v1/validate/hivemq-clusters?timeout=30s": service "hivemq-operator-0-1649064099-operator" not found

Why is the webhook looking for its service under a different name than the one that was actually created?

To Reproduce

Steps

https://www.hivemq.com/docs/operator/4.7/kubernetes-operator/deploying.html#helm-chart
