bitnami / charts
Bitnami Helm Charts
Home Page: https://bitnami.com
License: Other
The Zookeeper statefulset does not work with custom namespaces; service discovery fails.
In statefulset.yaml, the following needs to be rectified.
Current (namespace hardcoded):
name: Zookeeper
value: zk-zookeeper-0.zk-zookeeper-headless.default.svc.cluster.local:2888:3888
Expected (for a release in my_namespace):
name: Zookeeper
value: zk-zookeeper-0.zk-zookeeper-headless.my_namespace.svc.cluster.local:2888:3888
This might also be breaking Kafka when installing with a custom namespace.
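A sketch of the fix in statefulset.yaml: derive the namespace from the release rather than hardcoding "default" (the host-name prefix is kept literal here for clarity; the real chart would template it too).

```yaml
# Sketch only: substitute the release namespace for the hardcoded "default".
- name: Zookeeper
  value: "zk-zookeeper-0.zk-zookeeper-headless.{{ .Release.Namespace }}.svc.cluster.local:2888:3888"
```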
Suppressed: java.lang.IllegalArgumentException: unknown setting [cloud.kubernetes.service] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
Missing discovery plugin https://github.com/fabric8io/elasticsearch-cloud-kubernetes in values.yaml:
plugins:
- io.fabric8:elasticsearch-cloud-kubernetes:6.2.3.2
Description: When clicking on this link: http:///docs/html-host-manager-howto.html?org.apache.catalina.filters.CSRF_NONCE=F3DAEEADB91D1517391ECF6F8ECD0782 the resource is not available (404).
Steps to Reproduce:
Minor feedback regarding the elasticsearch chart: the default JVM parameters include -Xms1024m -Xmx1024m, yet only 512Mi of memory are requested in the manifests, so the heap can outgrow the request.
image bitnami/node:7.5.0 not found
$ helm install mariadb-cluster/
kindly-bobcat
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gangly-gorilla-wildfly 10.227.243.59 146.148.61.7 80/TCP,9990/TCP 6m
hazy-anaconda-apache 10.227.244.60 104.197.103.69 80/TCP,443/TCP 16m
kindly-bobcat-master 10.227.247.187 3306/TCP 20s
kindly-bobcat-slave 10.227.255.50 3306/TCP 20s
kubernetes 10.227.240.1 443/TCP 1h
Names should reflect the application launched:
e.g. kindly-bobcat-mariadb-master, kindly-bobcat-mariadb-slave
We should add documentation around how people can contribute charts, for example how to use run-generators.sh.
The MySQL and Elasticsearch charts have conflicting templates. They both define a "master.fullname" template, but with a different number of arguments to printf. This is an issue if you're using the charts as subcharts, since elasticsearch's template expects 3 arguments to printf while mysql's only expects 2. The elasticsearch one seemed to take precedence for the chart I was building, yielding invalid names for the MySQL yaml files.
The templates should have namespaces (such as the PostgreSQL templates).
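For example, prefixing each shared helper with its chart name makes collisions between subcharts impossible; a minimal sketch (the helper body is illustrative, not the chart's actual definition):

```yaml
{{/* templates/_helpers.tpl (sketch): chart-prefixed helper names
     cannot collide when the chart is used as a subchart. */}}
{{- define "mysql.master.fullname" -}}
{{- printf "%s-%s" .Release.Name "mysql-master" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```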
When trying to scale the statefulset to 2 from 1 the container logs reports:
==> The ID of the host is 1
==> Creating data dir...
==> Adding member to existing cluster.
==> Adding new member
client: etcd cluster is unavailable or misconfigured; error #0: dial tcp: lookup lumpy-tarsier-etcd-0 on 10.96.0.10:53: server misbehaving
10.96.0.10:53 is my Kube DNS server
When I do an nslookup with lumpy-tarsier-etcd-0 it doesn't resolve but when I do an nslookup with lumpy-tarsier-etcd-0.lumpy-tarsier-etcd-headless.default.svc.cluster.local it works fine.
In https://hub.kubeapps.com/charts/bitnami/kafka, Source Repository points to https://github.com/bitnami/charts/kafka, which yields a 404.
The logic we have that syncs upstream Bitnami charts (developed out of https://github.com/helm/charts) to the upstream folder in this repo appears to only sync new changes and not delete files that are no longer used.
I found that the Redis chart is affected by this; a helm install bitnami/redis will fail with the following error:
render error in "redis/templates/svc.yaml": template: redis/templates/svc.yaml:11:14: executing "redis/templates/svc.yaml" at <.Values.service.anno...>: can't evaluate field annotations in type interface {}
When looking for this template in the upstream repo, I found that templates/svc.yaml doesn't exist (removed in https://github.com/helm/charts/pull/4662/files), however the Redis in this repo still has the svc.yaml and deployment.yaml that got removed upstream (https://github.com/bitnami/charts/tree/master/upstreamed/redis/templates). This results in a currently broken chart.
We need to fix the sync logic to ensure we delete any files removed upstream (e.g. rsync with --delete
option), and we need to manually resync every chart.
Running the current version of Drupal I get:
"There is a security update available for your version of Drupal. To ensure the security of your server, you should update immediately! See the available updates page for more information and to install your missing updates."
I tried many, many combinations of annotations, but consul-ui always redirects to /ui/, so the only way I have found to get the ingress to work is to match path / (it won't work, for example, if you make it /consul/).
This is due to a long-standing issue with Consul; it seems it was fixed but broke again with the new UI - hashicorp/consul#1930
It might also be possible to work around this by adding a custom config snippet (https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#configuration-snippet), but I don't know enough about nginx to suggest a solution.
I am super excited to see this repository!
Have you guys looked at the Sprig functions that are exposed to the template engine? I was looking at the RedMine chart, and thinking it might make sense to do something like {{ default 22 .smtpPort }} or things like that.
I think Helm might be a version or so back from the Sprig head. But most of the functions are the same between versions.
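To illustrate, a hedged sketch of how that would read in a template (the .Values.smtpPort path is assumed for the example, mirroring the suggestion above):

```yaml
# Fall back to 22 when the user does not set smtpPort (sketch;
# the value path and default are illustrative).
smtpPort: {{ default 22 .Values.smtpPort }}
```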
Thanks!
There is a chart that requires bitnami/postgresql as a dependency. I would like to pass in initialization scripts and override postgresql.conf. The directory structure is:
mychart
|- charts
|-- postgresql-2.1.0.tgz
The only way I found to do this was to clone the chart into the charts folder and insert the files directly:
mychart
|- charts
|-- postgresql
|--- files
|---- postgresql.conf
|---- docker-entrypoint-initdb.d
|----- setup.sql
This, however, breaks the helm workflow for chart management.
A potential solution is to override the init configmaps with my own, which would include files/templates taken from the top level. Thoughts?
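If the subchart exposed the configmap name as a value, the override could stay at the top level; a hedged sketch (the initConfigMap key is hypothetical, not an existing postgresql chart option):

```yaml
# mychart/values.yaml (sketch). Values nested under the dependency's
# name are passed through to the subchart; "initConfigMap" is a
# hypothetical key the subchart would need to support.
postgresql:
  initConfigMap: my-postgresql-init
```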
It seems to me the index.yaml is incorrect, with a wrong urls field: it lacks the actual URL and only references the tarball filename.
wget https://charts.bitnami.com/incubator/index.yaml
more index.yaml
apiVersion: v1
entries:
apache:
- created: 2017-11-30T18:40:04.582855774Z
description: Chart for Apache HTTP Server
digest: 190a7bfc84d8b5bcee1ae76f6fb12226fda1d7bcefb171989a909beb67271c42
engine: gotpl
home: https://httpd.apache.org
keywords:
- apache
- http
- https
- www
- web
- reverse proxy
maintainers:
- email: [email protected]
name: Bitnami
name: apache
sources:
- https://github.com/bitnami/bitnami-docker-apache
urls:
- apache-0.3.7.tgz
version: 0.3.7
You end up being able to search the index, but you cannot install the charts.
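For comparison, a working entry would carry a resolvable URL next to where the index is served (https://charts.bitnami.com/incubator/ per the wget above), e.g.:

```yaml
urls:
- https://charts.bitnami.com/incubator/apache-0.3.7.tgz
```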
cc/ @sameersbn @prydonius
ConfigMap volumes have been read-only since Kubernetes 1.9.4.
kubernetes/kubernetes#58720
The pod then outputs the following error and exits.
Welcome to the Bitnami elasticsearch container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-elasticsearch
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-elasticsearch/issues
Send us your feedback at [email protected]
nami INFO Initializing elasticsearch
Error executing 'postInstallation': EROFS: read-only file system, chown '/bitnami/elasticsearch/conf/elasticsearch_custom.yml'
Is there any solution to this?
This repo needs a license file.
Right now, the node chart uses init-containers just to clone the repo, and then does the rest of the initialization during the final container boot.
The best way to do this is to move all initialization steps into init-containers, letting the final container boot by executing only one command: npm start.
Can someone confirm if this is still necessary?
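A rough sketch of that shape (image names, paths, and the $APP_REPO variable are illustrative assumptions):

```yaml
# Sketch: all setup happens in initContainers; the app container
# only runs `npm start`.
spec:
  initContainers:
  - name: clone-and-install
    image: bitnami/node
    command: ["sh", "-c", "git clone $APP_REPO /app && cd /app && npm install"]
    volumeMounts:
    - {name: app, mountPath: /app}
  containers:
  - name: node
    image: bitnami/node
    workingDir: /app
    command: ["npm", "start"]
    volumeMounts:
    - {name: app, mountPath: /app}
  volumes:
  - {name: app, emptyDir: {}}
```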
Deploying the chart as a subchart means the svc name gets completely mangled: it picks up the release name, tags on postgres, then truncates to 24 characters.
e.g. hhs-feature-devops-64-en when it was supposed to be hhs-feature-devops-64-env-postgres.
The only workaround I have (which is a horrible one) is that all my charts have to implement the same truncation.
original issue: kubernetes/kubernetes#25041
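One way out, sketched below, is for the chart's fullname helper to truncate at the 63-character DNS label limit rather than 24, trimming the trailing hyphen the cut can leave (helper and suffix names are illustrative):

```yaml
{{/* _helpers.tpl (sketch): 63 is the DNS-1123 label limit,
     so the suffix survives typical release names. */}}
{{- define "postgresql.fullname" -}}
{{- printf "%s-%s" .Release.Name "postgres" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```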
mongoDB chart will continue to crash and restart on a windows 10 host with mongodb.persistence.enabled=true
(persistence disabled works without issue)
running on docker-for-windows Version 18.06.1-ce-win73 (19507)
with kubernetes enabled
kubectl describe pod dev-mongodb-6584dd75f5-vc9rp
Name: dev-mongodb-6584dd75f5-vc9rp
Namespace: default
Node: docker-for-desktop/192.168.65.3
Start Time: Wed, 26 Sep 2018 21:20:55 -0700
Labels: app=mongodb
pod-template-hash=2140883191
release=dev
Annotations: <none>
Status: Running
IP: 10.1.0.62
Controlled By: ReplicaSet/dev-mongodb-6584dd75f5
Containers:
dev-mongodb:
Container ID: docker://4e8bdefbc5d8727d82bba976e5552ae4dd5b92e1bcee6c13c8f985aa12b5f1ab
Image: docker.io/bitnami/mongodb:3.6
Image ID: docker-pullable://bitnami/mongodb@sha256:a3b85168bcc94a329b96729683edd3ec731a8aac902c664ee6a3aeba0b5f5293
Port: 27017/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 100
Started: Wed, 26 Sep 2018 21:21:50 -0700
Finished: Wed, 26 Sep 2018 21:21:59 -0700
Ready: False
Restart Count: 1
Liveness: exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=6
Readiness: exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=5s period=10s #success=1 #failure=6
Environment:
MONGODB_ROOT_PASSWORD: <set to the key 'mongodb-root-password' in secret 'dev-mongodb'> Optional: false
MONGODB_USERNAME:
MONGODB_DATABASE:
MONGODB_ENABLE_IPV6: yes
MONGODB_EXTRA_FLAGS: --smallfiles --logpath=/dev/null
Mounts:
/bitnami/mongodb from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-phmll (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: dev-mongodb
ReadOnly: false
default-token-phmll:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-phmll
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1m default-scheduler Successfully assigned dev-mongodb-6584dd75f5-vc9rp to docker-for-desktop
Normal SuccessfulMountVolume 1m kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "pvc-b8423bfb-c20c-11e8-bb65-00155d016f01"
Normal SuccessfulMountVolume 1m kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "default-token-phmll"
Warning Unhealthy 44s kubelet, docker-for-desktop Readiness probe failed: MongoDB shell version v3.6.8
connecting to: mongodb://127.0.0.1:27017
2018-09-27T04:21:23.949+0000 I NETWORK [thread1] Socket recv() Connection reset by peer 127.0.0.1:27017
2018-09-27T04:21:24.042+0000 I NETWORK [thread1] SocketException: remote: (NONE):0 error: SocketException socket exception [RECV_ERROR] server [127.0.0.1:27017]
2018-09-27T04:21:24.058+0000 E QUERY [thread1] Error: network error while attempting to run command 'isMaster' on host '127.0.0.1:27017' :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed
Warning Unhealthy 34s kubelet, docker-for-desktop Readiness probe failed: MongoDB shell version v3.6.8
connecting to: mongodb://127.0.0.1:27017
2018-09-27T04:21:33.631+0000 W NETWORK [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused
2018-09-27T04:21:34.401+0000 E QUERY [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed
Warning Unhealthy 28s kubelet, docker-for-desktop Liveness probe failed: MongoDB shell version v3.6.8
connecting to: mongodb://127.0.0.1:27017
2018-09-27T04:21:40.145+0000 W NETWORK [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused
2018-09-27T04:21:40.145+0000 E QUERY [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed
Welcome to the Bitnami mongodb container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
nami INFO Initializing mongodb
mongodb INFO ==> Deploying MongoDB with persisted data...
mongodb INFO ==> No injected configuration files found. Creating default config files...
mongodb INFO
mongodb INFO ########################################################################
mongodb INFO Installation parameters for mongodb:
mongodb INFO Persisted data and properties have been restored.
mongodb INFO Any input specified will not take effect.
mongodb INFO This installation requires no credentials.
mongodb INFO ########################################################################
mongodb INFO
nami INFO mongodb successfully initialized
INFO  ==> Starting mongodb...
INFO  ==> Starting mongod...
2018-09-27T04:34:00.223+0000 I CONTROL [initandlisten] MongoDB starting : pid=34 port=27017 dbpath=/opt/bitnami/mongodb/data/db 64-bit host=dev-mongodb-7b9d7bbdd9-bglkr
2018-09-27T04:34:00.224+0000 I CONTROL [initandlisten] db version v3.6.8
2018-09-27T04:34:00.224+0000 I CONTROL [initandlisten] git version: 6bc9ed599c3fa164703346a22bad17e33fa913e4
2018-09-27T04:34:00.224+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.0f 25 May 2017
2018-09-27T04:34:00.224+0000 I CONTROL [initandlisten] allocator: tcmalloc
2018-09-27T04:34:00.224+0000 I CONTROL [initandlisten] modules: none
2018-09-27T04:34:00.224+0000 I CONTROL [initandlisten] build environment:
2018-09-27T04:34:00.224+0000 I CONTROL [initandlisten] distmod: debian92
2018-09-27T04:34:00.224+0000 I CONTROL [initandlisten] distarch: x86_64
2018-09-27T04:34:00.224+0000 I CONTROL [initandlisten] target_arch: x86_64
2018-09-27T04:34:00.224+0000 I CONTROL [initandlisten] options: { config: "/opt/bitnami/mongodb/conf/mongodb.conf", net: { bindIpAll: true, ipv6: true, port: 27017, unixDomainSocket: { enabled: true, pathPrefix: "/opt/bitnami/mongodb/tmp" } }, processManagement: { fork: false, pidFilePath: "/opt/bitnami/mongodb/tmp/mongodb.pid" }, security: { authorization: "disabled" }, setParameter: { enableLocalhostAuthBypass: "true" }, storage: { dbPath: "/opt/bitnami/mongodb/data/db", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, logRotate: "reopen", path: true } }
2018-09-27T04:34:00.234+0000 I - [initandlisten] Detected data files in /opt/bitnami/mongodb/data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2018-09-27T04:34:00.237+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=478M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),cache_cursors=false,compatibility=(release="3.0",require_max="3.0"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2018-09-27T04:34:00.732+0000 E STORAGE [initandlisten] WiredTiger error (1) [1538022840:732507][34:0x7fdef0bb8580], file:WiredTiger.wt, connection: /opt/bitnami/mongodb/data/db/WiredTiger.wt: handle-open: open: Operation not permitted
2018-09-27T04:34:00.733+0000 E - [initandlisten] Assertion: 28595:1: Operation not permitted src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 421
2018-09-27T04:34:00.856+0000 I STORAGE [initandlisten] exception in initAndListen: Location28595: 1: Operation not permitted, terminating
2018-09-27T04:34:00.857+0000 I NETWORK [initandlisten] shutdown: going to close listening sockets...
2018-09-27T04:34:00.857+0000 I NETWORK [initandlisten] removing socket file: /opt/bitnami/mongodb/tmp/mongodb-27017.sock
2018-09-27T04:34:00.857+0000 I CONTROL [initandlisten] now exiting
2018-09-27T04:34:00.857+0000 I CONTROL [initandlisten] shutting down with code:100
I launched a Ghost chart in my GKE cluster, and it assigned an external IP address. When I logged in and created a new blog post, and clicked on the 'View Post' button, it sent me to http://10.8.1.4:2368/my-awesome-blog-post/
, which looks like an internal IP.
$ helm install wildfly
Error: parse error in "wildfly/templates/wildfly-secrets.yaml": template: wildfly/templates/wildfly-secrets.yaml:11: unexpected "{" in command
Fix below.
$ git diff
diff --git a/wildfly/templates/wildfly-secrets.yaml b/wildfly/templates/wildfly-secrets.yaml
index 2223e8b..a369f9a 100644
--- a/wildfly/templates/wildfly-secrets.yaml
+++ b/wildfly/templates/wildfly-secrets.yaml
@@ -8,4 +8,4 @@ metadata:
heritage: bitnami
type: Opaque
data:
I'm trying to use Kafka with SSL enabled and generated the truststore and keystore files using my Kubernetes domain names (kafka-{0,1,2,3,4}.kafka-headless.namespace.svc.cluster.local) as subject alternative names. However, the inter-broker communication fails with the following exception.
org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1529)
at sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:535)
at sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1214)
at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1186)
at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
at org.apache.kafka.common.network.SslTransportLayer.handshakeWrap(SslTransportLayer.java:439)
at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:304)
at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:258)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:125)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:487)
at org.apache.kafka.common.network.Selector.poll(Selector.java:425)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1728)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:330)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:322)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1614)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1052)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:992)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:989)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1467)
at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:393)
at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:473)
at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:331)
... 8 more
Caused by: java.security.cert.CertificateException: No name matching kafka-kafka-2.kafka-kafka-headless.kafka.svc.cluster.local found
at sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:231)
at sun.security.util.HostnameChecker.match(HostnameChecker.java:96)
at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455)
at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1601)
... 17 more
[2018-09-04 16:46:34,085] ERROR [Producer clientId=console-producer] Connection to node -1 failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2018-09-04 16:46:34,192] ERROR [Producer clientId=console-producer] Connection to node -1 failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
It is hardcoded to 15672, but it should honour .Values.rabbitmqNodeport
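A sketch of the corrected service template line (the surrounding fields are illustrative; only the templated port is the point):

```yaml
ports:
- name: management
  # honour the configurable value instead of the hardcoded 15672
  port: {{ .Values.rabbitmqNodeport }}
```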
Just noticed that even though the end of the README says an existing volume claim can be used, I do not see a master.persistence.existingClaim config there in the same way as in the MariaDB chart. Is this an issue with the chart description, or has this functionality not been implemented yet?
The line service.port | PostgreSQL port | 5432 in the Zookeeper docs seems wrong and should be fixed.
"Kafka is an object-relational database management system (ORDBMS) with an emphasis on extensibility and on standards-compliance."
I think this headline is more appropriate for Postgres?
As a user I want to deploy a Helm chart that will … and link all of the deployed components together.
Trying to run the Redmine chart which comes with persistence enabled by default I get:
kubectl get pods
NAME READY STATUS RESTARTS AGE
giddy-skunk-mariadb-4195510591-cwdid 0/1 Pending 0 31s
giddy-skunk-redmine-3930201193-68jv7 0/1 Pending 0 31s
migmartri ~/work/bitnami/charts/redmine master $ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
giddy-skunk-mariadb Pending 3m
giddy-skunk-redmine Pending 3m
Provisioner plugin not found.
migmartri ~/work/bitnami/charts/redmine master $ kubectl describe pvc giddy-skunk-mariadb
Name: giddy-skunk-mariadb
Namespace: default
Status: Pending
Volume:
Labels: <none>
Capacity:
Access Modes:
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
3m 11s 16 {persistentvolume-controller } Warning ProvisioningFailed No provisioner plugin found for the claim!
And this is the error message shown in the Pod
kubectl describe pods giddy-skunk-redmine-3930201193-68jv7
Name: giddy-skunk-redmine-3930201193-68jv7
Namespace: default
Node: /
Labels: app=giddy-skunk-redmine
chart=redmine-redmine
heritage=Tiller
pod-template-hash=3930201193
release=giddy-skunk
Status: Pending
IP:
Controllers: ReplicaSet/giddy-skunk-redmine-3930201193
Containers:
giddy-skunk-redmine:
Image: bitnami/redmine:3.3.0-r2
...
...
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
6m 55s 22 {default-scheduler } Warning FailedScheduling PersistentVolumeClaim is not bound: "giddy-skunk-redmine"
Using Minikube 0.9.0 in a brand new VM and cluster.
minikube version
minikube version: v0.9.0
Here's the proof: https://i.imgur.com/7uPGKdh.png
The pod logs give me awesome information. These installation parameters however are different from the ones that I set. Should they match my input in values.yaml?
wordpre INFO ########################################################################
wordpre INFO Installation parameters for wordpress:
wordpre INFO First Name: FirstName
wordpre INFO Last Name: LastName
wordpre INFO Username: user
wordpre INFO Password: **********
wordpre INFO Email: [email protected]
wordpre INFO Blog Name: User's Blog!
wordpre INFO (Passwords are not shown for security reasons)
wordpre INFO ########################################################################
It would be great if the incubator charts were available as a maintained public repository that users could add as a dependency. I think it would help increase adoption and move them to stable status. Here are the steps to do it.
Add gh-pages branch
git clone --depth=1 [email protected]:bitnami/charts.git
cd charts
git branch gh-pages && git checkout gh-pages
rm -fr * && rm .gitignore
git commit -am 'initial gh-pages commit' && git push --set-upstream origin gh-pages
From within GitHub's settings, find the section on "GitHub Pages". From the source drop-down, select gh-pages and save.
For each chart:
git clone --depth=1 https://github.com/bitnami/charts.git --branch gh-pages tmp_chart
helm package -d ./tmp_chart/ .
helm repo index tmp_chart --url https://bitnami.github.io/charts
git -C tmp_chart add *.tgz index.yaml && git -C tmp_chart commit -am 'new package xyz' && git -C tmp_chart push
rm -fr tmp_chart
First add the repo:
helm repo add bitnami-incubator https://bitnami.github.io/charts
Create requirements.yaml
file with the following content:
#requirements.yaml
dependencies:
- name: tomcat
version: 0.4.11
repository: https://bitnami.github.io/charts
alias: bitnami-tomcat
At the root level of values.yaml, add the name of the alias from the dependency, bitnami-tomcat, as a node.
#values.yaml
[...]
bitnami-tomcat:
tomcatUsername: override-user
[...]
Hi,
I tried to install bitnami/postgresql
on Azure using the helm charts:
helm install --name bitnami-postgres \
--set persistence.storageClass=azurefile,persistence.accessMode=ReadWriteMany \
bitnami/postgresql
The PVC will be created with the correct storage class, but with accessMode ReadWriteOnce. Therefore the pods crashed with a permission error issue:
EACCES permission denied, mkdir "/bitnami/postgresql/data"
Is this parameter ignored?
Thanks for your help in advance!
Cheers
Trying to connect using the Bunny client for ruby:
> c = Bunny.new "amqp://user:bitnami@peeking-scorpion-rabbitm:15672"
> c.start
E, [2016-08-09T23:18:08.991953 #21] ERROR -- #<Bunny::Session:0x7fe7520d6a10 user@peeking-scorpion-rabbitm:15672, vhost=/, addresses=[peeking-scorpion-rabbitm:15672]>: Got an exception when receiving data: IO timeout when reading 7 bytes (Timeout::Error)
Just installed the latest chart version and discovered that in a deployment of 1 master + 2 slaves (logging in to the slave service with Adminer or phpMyAdmin), the slaves do not have the same amount of data. The same can be seen when entering the pods and listing the /bitnami/mysql/data directory.
The first slave has all the data, the second slave has only some.
It seems that data doesn't get replicated consistently to the slaves for some reason.
Can somebody test and confirm the issue? This is a really critical issue.
Hi, is there a provision to change the number of Zookeeper nodes?
ETCDCTL_ENDPOINTS="{{ $etcdClientProtocol }}://{{ $etcdFullname }}-0.{{ $etcdHeadlessServiceName }}.default.svc.cluster.local:{{ $clientPort }}
should be
ETCDCTL_ENDPOINTS="{{ $etcdClientProtocol }}://{{ $etcdFullname }}-0.{{ $etcdHeadlessServiceName }}.{{ .Release.Namespace }}.svc.cluster.local:.......
$ helm install phpbb/
Then login, select "Your first forum"
Then "New Topic"
Subject: Test post
Submit
User is prompted to login again, cannot post.
I installed the mysql chart in minikube (win64) with the following YAML settings:
#bitnami mysql:
service:
type: NodePort
port: 3306
master:
persistence:
enabled: false
slave:
persistence:
enabled: false
Chart starts up fine.
I stop and start minikube, and the chart doesn't run anymore. Both master and slave pods are getting a CrashLoopBackOff (Terminated: Error).
The master pod has the following in the log:
Welcome to the Bitnami mysql container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mysql
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mysql/issues
nami INFO Initializing mysql
mysql INFO
mysql INFO ########################################################################
mysql INFO Installation parameters for mysql:
mysql INFO Persisted data and properties have been restored.
mysql INFO Any input specified will not take effect.
mysql INFO This installation requires no credentials.
mysql INFO ########################################################################
mysql INFO
nami INFO mysql successfully initialized
INFO  ==> Starting mysql...
INFO  ==> Starting mysqld_safe...
2018-08-03T13:27:55.114541Z mysqld_safe error: log-error set to '/opt/bitnami/mysql/logs/mysqld.log', however file don't exists. Create writable for user 'mysql'.
I think the root password is also different than on the previous run, which seems like strange behavior, but I don't know if it's related to the crashing issue.
I set wordpressBlogName to "Michelle's Blog". It gets set in the manifest after the values have been computed, however the actual blog site is still titled "User's Blog" and I was expecting it to be "Michelle's Blog".
We decided to comment out the optional keys in values.yaml, but I do not think this should affect the default values.
For example, in the code below it is not obvious that the commented-out jenkinsUser/jenkinsPassword contain the default values.
We should uncomment those lines, since they are the values that are actually applied to the installed chart.
Summarizing: commenting out optional values is fine, except when they act as default values that the user needs to know in order to use the application.
## Bitnami Jenkins image version
## ref: https://hub.docker.com/r/bitnami/jenkins/tags/
##
imageTag: 2.17-r0
## Specify a imagePullPolicy
## Defaults to 'Always' if imageTag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
# imagePullPolicy:
## User of the application
## ref: https://github.com/bitnami/bitnami-docker-jenkins#configuration
##
# jenkinsUser: user
## Application password
## ref: https://github.com/bitnami/bitnami-docker-jenkins#configuration
##
# jenkinsPassword: bitnami
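i.e. the defaults would ship uncommented so it is obvious they are what the installed chart actually uses:

```yaml
## User of the application
## ref: https://github.com/bitnami/bitnami-docker-jenkins#configuration
##
jenkinsUser: user

## Application password
## ref: https://github.com/bitnami/bitnami-docker-jenkins#configuration
##
jenkinsPassword: bitnami
```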
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
enervated-koala-mongodb 10.227.250.19 27017/TCP 3m
$ kubectl run ouch --tty -i --rm --image bitnami/mongodb --command -- /bin/bash
MongoDB shell version: 3.2.7
connecting to: 10.227.250.19/test
2016-08-09T22:40:37.598+0000 W NETWORK [thread1] Failed to connect to 10.227.250.19:27017 after 5000 milliseconds, giving up.
2016-08-09T22:40:37.598+0000 E QUERY [thread1] Error: couldn't connect to server 10.227.250.19:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:231:14
@(connect):1:6
exception: connect failed