banzaicloud / banzai-charts
Curated list of Banzai Cloud Helm charts used by the Pipeline Platform
License: Apache License 2.0
Vault should run in a StatefulSet, as in the banzaicloud/vault-operator (banzaicloud/bank-vaults@b0b2315#diff-28590649c56bf5f88fc16d81abf6c197), to handle restarts and HA.
Hi,
Trying some of your charts to see if Pipeline is a good option for my use case.
I'm using Helm 2.11.0 and found warnings in some of your charts.
For instance, with the pipeline chart:
2018/11/23 12:20:48 warning: destination for annotations is a table. Ignoring non-table value <nil>
The problem comes from pipeline/values.yaml:
ingress:
  enabled: false
  annotations:
    # traefik.frontend.rule.type: PathPrefix
Helm needs braces if there is no value by default: annotations: {}
The same problem exists in mysql/values.yaml: initializationFiles: {}
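A minimal sketch of the fix this implies for pipeline/values.yaml, assuming the intended default is an empty map so Helm's table merge no longer warns:

ingress:
  enabled: false
  # An empty map instead of a nil value avoids the
  # "destination for annotations is a table" warning when users override it.
  annotations: {}
  # annotations:
  #   traefik.frontend.rule.type: PathPrefix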
Option to expose the Spark Resource Staging Server endpoint via Ingress so that it is reachable from outside the cluster as well. This should be optional, as there are use cases where the Spark RSS needs to be reachable only in-cluster.
When setting values in values.yaml to use the aws-kms-s3 backend for unsealing Vault, vault-unsealer fails with the following error message:
level=fatal msg="error initializing vault: error testing keystore before init: MissingRegion: could not find region configuration"
Using the following configuration for unseal:
unsealer:
  image:
    repository: banzaicloud/bank-vaults
    tag: master
    pullPolicy: Always
  args: [
    "--mode",
    "aws-kms-s3",
    "--aws-kms-key-id",
    "arn:aws:kms:region:account:key/xxxx-xxxxx-xxxxxx-xxxxxx",
    "--aws-s3-bucket",
    "bucket-name",
    "--aws-s3-prefix",
    "prefix"
  ]
Have tried the following options as well without success:
I've verified the IAM permissions on the instance running the deployment, and all are correct.
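The MissingRegion message comes from the AWS Go SDK finding no region configuration at all. One way to supply it is an AWS_REGION environment variable on the unsealer container; a sketch follows, assuming the chart allows extra environment variables (the unsealer.env key is an assumption, not a confirmed chart value):

unsealer:
  env:
    # The AWS SDK for Go picks up AWS_REGION when no region is configured
    # elsewhere, which avoids the MissingRegion error while testing the keystore.
    - name: AWS_REGION
      value: eu-west-1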
incubator makes no sense anymore.
Create default policies
helm install banzaicloud-incubator/spot-termination-exporter
Error: failed to download "banzaicloud-incubator/spot-termination-exporter" (hint: running `helm repo update` may help)
I followed the steps, including helm repo update, and got the above error.
This article https://banzaicloud.com/blog/hands-on-thanos/ refers to a soon-to-be-released thanos chart, but that chart is not yet available in the banzai-stable repo.
When is this chart going to be released?
Re #36 we should refactor/remove unused code for helm init and ingress
I have been using the productinfo, but after a new release it broke my currently released version:
Error: chart "productinfo" matching 0.4.5 not found in banzaicloud-stable index. (try 'helm repo update'). No chart version found for productinfo-0.4.5
chart "productinfo" matching 0.4.5 not found in banzaicloud-stable index. (try 'helm repo update'). No chart version found for productinfo-0.4.5
stable/pipeline/templates/configmap.yaml: tokensigningkey = "{{ randAlphaNum 16 }}"
This makes existing tokens unusable after a Helm upgrade, since the value changes from upgrade to upgrade.
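One possible direction, sketched here rather than taken from the chart, is to let an explicit value override the random default so upgrades keep the key (the pipeline.tokenSigningKey values key is hypothetical):

# stable/pipeline/templates/configmap.yaml - sketch
tokensigningkey = "{{ .Values.pipeline.tokenSigningKey | default (randAlphaNum 16) }}"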
Depends on: banzaicloud/pipeline#579
Not changing basePath results in the wrong livenessProbe and readinessProbe path:
 livenessProbe:
   httpGet:
-    path: /productinfo/status
+    path: //status
     port: http
 readinessProbe:
   httpGet:
-    path: /productinfo/status
+    path: //status
     port: http
Currently Drone is exposed to the Internet with the following Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-drone
  namespace: default
  labels:
    release: {{ .Release.Name }}
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/ssl-redirect: "false"
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
    - http:
        paths:
          - path: /build
            backend:
              serviceName: {{ .Release.Name }}-drone
              servicePort: 80
If the host is https://pipeline.banzaicloud.com/, Drone webhooks are registered with the path https://pipeline.banzaicloud.com/hook, but they should instead be registered with https://pipeline.banzaicloud.com/build/hook, since all of Drone is served behind /build.
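In Drone 0.8 the server address used to build webhook URLs comes from the DRONE_HOST setting, so one direction (a sketch only; the chart key below is hypothetical, and whether Drone accepts a path suffix here should be verified) would be to include the /build prefix there:

drone:
  server:
    env:
      # Hypothetical values key; the idea is that Drone generates webhook URLs
      # from this address, including the /build path prefix.
      DRONE_HOST: https://pipeline.banzaicloud.com/build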
Sometimes helm install fails with a "CRD not ready" error in these charts:
Details of securing Spark RSS can be found here: https://github.com/apache-spark-on-k8s/spark/blob/branch-2.2-kubernetes/docs/running-on-kubernetes.md#securing-the-resource-staging-server-with-tls
https://github.com/apache-spark-on-k8s/spark/blob/branch-2.2-kubernetes/conf/kubernetes-resource-staging-server.yaml#L56
https://github.com/apache-spark-on-k8s/spark/blob/branch-2.2-kubernetes/conf/kubernetes-resource-staging-server.yaml#L64-L65
The content for spark.ssl.kubernetes.resourceStagingServer.keyPem and spark.ssl.kubernetes.resourceStagingServer.serverCertPem is generated up front as a TLS secret and stored in Vault.
Hi @bonifaido there are quite a few changes needed to bring your OpenFaaS chart into sync with master. Please see upstream for more details - https://github.com/openfaas/faas-netes/tree/master/chart/openfaas
Thanks,
Alex
There are too many env vars in the Deployment. We should fall back to a configuration file.
I wonder why HDFS is not named as an option in the https://github.com/banzaicloud/banzai-charts/tree/master/stable/spark-hs readme.
I'd like to store history logs and app logs in HDFS, as well as process data that already resides in HDFS. Is this possible at the current stage of the project?
We should support appending a / to the end of the logDirectory path if it's not set by the user.
Otherwise the HS will fail with:
at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$startPolling(FsHistoryProvider.scala:214)
at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:160)
at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:156)
at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:78)
... 6 more
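A sketch of one way to normalize the path in the chart template (the logDirectory values key name is an assumption, and this is not the chart's current template):

# Ensure the log directory always ends with exactly one trailing slash.
-Dspark.history.fs.logDirectory={{ .Values.logDirectory | trimSuffix "/" }}/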
I'm trying to use the tidb Helm chart.
Everything is OK when using the default settings:
helm install --name my-release banzaicloud-incubator/tidb
But it fails when persistence is enabled.
The command is:
helm install --name my-release --set tikv.persistence.enabled=true,tikv.persistence.accessMode=ReadWriteOnce,tikv.persistence.size=8Gi banzaicloud-incubator/tidb
Using GlusterFS + heketi for the storage layer.
Information:
k get pv,pvc | grep tidb
pv/pvc-e431a826-001e-11e8-9529-021abc0baa27 9Gi RWO Delete Bound default/my-release-tidb tidb 10m
pvc/my-release-tidb Bound pvc-e431a826-001e-11e8-9529-021abc0baa27 9Gi RWO tidb 10m
k get pods | grep tidb
my-release-tidb-db-5bfcb99b97-hvbln 1/1 Running 0 14m
my-release-tidb-db-5bfcb99b97-jm2ds 1/1 Running 0 14m
my-release-tidb-kv-68894f944-5fcfm 1/1 Running 0 14m
my-release-tidb-kv-68894f944-jxm6b 0/1 CrashLoopBackOff 7 14m
my-release-tidb-kv-68894f944-nhb8z 0/1 CrashLoopBackOff 7 14m
my-release-tidb-pd-0 1/1 Running 0 14m
my-release-tidb-pd-1 1/1 Running 0 14m
my-release-tidb-pd-2 1/1 Running 0 14m
Only one kv pod is running; the other two got errors.
k describe pod my-release-tidb-kv-68894f944-5fcfm
Normal SuccessfulMountVolume 19m kubelet, ranchernode1 MountVolume.SetUp succeeded for volume "pvc-e431a826-001e-11e8-9529-021abc0baa27"
k describe pod my-release-tidb-kv-68894f944-jxm6b
Normal SuccessfulMountVolume 17m kubelet, ranchernode1 MountVolume.SetUp succeeded for volume "pvc-e431a826-001e-11e8-9529-021abc0baa27"
k describe pod my-release-tidb-kv-68894f944-nhb8z
Normal SuccessfulMountVolume 19m kubelet, ranchernode2 MountVolume.SetUp succeeded for volume "pvc-e431a826-001e-11e8-9529-021abc0baa27"
k logs my-release-tidb-kv-68894f944-5fcfm
No ERROR.
k logs my-release-tidb-kv-68894f944-jxm6b
2018/01/23 09:43:57.020 tikv-server.rs:157: [ERROR] lock "/data/tikv" failed, maybe another instance is using this directory.
k logs my-release-tidb-kv-68894f944-nhb8z
2018/01/23 09:43:45.272 tikv-server.rs:157: [ERROR] lock "/data/tikv" failed, maybe another instance is using this directory.
Is the problem that the second and third pods mount the same storage?
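From the output above there is only a single PVC (my-release-tidb), so all three kv pods bind the same volume and TiKV's directory lock fails in two of them. One direction, sketched here and not the chart's current template, is to run tikv as a StatefulSet with volumeClaimTemplates so each replica gets its own PersistentVolume:

# Sketch only: per-replica storage for tikv via a StatefulSet spec.
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 8Gi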
While trying to run (with a trailing / at the end as well):
helm repo add fb-1-stable https://s3-eu-west-1.amazonaws.com/kubernetes-charts.banzaicloud.com/branch/fb-1
I get the error:
Error: Looks like "https://s3-eu-west-1.amazonaws.com/kubernetes-charts.banzaicloud.com/branch/fb-1/" is not a valid chart repository or cannot be reached: Failed to fetch https://s3-eu-west-1.amazonaws.com/kubernetes-charts.banzaicloud.com/branch/fb-1/index.yaml : 403 Forbidden
Automatic install of the Grafana dashboard referenced here: https://grafana.com/dashboards/7752
For example:
http://sagikazarmark-test.sagikazarmark.beta.banzaicloud.io/grafana/
redirects to:
https://sagikazarmark-test.sagikazarmark.beta.banzaicloud.io/
Suspected culprit: https://github.com/banzaicloud/banzai-charts/blob/master/pipeline-cluster-monitor/values.yaml#L308
[GIN] 2018/07/11 - 08:14:26 | 404 | 14.698924ms | 10.4.2.1 | GET /stream/logs/banzaicloud/pipeline/4/1
[GIN] 2018/07/11 - 08:23:21 | 404 | 11.594932ms | 10.128.0.2 | GET /stream/logs/banzaicloud/pipeline/4/1
[GIN] 2018/07/11 - 08:23:46 | 404 | 11.685056ms | 10.128.0.2 | GET /stream/logs/banzaicloud/pipeline/4/1
[GIN] 2018/07/11 - 08:23:52 | 404 | 11.831573ms | 10.128.0.2 | GET /stream/logs/banzaicloud/pipeline/4/1
Currently, the vault chart has the default update strategy (RollingUpdate in current k8s). This prevents Helm releases from upgrading smoothly: RollingUpdate waits forever for the non-leader pods to become "ready", so the upgrade fails. Workaround: use the Recreate update strategy.
Please make the update strategy configurable via values.yaml for those who don't use the vault operator yet. Thank you!
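A sketch of what a configurable strategy could look like (the updateStrategy values key is hypothetical, not the chart's current API):

# values.yaml
updateStrategy: Recreate

# templates/deployment.yaml
spec:
  strategy:
    type: {{ .Values.updateStrategy | default "RollingUpdate" }}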
Error: file "banzaicloud-incubator/tidb" not found
Curl the endpoint http://kubernetes-charts-incubator.banzaicloud.com/
You'll notice that urls is missing an h in the protocol (it reads ttp:// instead of http://). I think that's what's causing it?
urls:
  - ttp://kubernetes-charts-incubator.banzaicloud.com/tidb-0.0.1.tgz
name: Documentation of Adding Authentication Backends via External Config
Is your feature request related to a problem? Please describe.
Using the External Configuration Example provided at https://banzaicloud.github.io/bank-vaults/, we were unable to get Vault to authenticate with GitHub via the UI. Using the bank-vaults configure command with the example file, with values and tokens set for our organisation, was likewise unsuccessful.
Describe the solution you'd like
A working example for configuring GitHub auth backend and an LDAP backend.
Describe alternatives you've considered
A manual configuration is possible, but it would be quite useful to take advantage of bank-vaults functionality to stand up a Vault instance in k8s that is hooked into an authentication backend at boot.
Additional context
A sample of the config used to set up GitHub, taken from the example above:
auth:
  - type: github
    config:
      organization: xxxxxxx
    map:
      teams:
        dev: dev
      users:
        bertiew: root
When deployed with this config, attempts to authenticate with GitHub via the UI produces an error about a missing token.
When using the s3 backend or aws auth, the config contains AWS access keys, which are preferably stored in a Secret rather than a ConfigMap.
Even though Secrets are only base64 encoded, access to them can be restricted through RBAC.
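A sketch of the direction suggested here (the Secret name and layout are illustrative, not the chart's current behaviour):

apiVersion: v1
kind: Secret
metadata:
  name: vault-aws-credentials
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: xxxxxxxx
  AWS_SECRET_ACCESS_KEY: xxxxxxxx

# In the Vault container spec, load the keys as environment variables
# instead of rendering them into the ConfigMap:
envFrom:
  - secretRef:
      name: vault-aws-credentials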
Add support to the Spark History Server to be able to use a user name and password stored in a K8s secret (originating from secrets stored in Vault).
Expected:
Run: helm install --name hpa-operator --namespace utility banzaicloud-stable/hpa-operator
The hpa-operator pod should start without any errors.
Actual:
The following errors were found:
time="2018-07-30T09:29:11Z" level=info msg="Go Version: go1.9.7"
time="2018-07-30T09:29:11Z" level=info msg="Go OS/Arch: linux/amd64"
time="2018-07-30T09:29:11Z" level=info msg="operator-sdk Version: 0.0.5+git"
ERROR: logging before flag.Parse: E0730 09:29:11.751967 1 reflector.go:205] github.com/banzaicloud/hpa-operator/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:80: Failed to list *unstructured.Unstructured: statefulsets.apps is forbidden: User "system:serviceaccount:utility:default" cannot list statefulsets.apps at the cluster scope
ERROR: logging before flag.Parse: E0730 09:29:11.752076 1 reflector.go:205] github.com/banzaicloud/hpa-operator/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:80: Failed to list *unstructured.Unstructured: deployments.apps is forbidden: User "system:serviceaccount:utility:default" cannot list deployments.apps at the cluster scope
ERROR: logging before flag.Parse: E0730 09:29:12.755533 1 reflector.go:205] github.com/banzaicloud/hpa-operator/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:80: Failed to list *unstructured.Unstructured: deployments.apps is forbidden: User "system:serviceaccount:utility:default" cannot list deployments.apps at the cluster scope
ERROR: logging before flag.Parse: E0730 09:29:12.755542 1 reflector.go:205] github.com/banzaicloud/hpa-operator/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:80: Failed to list *unstructured.Unstructured: statefulsets.apps is forbidden: User "system:serviceaccount:utility:default" cannot list statefulsets.apps at the cluster scope
Resolution: added the service account name to the pod spec in the deployment at spec.template.spec.
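A sketch of the resolution described above (the service account name is hypothetical; it needs RBAC bindings that allow listing deployments.apps and statefulsets.apps at cluster scope):

# templates/deployment.yaml
spec:
  template:
    spec:
      serviceAccountName: hpa-operator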
In some situations, for example if the requirements file is not correct, the helm dep build command fails; however, this doesn't make the CircleCI chart build fail, which leaves us with missing charts:
./pipeline-cp
Error: found in requirements.yaml, but missing in charts/ directory: grafana, traefik, prometheus, pipeline, pipeline-ui, drone, telescopes, productinfo
In the final iteration of the build for loop, we should exit on errors immediately.
Not all instances are spot instances, and this DaemonSet should only be scheduled on nodes flagged as spot instances.
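A sketch of one way to restrict scheduling (the node label is hypothetical; spot nodes would have to be labeled accordingly when they join the cluster):

spec:
  template:
    spec:
      nodeSelector:
        node.banzaicloud.io/spot: "true"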
Grafana logs:
chown: changing ownership of '/var/lib/grafana/dashboards/..data': Read-only file system
chown: changing ownership of '/var/lib/grafana/dashboards/..2018_03_19_17_25_51.132647244': Read-only file system
chown: changing ownership of '/var/lib/grafana/dashboards': Read-only file system
Enable setting up the following Anchore config params in the pipeline chart:
[anchore]
enabled = true
dialect = "postgres"
host = "localhost"
port = 5432
user = "anchore-db-user"
password = "xxxxxxxxxx"
dbname = "anchore"
endPoint = "https://beta.dev.banzaicloud.com"
The vault chart's default values set vault.config.storage to file - when adding an s3 configuration you end up with two storage backends configured.
Helm is supposed to drop a key if you set it to null (helm/helm#2648), but this didn't seem to work. Is it a problem to have two storage backends configured, and is there a way to delete default keys?
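A sketch of the kind of override this implies; per helm/helm#2648 setting the default backend to null should drop it, although as reported above it did not seem to work here (the s3 values are illustrative):

vault:
  config:
    storage:
      # Intended to remove the default file backend on override.
      file: null
      s3:
        bucket: my-vault-bucket
        region: eu-west-1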