
banzai-charts's People

Contributors

ahma, akijakya, asdwsda, baluchicken, bonifaido, colin014, ecsy, gracedo, jwillker, khernyo, kozmagabor, laci21, lantier, levydori, lpuskas, martonsereg, matyix, mvisonneau, pbalogh-sa, pepov, pregnor, qingkunl, sagikazarmark, sancyx, shanchunyang0919, stoader, tarokkk, tkircsi, turip, waynz0r


banzai-charts's Issues

Warnings with Helm 2.11.0

Hi,

Trying some of your charts to see if Pipeline is a good option for my use case.

I'm using Helm 2.11.0 and found warnings in some of your charts.

For instance with the pipeline chart:

2018/11/23 12:20:48 warning: destination for annotations is a table. Ignoring non-table value <nil>

The problem comes from pipeline/values.yaml:

ingress:
  enabled: false
  annotations:
    #traefik.frontend.rule.type: PathPrefix

Helm needs curly braces when a key has no values by default: annotations: {}

The same problem exists in mysql/values.yaml: initializationFiles: {}
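
A minimal sketch of the fix for the pipeline chart, keeping the example annotation commented out: declaring an explicit empty map makes Helm see a table rather than <nil>, which silences the warning:

ingress:
  enabled: false
  # explicit empty map avoids the "destination for annotations is a table" warning
  annotations: {}
    # traefik.frontend.rule.type: PathPrefix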

aws-kms-s3 configuration does not work in the vault chart

When setting values to use the aws-kms-s3 backend for unsealing Vault in the values.yaml file, vault-unsealer fails with the following error message:

level=fatal msg="error initializing vault: error testing keystore before init: MissingRegion: could not find region configuration"

Using the following configuration for unseal:

unsealer:
  image:
    repository: banzaicloud/bank-vaults
    tag: master
    pullPolicy: Always
  args: [
    "--mode",
    "aws-kms-s3",
    "--aws-kms-key-id",
    "arn:aws:kms:region:account:key/xxxx-xxxxx-xxxxxx-xxxxxx",
    "--aws-s3-bucket",
    "bucket-name",
    "--aws-s3-prefix",
    "prefix"
  ]

I have tried the following options as well, without success:

  1. Using the full arn:s3 name for the bucket
  2. Using only the key ID, without the arn:aws:kms:... prefix
  3. Enabling encryption with the right key on the bucket
  4. Disabling encryption on the bucket
  5. Various tags for both bank-vaults and the vault image

I've verified the IAM permissions on the instance running the deployment, and all are correct.
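
The MissingRegion error comes from the AWS SDK failing to resolve a region, not from the key or bucket settings. A hedged sketch of a possible workaround, assuming the chart can pass environment variables to the unsealer container (the env key below is an assumption, not a documented chart value); aws-sdk-go reads the standard AWS_REGION variable:

unsealer:
  env:
    # assumption: the chart forwards these to the vault-unsealer container
    AWS_REGION: eu-west-1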

Set up policy in Anchore

Create default policies

  • Define default policy bundles (5)
  • Create policy bundles
  • Add default policy bundles during anchore-policy-validator deployment

Chart updates break backwards compatibility

I have been using the productinfo chart, but after a new release it broke my currently deployed version:

Error: chart "productinfo" matching 0.4.5 not found in banzaicloud-stable index. (try 'helm repo update'). No chart version found for productinfo-0.4.5
chart "productinfo" matching 0.4.5 not found in banzaicloud-stable index. (try 'helm repo update'). No chart version found for productinfo-0.4.5

Drone: webhooks are registered with the wrong path, so they don't work

Currently Drone is exposed to the Internet with the following Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-drone
  namespace: default
  labels:
    release: {{ .Release.Name }}

  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/ssl-redirect: "false"
    traefik.frontend.rule.type: PathPrefixStrip

spec:
  rules:
    - http:
        paths:
          - path: /build
            backend:
              serviceName: {{ .Release.Name }}-drone
              servicePort: 80

If the host is https://pipeline.banzaicloud.com/, Drone webhooks are registered with the path https://pipeline.banzaicloud.com/hook. Instead, they should be registered as https://pipeline.banzaicloud.com/build/hook, since all of Drone is served behind /build.
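
Because PathPrefixStrip removes /build before a request reaches Drone, Drone cannot infer its external prefix and builds webhook URLs from its configured server address. A hedged sketch of one possible fix, assuming the chart exposes Drone's DRONE_HOST setting (Drone 0.x derives webhook URLs from it); the values key below is an assumption:

server:
  # assumption: mapped to the DRONE_HOST environment variable
  host: "https://pipeline.banzaicloud.com/build"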

Securing Spark RSS with TLS

Extend History Server chart to add / to log directory path

We should append a trailing / to the log directory path when the user has not set one.
Otherwise the History Server fails with:

	at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$startPolling(FsHistoryProvider.scala:214)
	at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:160)
	at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:156)
	at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:78)
	... 6 more
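
A minimal sketch of the fix in the chart template, assuming a logDirectory value (the key name and env var wiring are assumptions): trimming any existing trailing slash and then appending one guarantees exactly one /:

# templates/deployment.yaml (sketch)
- name: SPARK_HISTORY_OPTS
  # trimSuffix-then-append normalizes both "s3a://bucket/logs" and "s3a://bucket/logs/"
  value: '-Dspark.history.fs.logDirectory={{ .Values.logDirectory | trimSuffix "/" }}/'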

helm TiDB persistence failed

I tried the helm tidb chart.
Everything is OK when using the default settings:
helm install --name my-release banzaicloud-incubator/tidb

But it fails when persistence is enabled.
The command is:
helm install --name my-release --set tikv.persistence.enabled=true,tikv.persistence.accessMode=ReadWriteOnce,tikv.persistence.size=8Gi banzaicloud-incubator/tidb
I am using GlusterFS + heketi for the storage layer.

Information:

k get pv,pvc | grep tidb
pv/pvc-e431a826-001e-11e8-9529-021abc0baa27   9Gi        RWO            Delete           Bound     default/my-release-tidb         tidb  10m
pvc/my-release-tidb         Bound     pvc-e431a826-001e-11e8-9529-021abc0baa27   9Gi        RWO            tidb               10m

k get pods | grep tidb
my-release-tidb-db-5bfcb99b97-hvbln                              1/1       Running            0          14m
my-release-tidb-db-5bfcb99b97-jm2ds                              1/1       Running            0          14m
my-release-tidb-kv-68894f944-5fcfm                               1/1       Running            0          14m
my-release-tidb-kv-68894f944-jxm6b                               0/1       CrashLoopBackOff   7          14m
my-release-tidb-kv-68894f944-nhb8z                               0/1       CrashLoopBackOff   7          14m
my-release-tidb-pd-0                                             1/1       Running            0          14m
my-release-tidb-pd-1                                             1/1       Running            0          14m
my-release-tidb-pd-2                                             1/1       Running            0          14m

Only one kv pod runs; the other two error out.

k describe pod my-release-tidb-kv-68894f944-5fcfm
Normal SuccessfulMountVolume 19m kubelet, ranchernode1 MountVolume.SetUp succeeded for volume "pvc-e431a826-001e-11e8-9529-021abc0baa27"
k describe pod my-release-tidb-kv-68894f944-jxm6b
Normal SuccessfulMountVolume 17m kubelet, ranchernode1 MountVolume.SetUp succeeded for volume "pvc-e431a826-001e-11e8-9529-021abc0baa27"
k describe pod my-release-tidb-kv-68894f944-nhb8z
Normal SuccessfulMountVolume 19m kubelet, ranchernode2 MountVolume.SetUp succeeded for volume "pvc-e431a826-001e-11e8-9529-021abc0baa27"
k logs my-release-tidb-kv-68894f944-5fcfm
No ERROR.

k logs my-release-tidb-kv-68894f944-jxm6b
2018/01/23 09:43:57.020 tikv-server.rs:157: [ERROR] lock "/data/tikv" failed, maybe another instance is using this directory.

k logs my-release-tidb-kv-68894f944-nhb8z
2018/01/23 09:43:45.272 tikv-server.rs:157: [ERROR] lock "/data/tikv" failed, maybe another instance is using this directory.

Is the problem that the second and third pods mount the same storage?
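
That does appear to be the case: all three kv pods mount the same PVC (pvc-e431a826-...), so two of them fail to take the /data/tikv lock. A Deployment with a single PVC shares one volume across all replicas; per-replica volumes require a StatefulSet with volumeClaimTemplates. A hedged sketch of that shape (not the chart's actual manifest; names and image are placeholders):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-release-tidb-kv
spec:
  serviceName: my-release-tidb-kv
  replicas: 3
  selector:
    matchLabels:
      app: tikv
  template:
    metadata:
      labels:
        app: tikv
    spec:
      containers:
      - name: tikv
        image: pingcap/tikv
        volumeMounts:
        - name: data
          mountPath: /data/tikv
  # each replica gets its own PVC: data-my-release-tidb-kv-0, -1, -2
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 8Gi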

Unable to add Helm repo; problem with helm

While trying to run the following (with a trailing / as well):

helm repo add fb-1-stable https://s3-eu-west-1.amazonaws.com/kubernetes-charts.banzaicloud.com/branch/fb-1

I get this error:

Error: Looks like "https://s3-eu-west-1.amazonaws.com/kubernetes-charts.banzaicloud.com/branch/fb-1/" is not a valid chart repository or cannot be reached: Failed to fetch https://s3-eu-west-1.amazonaws.com/kubernetes-charts.banzaicloud.com/branch/fb-1/index.yaml : 403 Forbidden

ingress: Drone-targeted requests are reaching Pipeline

[GIN] 2018/07/11 - 08:14:26 | 404 |   14.698924ms |        10.4.2.1 | GET      /stream/logs/banzaicloud/pipeline/4/1
[GIN] 2018/07/11 - 08:23:21 | 404 |   11.594932ms |      10.128.0.2 | GET      /stream/logs/banzaicloud/pipeline/4/1
[GIN] 2018/07/11 - 08:23:46 | 404 |   11.685056ms |      10.128.0.2 | GET      /stream/logs/banzaicloud/pipeline/4/1
[GIN] 2018/07/11 - 08:23:52 | 404 |   11.831573ms |      10.128.0.2 | GET      /stream/logs/banzaicloud/pipeline/4/1

[FR] Make update strategy configurable in vault chart

Currently, the vault chart uses the default update strategy (RollingUpdate in current k8s). This prevents Helm releases from upgrading smoothly: RollingUpdate waits forever for the non-leader pods to become "ready", so the upgrade fails. Workaround: use the Recreate update strategy.

Please make the update strategy configurable via values.yaml for those who don't use the vault operator yet. Thank you!
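
A hedged sketch of what this could look like; the updateStrategy key is an assumption, not an existing chart value:

# values.yaml (sketch)
updateStrategy:
  type: Recreate

# templates/deployment.yaml (sketch)
spec:
  strategy:
{{ toYaml .Values.updateStrategy | indent 4 }}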

Failed to install tidb

Error: file "banzaicloud-incubator/tidb" not found

Curl the endpoint http://kubernetes-charts-incubator.banzaicloud.com/ and you'll notice that the urls entry is missing the h in the protocol (ttp:// instead of http://). I think that's what's causing it?

    urls:
    - ttp://kubernetes-charts-incubator.banzaicloud.com/tidb-0.0.1.tgz

Feature Request: External Configuration Documentation


name: Documentation of Adding Authentication Backends via External Config


Is your feature request related to a problem? Please describe.
Using the External Configuration example provided at https://banzaicloud.github.io/bank-vaults/, I am unable to get Vault to authenticate with GitHub via the UI. Using the bank-vaults configure command with the example file, with the values and tokens set for our organisation, was likewise unsuccessful.

Describe the solution you'd like
A working example for configuring GitHub auth backend and an LDAP backend.

Describe alternatives you've considered
Manual configuration is possible, but it would be quite useful to take advantage of bank-vaults' functionality to stand up a Vault instance in k8s that is hooked into an authentication backend at boot.

Additional context
A sample of the config used to set up GitHub, stolen shamelessly from the docs above:

auth:
  - type: github
    config:
      organization: xxxxxxx
    map:
      teams:
        dev: dev
      users:
        bertiew: root

When deployed with this config, attempts to authenticate with GitHub via the UI produces an error about a missing token.

[vault] Should configmap be a secret?

When using the s3 backend or aws auth, the config contains AWS access keys, which are preferably stored in a Secret rather than a ConfigMap.

Even though Secrets are only base64-encoded, access to them can be restricted through RBAC.
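
For illustration, a minimal sketch of such a restriction; the vault-config and vault names are assumptions:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vault-config-reader
rules:
# only the named Secret is readable, and only via get
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["vault-config"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: vault-config-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: vault-config-reader
subjects:
- kind: ServiceAccount
  name: vault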

Deployment in hpa-operator chart fails due to missing RBAC

Expected:
Run: helm install --name hpa-operator --namespace utility banzaicloud-stable/hpa-operator
The hpa-operator pod should start without any errors.

Actual:
The following errors are found:

time="2018-07-30T09:29:11Z" level=info msg="Go Version: go1.9.7"
time="2018-07-30T09:29:11Z" level=info msg="Go OS/Arch: linux/amd64"
time="2018-07-30T09:29:11Z" level=info msg="operator-sdk Version: 0.0.5+git"
ERROR: logging before flag.Parse: E0730 09:29:11.751967       1 reflector.go:205] github.com/banzaicloud/hpa-operator/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:80: Failed to list *unstructured.Unstructured: statefulsets.apps is forbidden: User "system:serviceaccount:utility:default" cannot list statefulsets.apps at the cluster scope
ERROR: logging before flag.Parse: E0730 09:29:11.752076       1 reflector.go:205] github.com/banzaicloud/hpa-operator/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:80: Failed to list *unstructured.Unstructured: deployments.apps is forbidden: User "system:serviceaccount:utility:default" cannot list deployments.apps at the cluster scope
ERROR: logging before flag.Parse: E0730 09:29:12.755533       1 reflector.go:205] github.com/banzaicloud/hpa-operator/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:80: Failed to list *unstructured.Unstructured: deployments.apps is forbidden: User "system:serviceaccount:utility:default" cannot list deployments.apps at the cluster scope
ERROR: logging before flag.Parse: E0730 09:29:12.755542       1 reflector.go:205] github.com/banzaicloud/hpa-operator/vendor/github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:80: Failed to list *unstructured.Unstructured: statefulsets.apps is forbidden: User "system:serviceaccount:utility:default" cannot list statefulsets.apps at the cluster scope

Resolution:

Added the service account name to the pod spec of the deployment, at

spec.template.spec
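
A minimal sketch of that resolution in the chart's deployment template; the service account name is an assumption:

# templates/deployment.yaml (sketch)
spec:
  template:
    spec:
      # assumption: a ServiceAccount with matching RBAC rules is created elsewhere in the chart
      serviceAccountName: {{ .Release.Name }}-hpa-operator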

[chart-build] Break the build on missing dependencies

In some situations, for example when the requirements file is incorrect, the helm dep build command fails; however, this does not make the CircleCI chart build fail, which leaves us with missing charts:

./pipeline-cp
Error: found in requirements.yaml, but missing in charts/ directory: grafana, traefik, prometheus, pipeline, pipeline-ui, drone, telescopes, productinfo

In the build for loop, we should exit on errors immediately.
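
A hedged sketch of the idea in the CI script; the chart directory layout is an assumption:

#!/usr/bin/env bash
set -euo pipefail    # any failing command aborts the build

for chart in */; do
  # propagate helm dep build failures to CircleCI instead of swallowing them
  helm dep build "$chart"
done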

Grafana not able to start on pipeline-cp due to PVC config problems

Grafana logs:

chown: changing ownership of '/var/lib/grafana/dashboards/..data': Read-only file system
chown: changing ownership of '/var/lib/grafana/dashboards/..2018_03_19_17_25_51.132647244': Read-only file system
chown: changing ownership of '/var/lib/grafana/dashboards': Read-only file system

[vault] drop default file storage

The vault chart's default values set vault.config.storage to file; when you add an s3 configuration, you end up with two storage backends configured.

Helm is supposed to drop a key if you set it to null (helm/helm#2648), but this didn't seem to work. Is having two storage backends configured an issue, and is there a way to delete default keys?
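
For reference, a sketch of the override that was supposed to work per helm/helm#2648 (the s3 values are placeholders):

vault:
  config:
    storage:
      # intended to delete the chart's default file backend; did not take effect
      file: null
      s3:
        bucket: my-vault-bucket
        region: eu-west-1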
