
helm-operator-get-started's Issues

Timeout error when performing step to install flux

I'm blindly following the walkthrough, and when I get to this step I get a timeout error. I replaced the repo URL with my forked repo, and I'm using my company's AWS EKS cluster (which I set up myself), running Kubernetes v1.18.

$ helm upgrade -i flux fluxcd/flux --wait \
--namespace fluxcd \
--set git.url=git@github.com:zillag/helm-operator-get-started
Release "flux" does not exist. Installing it now.
Error: timed out waiting for the condition

What do I need to do to proceed?
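
For reference, this is roughly how I have been trying to debug it so far (just a sketch; I'm assuming the chart labels the Flux pod with app=flux):

# check whether the flux pod ever becomes Ready, and why not
kubectl -n fluxcd get pods
kubectl -n fluxcd describe pod -l app=flux
kubectl -n fluxcd logs deployment/flux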

Should not need Helm to install Helm Operator

I do not think it is a good prerequisite to make the user install Helm just to install the helm-operator. I understand the irony of that statement, but setup should be as simple as possible. Put simply, Helm is completely unnecessary here (not that it is difficult to install, just unnecessary).
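
To illustrate what I mean, something along these lines would be enough (a sketch, assuming the plain manifests in the deploy/ folders of the fluxcd repositories are still published):

# install Flux and the Helm Operator from plain manifests, no Helm needed
git clone https://github.com/fluxcd/flux
kubectl apply -f flux/deploy/
git clone https://github.com/fluxcd/helm-operator
kubectl apply -f helm-operator/deploy/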

Dynamic Environments Support

Hello @stefanprodan, we have a new question, please (we are a consulting company trying to promote GitOps and Flux among our clients). I posted this question on the Weave Slack but haven't received any feedback yet, so maybe you can help us.

We are testing the Weave Flux Helm Operator and it is working very well for a "static" configuration: the namespaces (prod, qa, etc.) are defined in YAML files, and each release is also configured in YAML.

Our question is: is there any way to support "dynamic" environments? Suppose a developer creates a new branch named "app-hot-fix" and pushes some commits to it. The CI process builds new Docker images from that branch. It would be awesome to automatically create a new namespace ("app-hot-fix") for that topic branch and deploy the new Docker image there. As far as I know, to do that I need to create a new YAML file to define the namespace, another YAML file to define the new release, and so on. This configuration process is manual.

But since that branch will only exist for a few hours, it would be great if that process could be performed automatically by Flux (creating the new namespace and the release). Something like "GitOps by Convention" (a Kubernetes namespace per Git branch). Are there any features (or tools) that could be used to automagically set up a new release on the fly, based only on the existing Git branches?
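
To make the idea concrete, this is the kind of pair of manifests we imagine a CI job generating per branch, e.g. by running a template through envsubst (just a sketch; the chart name "app", the image repository and the tag annotation key are made-up examples):

---
apiVersion: v1
kind: Namespace
metadata:
  name: ${BRANCH}
---
apiVersion: helm.integrations.flux.weave.works/v1alpha2
kind: FluxHelmRelease
metadata:
  name: app-${BRANCH}
  namespace: ${BRANCH}
  annotations:
    flux.weave.works/automated: "true"
    # example filter: only follow images built from this branch
    flux.weave.works/tag.app: glob:${BRANCH}-*
  labels:
    chart: app
spec:
  chartGitPath: app
  releaseName: app-${BRANCH}
  values:
    image: registry.example.com/app:${BRANCH}-latest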

Thank you very much in advance!

Microservices best practices with HelmOperator

I have an architectural question regarding flux and helm. I'm following the https://github.com/fluxcd/helm-operator-get-started/ tutorial.

In my project, I have tens of microservices which cannot live without each other. What would be your suggestion? Should I create as many charts and HelmReleases as I have microservices, or just create one generic chart that contains the Helm YAML files (deployments, services, etc.) for all of them?
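
For example, I understand the "one generic chart" option as an umbrella chart whose requirements.yaml pulls in every microservice as a subchart (a sketch; the service names and repository URL are made up):

# requirements.yaml of a hypothetical umbrella chart
dependencies:
  - name: orders
    version: 0.1.0
    repository: https://charts.example.com
  - name: payments
    version: 0.1.0
    repository: https://charts.example.com
  - name: users
    version: 0.1.0
    repository: https://charts.example.com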

How to specify ordering/dependencies for different releases

Hi Team,

We are doing a PoC for installing essential services when bootstrapping our Kubernetes cluster. We have certain scenarios where some releases should be deployed only after another release has been deployed successfully.

For example, we install Sealed Secrets, and another chart uses it to obtain its secrets, so part of our requirement is that Sealed Secrets must be installed before the other releases.

The other scenario is that we want to install CRDs before installing charts. If we have everything committed on GitHub and we bootstrap our cluster through Flux, how does Flux ensure that the CRD is installed first and that the chart is installed only after it?
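
For the CRD case, what we had in mind is something like Helm 2's crd-install hook, so the CRD is created before the rest of the chart is rendered (a sketch; the CRD name is made up):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
  annotations:
    # Helm 2.x hook: install this manifest before the rest of the release
    "helm.sh/hook": crd-install
spec:
  group: example.com
  version: v1
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced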

Thanks,
Hussain

Docker Images Filtering Question

Hello Stefan, first of all, thanks for your work. We are testing the Flux Helm operator and it works very well. We are using the following image: quay.io/weaveworks/helm-operator:0.2.1. This GitHub issue is actually more of a question than a bug report (I think).

This is our scenario: we have created two namespaces, prod and qa (under "namespaces"). Then, under the "releases" directories, we have created two releases of the same application (one per environment).

This is the "prod" release (I'm obfuscating the AWS account number, we are using ECR):

---
apiVersion: helm.integrations.flux.weave.works/v1alpha2
kind: FluxHelmRelease
metadata:
  name: hello-prod
  namespace: prod
  annotations:
    flux.weave.works/automated: "true"
    flux.weave.works/tag.hello: glob:master-*
  labels:
    chart: hello
spec:
  chartGitPath: hello
  releaseName: hello-prod
  values:
    image: 999999999999.dkr.ecr.us-west-2.amazonaws.com/eks-bgl_dev:master-v1
    persistence:
      enabled: false

This is the "qa" release:

---
apiVersion: helm.integrations.flux.weave.works/v1alpha2
kind: FluxHelmRelease
metadata:
  name: hello-qa
  namespace: qa
  annotations:
    flux.weave.works/automated: "true"
    flux.weave.works/tag.hello: glob:qa-*
  labels:
    chart: hello
spec:
  chartGitPath: hello
  releaseName: hello-qa
  values:
    image: 999999999999.dkr.ecr.us-west-2.amazonaws.com/eks-bgl_dev:qa-v1
    persistence:
      enabled: false

So, manual upgrades (editing the image version in the YAML) work like a charm. Now we are trying to test the automated upgrades. For that reason we have added the automated: "true" annotation and the tag.hello glob filters shown above.

What we can see is that Flux is not applying these filters: if we push a Docker image tagged "master-v2", both the "prod" and "qa" deployments are upgraded automatically, and if we push a new Docker image tagged "qa-v2", both environments are also upgraded. What we want is to upgrade only one of the two environments at a time, based on these annotations.

On the Flux logs I see something like this (some info obfuscated):

ts=2018-09-13T15:16:01.833847609Z caller=images.go:79 component=sync-loop service=test:fluxhelmrelease/hello container=chart-image [...] pattern=glob:* [...] info="added update to automation run" new=[...] reason="latest v1 (2018-09-13 11:37:08.020861429 +0000 UTC) > current v5 (2018-09-13 11:37:08.020861429 +0000 UTC)"

I see that it says "pattern=glob:*". I guess this is where I should see my branch filter instead, right?

I have been trying to emulate what you did in https://github.com/stefanprodan/gitops-helm/blob/master/releases/dev/podinfo.yaml, for example. Do you think my configuration is wrong? Or could it be a bug?
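
One thing I notice is that the log line reports container=chart-image, so maybe the annotation suffix has to match that container name rather than the chart name? This is what I would try next in the "prod" release (just a guess on my side):

metadata:
  name: hello-prod
  namespace: prod
  annotations:
    flux.weave.works/automated: "true"
    # suffix matches the container name reported by the sync loop
    flux.weave.works/tag.chart-image: glob:master-*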

Thank you very much in advance.

Cheers,
Leo

Question: How to handle sub charts in a HelmRelease?

I'm trying to add Istio with a HelmRelease. The HelmRelease gets created but it fails along the way. The operator doesn't give any error, but when getting the output of the HelmRelease I find this:

"message": "could not update dependencies in /tmp/flux-working725546946/charts/istio: Error: no repository definition for , , , , , , , , , , , , , , . Please add them via 'helm repo add'\nNote that repositories must be URLs or aliases. For example, to refer to the stable\nrepository, use "https://kubernetes-charts.storage.googleapis.com/\" or "@stable" instead of\n"stable". Don't forget to add the repo, too ('helm repo add').\n",

chart install fails with "cluster-admin not found" error

Running minikube v0.25.2, helm 2.9.0:

ncc@mal:~$ minikube start --kubernetes-version v1.10.0
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

ncc@mal:~$ kubectl create clusterrolebinding tiller-cluster-rule \
>     --clusterrole=cluster-admin \
>     --serviceaccount=kube-system:tiller 
clusterrolebinding.rbac.authorization.k8s.io "tiller-cluster-rule" created

ncc@mal:~$ helm init --skip-refresh --upgrade --service-account tiller
$HELM_HOME has been configured at /home/ncc/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

ncc@mal:~$ helm repo add sp https://stefanprodan.github.io/k8s-podinfo
"sp" has been added to your repositories

ncc@mal:~$ helm install --name cd --set helmOperator.create=true --set git.url=git@github.com:ncabatoff/weave-flux-helm-demo --set git.chartsPath=charts sp/weave-flux
Error: release cd failed: clusterroles.rbac.authorization.k8s.io "cd-weave-flux" is forbidden: attempt to grant extra privileges: [PolicyRule{APIGroups:["*"], Resources:["*"], Verbs:["*"]} PolicyRule{NonResourceURLs:["*"], Verbs:["*"]}] user=&{system:serviceaccount:kube-system:tiller 537689e0-5603-11e8-955b-080027bc943f [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]
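
In case it helps, this is what I checked next, since the ruleResolutionErrors part suggests the default roles might be missing:

# is RBAC enabled, and does the bootstrap cluster-admin role exist?
kubectl api-versions | grep rbac
kubectl get clusterrole cluster-admin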

Didn't see any changes after running ci-mock.sh?

I ran the following command (remembering to use my Docker Hub username) and it did push a Docker image with a dev-xxxx tag. It isn't clear what is supposed to happen after that.

cd hack && ./ci-mock.sh -r zillag/podinfo -b dev

The walkthrough says

With the fluxcd.io/automated annotations I instruct Flux to automate this release. When a new tag with the prefix dev is pushed to Docker Hub, Flux will update the image field in the yaml file, will commit and push the change to Git and finally will apply the change on the cluster.

Some questions:

  • Should I create the dev (and stg later) branch first before running the mock script? (I did anyway.)
  • Should I expect the releases/dev/podinfo.yaml to change in my local repo, and the change to be pushed to my remote? In other words, I'm not clear about the interaction between my local files, the Git remote, and Kubernetes. There were no steps outlining what I should check after running the mock script (a sketch of what I checked follows this list).
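
This is roughly how I tried to verify it (a sketch; I'm assuming the dev namespace and the podinfo-dev release name from the walkthrough):

# did Flux push a commit that bumps the image tag in the release file?
git fetch origin
git log origin/master -- releases/dev/podinfo.yaml
# did the cluster pick it up?
kubectl -n dev get pods -o wide
helm ls podinfo-dev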

https://weaveworks.github.io/flux is not a valid chart repository

Running the following command fails:

helm repo add fluxcd https://weaveworks.github.io/flux

Error:

Error: Looks like "https://weaveworks.github.io/flux" is not a valid chart repository or cannot be reached: Failed to fetch https://weaveworks.github.io/flux/index.yaml : 404 Not Found
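
If the charts have simply moved, I assume the intended command is now something like this (not certain this is the current location):

helm repo add fluxcd https://charts.fluxcd.io
helm repo update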

Is chart's values.yaml used?

When I create a HelmRelease using the provided artifact (/blob/master/releases/dev/podinfo.yaml), the Helm deployment only has the values provided inline; it does not use the chart's values.yaml file.

/gitops-helm/releases/dev$ helm get values podinfo-dev
hpa:
  enabled: false
image: stefanprodan/podinfo:dev-hdtwcel9
replicaCount: 1
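
As far as I know, helm get values only prints the user-supplied overrides by default; dumping the computed values should show whether the chart's values.yaml was merged in (Helm 2 syntax):

# -a/--all dumps computed values: chart defaults merged with overrides
helm get values --all podinfo-dev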

does this work fine with public helm charts?

The approach looks very interesting. Before testing it, I wonder whether (as I gathered from reading the configuration flow) this also works for public Helm charts. Also, for our own charts we would like to keep the Git repo separate and only interact with the Helm charts repo. Thanks for the advice!
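
To make the question concrete, this is the sort of release I have in mind for a public chart (a sketch; I took the apiVersion and chart fields from the helm-operator docs as I understand them, and the chart, repository and values shown are just an example):

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: redis
  namespace: dev
spec:
  releaseName: redis
  chart:
    # chart pulled from a public Helm repository instead of Git
    repository: https://charts.bitnami.com/bitnami
    name: redis
    version: 10.5.7
  values:
    usePassword: false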
