linkyard / concourse-helm-resource
Deploy to kubernetes helm from your concourse.ci.
License: Apache License 2.0
I'm currently pinned to a private image to use #59 but would like to use an official release :)
8c4ce72 introduced a bug in this resource: `helm history` can take the `--max` option, since it prints historical revisions for a given release, but `helm status` cannot, since it only shows the current status of the release.
Help needed: I think I have misconfigured my resource because I'm getting the following output:
Initializing kubectl...
Cluster "default" set.
User "admin" set.
Context "default" created.
Switched to context "default".
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:13:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: x509: certificate signed by unknown authority
my resource:

- name: resource-chart
  type: helm
  source:
    cluster_url: ((concourse-cluster-url))
    cluster_ca: ((concourse-cluster-ca))
    token: ((concourse-cluster-token))
My Concourse is from stable/concourse and I'm using k8s secrets created with `--from-literal`. The ca is my CA in base64, and the token is from `kubectl config view -o jsonpath="{.users[?(@.name == \"$(kubectl config current-context)\")].user.auth-provider.config.access-token}"`. I have a basic knowledge of k8s, so I'm not sure whether I've configured the resource-chart properly or obtained the correct token. I tried other options like `admin_key`/`admin_cert`, but I got further with `token`.
Is it possible to use a private repository for charts?
Perhaps this is naïveté on my part, but I'm unsure how to use this resource in its current form without having the ability to use a file from my own pipeline to configure the authentication token.
Currently I'm using a GCE service account for authentication, and I generate a bearer token in a preceding step of my pipeline.
You can see the fork I'm using to accomplish this here: https://github.com/groupby/concourse-helm-resource
I am targeting another Kubernetes cluster in my group and everything was working correctly; all deployments were working properly. Today, for some reason, all my deployment attempts (even ones that previously worked) fail with:
Error: cannot connect to Tiller
The Tiller deployment is up and running on the staging cluster, no problems there it seems. I ran into this old issue on helm which seemed to be slightly related, but I'm not entirely sure of where to start debugging this. Any ideas?
It's easy to just download a JSON file containing all service account auth data. It would be great to just provide a path to this file when configuring concourse-helm-resource.
{
"type": "service_account",
"project_id": "my-project-id",
"private_key_id": "904a...",
"private_key": "-----BEGIN PRIVATE KEY-----\n ...",
"client_email": "[email protected]",
"client_id": "1032...",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/my-service-account%40my-project-id.iam.gserviceaccount.com"
}
We have Concourse deployed in our Kubernetes cluster, and when the concourse-helm-resource sets up Helm, it hangs when running `helm version`:
Initializing kubectl...
Cluster "default" set.
User "admin" set.
Context "default" created.
Switched to context "default".
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.9", GitCommit:"3fb1aafdafa3d33bc698930095db1e56c0f76452", GitTreeState:"clean", BuildDate:"2018-03-12T16:13:32Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Initializing helm...
Client: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
Kubernetes: &version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.9", GitCommit:"3fb1aafdafa3d33bc698930095db1e56c0f76452", GitTreeState:"clean", BuildDate:"2018-03-12T16:13:32Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
[debug] context deadline exceeded
Error: cannot connect to Tiller
We tracked this down to `setupConnection()` in helm.go, which creates a tunnel to Tiller unless `HELM_HOST` is set. Setting `HELM_HOST` to `tiller-deploy.kube-system:44134` solved our problem.
I have added a source configuration setting for `HELM_HOST`, so that this can be set in the pipeline, if you would like me to make a PR.
There's no option to specify the helm client version.
This is needed to match the helm server version on an enterprise cluster that might not always be up to the latest version, otherwise the build errors out if the versions don't match.
It is harder to force the server version to match the client than the other way around.
I think this can be done by modifying the Dockerfile to something like:
ARG HELM_VERSION=latest
FROM linkyard/docker-helm:$HELM_VERSION
Then in the put we can add something like:
build_args:
  HELM_VERSION: 2.9.1
I would like to avoid having separate yaml files for my helm chart overrides. Also, individual key-value sets are pretty tedious.
Would you consider implementing something like this (see `override_values_objects`)? A map-merge would happen in the order the objects are listed. This stage would precede `override_values`, so that key-value changes could be the last operation. Maybe you can think of a better name for it. :)
resource_types:
- name: helm
  source:
    repository: linkyard/concourse-helm-resource
  type: docker-image

resources:
- name: chaoskube
  source:
    repos:
    - name: stable
      url: https://kubernetes-charts.storage.googleapis.com
  type: helm

jobs:
- plan:
  - put: chaoskube
    params:
      chart: stable/chaoskube
      override_values_objects:
      - config:
          annotations: ''
          dryRun: false
          interval: 5m
          labels: release!=chaoskube
          namespaces: '!kube-system'
          priorityClassName: common-high
      - priorityClassName: changed-my-mind
We have a use case where I need to pass in a release per deploy, and would prefer to configure a single resource. The logic is similar to `release`, which can be configured on the resource level and the job (put) level. Would you accept a PR for this?
Example:
resources:
- name: helm-server
  type: helm
  source:
    cluster_url: https://kube-master.domain.example
    cluster_ca: _base64 encoded CA pem_
    admin_key: _base64 encoded key pem_
    admin_cert: _base64 encoded certificate pem_
    repos:
    - name: some_repo
      url: https://somerepo.github.io/charts

jobs:
# ...
- plan:
  - put: helm-server
    params:
      chart: source-repo/chart-0.0.1.tgz
      namespace: namespace-1
  - put: helm-server
    params:
      chart: source-repo/chart-2-0.0.1.tgz
      namespace: namespace-2
When configuring a `cluster_ca`, the resource passes it through these lines:
ca_path="/root/.kube/ca.pem"
echo "$cluster_ca" | base64 -d > $ca_path
kubectl config set-cluster default --server=$cluster_url --certificate-authority=$ca_path
Unfortunately, if the certificate contains '=' characters (used for padding), base64 complains about it. I'm not entirely sure why it fails, to be fair. I found out after hijacking into the container. If, instead of using a base64-decoded certificate, I call the same `kubectl config` command with the path to a base64-encoded certificate, it works fine.
If I remove the '=' sign at the end of the certificate, base64 works fine, but I get an error down the line like this:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:13:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: x509: certificate signed by unknown authority
A bit of a chicken-and-egg problem. Any thoughts?
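For reference, the trailing '=' really is significant to the decoder, which would explain the corrupted CA and the x509 error. A standalone sketch (assuming GNU coreutils `base64`; the busybox `base64` in an alpine-based image may behave differently on the unpadded input):

```shell
# "aGVsbG8=" is base64 for "hello"; the trailing '=' is required padding.
padded='aGVsbG8='
unpadded='aGVsbG8'

echo "$padded" | base64 -d   # prints: hello

# Stripping the '=' makes the input invalid. GNU coreutils rejects it;
# busybox may decode it anyway with the last bytes mangled, silently
# producing a corrupted CA file.
if echo "$unpadded" | base64 -d > /dev/null 2>&1; then
  echo "decoded even without padding (output may be truncated)"
else
  echo "decode failed without padding"
fi
```

So the padding must reach `base64 -d` intact; if it is being rejected, the more likely culprit is quoting or stray whitespace in `$cluster_ca` rather than the '=' itself.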
I can't get it to work with my concourse set up in a k8s cluster.
This is how my pipeline looks. I've taken away the uninteresting parts:
resource_types:
- name: helm
  type: docker-image
  source:
    repository: linkyard/concourse-helm-resource

resources:
- name: dev-cluster
  type: helm
  source:
    cluster_url: https://kubernetes
    cluster_ca: xxxxx
    admin_cert: xxxx
    admin_key: xxx

- put: dev-cluster
  params:
    chart: stable/redis
    release: my-redis
I get an error when I try to run it:
Initializing kubectl...
Cluster "default" set.
User "admin" set.
Context "default" created.
Switched to context "default".
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Initializing helm...
Client: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
Resource setup successful.
cat: can't open '/tmp/build/put/my-redis': No such file or directory
failed
If I jump into the worker via fly intercept I can install the package via helm manually but for some reason this put won't work. Any ideas of what I'm doing wrong?
#59 quotes all parameters, which breaks glob expansion. Previously, setting `params.chart` to something like `my-chart/*.tgz` would correctly expand to match the single tgz file in `my-chart`. Now it results in an error like `Error: path "/tmp/build/put/my-chart/*.tgz" not found`.
I'm working on a PR to fix this now.
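The pitfall can be reproduced outside the resource: whether the glob is quoted at expansion time decides whether helm receives a real path or a literal `*` (paths below are just illustrative):

```shell
# Reproduce the quoting pitfall: a quoted glob is passed through literally,
# while an unquoted one expands to the matching file.
dir=$(mktemp -d)
mkdir -p "$dir/my-chart"
touch "$dir/my-chart/app-0.1.0.tgz"
cd "$dir"

chart='my-chart/*.tgz'
echo "$chart"   # quoted: literal my-chart/*.tgz (the new, broken behaviour)
echo $chart     # unquoted: my-chart/app-0.1.0.tgz (the old behaviour)
```

A common middle ground is to expand the glob once into a variable (e.g. with `set -- $chart`) and quote every use after that, so spaces stay safe without losing expansion.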
We want to start collecting the Chart version (and perhaps the App version) metadata from this resource. We regularly deploy charts that are "latest" but would like a way to track what version we just deployed.
Here's the general idea:
- `helm repo update`
- `helm search repo/chart`, if the chart is being deployed without a version
- `helm upgrade -i repo/chart`, the normal upgrade/install path

Is this something that you would accept a PR for?
Thanks!
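Extracting the versions from `helm search` output could look like the sketch below; the column layout is assumed from Helm 2 and hardcoded here so the snippet runs standalone:

```shell
# Sample `helm search repo/chart` output (assumed Helm 2 column layout).
search_output='NAME        CHART VERSION   APP VERSION     DESCRIPTION
repo/chart  1.2.3           4.5.6           An example chart'

# Second line, second and third whitespace-separated fields.
chart_version=$(printf '%s\n' "$search_output" | awk 'NR==2 { print $2 }')
app_version=$(printf '%s\n' "$search_output" | awk 'NR==2 { print $3 }')

echo "chart_version=$chart_version"   # chart_version=1.2.3
echo "app_version=$app_version"       # app_version=4.5.6
```

These two values could then be emitted in the resource's metadata array alongside the release name.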
With a local helm repository, I'm getting this error when trying to deploy:
Initializing kubectl...
Cluster "default" set.
User "admin" set.
Context "default" created.
Switched to context "default".
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Initializing helm...
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
Installing helm repository local_repo http://192.168.56.20:15000/api/packages
"local_repo" has been added to your repositories
Resource setup successful.
Installing local_repo/template-rest
Running command helm upgrade --install --tiller-namespace kube-system --namespace default local_repo/template-rest --version 1.1.0 local_repo/template-rest | tee /tmp/log
Error: failed to download "local_repo/template-rest" (hint: running `helm repo update` may help)
When trying to run the helm task, I get the following exception:
failed to ping registry: 2 error(s) occurred:
- ping https: Get https://https/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
- ping http: Get http://https/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
My pipeline:
---
resource_types:
- name: helm
  type: docker-image
  source:
    repository: https://github.com/linkyard/concourse-helm-resource

resources:
...
- name: backend-helm
  type: helm
  source:
    cluster_url: https://<cluster-ip>
    cluster_ca: <cert>
    token: <token>

jobs:
...
- put: backend-helm
  params:
    chart: git-repo-backend/helm/Chart.yaml
    values: git-repo-backend/values.yaml
Not sure what it is trying to ping and why. Would appreciate any help.
It is fundamentally flawed to check for `source.release` if you can overwrite it in `params.release`.
This causes all current checks to fail if you installed a chart and overwrote the release name with `params.release`.
I don't know what the best approach to fix this would be, but you can't access `params` in check.
My current use case is that PRs generate multiple parallel runs of a pipeline. I need to namespace some things by commit ID to avoid clashes. I don't know the commit ID beforehand, so I overwrite the release name on put, which causes the check to fail because the release in source doesn't exist. I think we should make the resource release-agnostic by default (remove `source.release`), but then I have no idea what check we could perform.
I specify: cluster_url, cluster_ca and token.
I expect that the helm deployment will appear in the target cluster.
Instead, the deployment goes to the cluster that Concourse runs in (cluster_url seems to be ignored).
Hi,
we have the resource configured with `cluster_url`, `cluster_ca` and `namespace` resource parameters. Then we only use `put` with `token_path` (and other params related to deployment).
However, `check` is scheduled even if no `get` is set up, but it cannot authenticate against Kubernetes, obviously:
resource script '/opt/resource/check []' failed: exit status 1
stderr:
Initializing kubectl...
Cluster "default" set.
User "admin" set.
Context "default" modified.
Switched to context "default".
error: tls: failed to find any PEM data in certificate input
Although this doesn't cause any problems with `put`, it makes the resource square orange (failing). This is not documented. Could you suggest anything to deal with this?
Also reported at helm/helm#4582
On GKE installing via concourse the install process bombs out. This was working previously until I made some changes to my cluster. I switched my nodes from cos to ubuntu and upgraded to the latest Kubernetes version on GKE.
Kubectl version:
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.7", GitCommit:"dd5e1a2978fd0b97d9b78e1564398aeea7e7fe92", GitTreeState:"clean", BuildDate:"2018-04-19T00:05:56Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.6-gke.2", GitCommit:"384b4eaa132ca9a295fcb3e5dfc74062b257e7df", GitTreeState:"clean", BuildDate:"2018-08-15T00:10:14Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
Helm version:
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Stack trace:
Running command helm upgrade taskhero-staging /tmp/build/put/helm-repo/taskhero-0.1.0.tgz --tiller-namespace=kube-system -f /tmp/build/put/chart-source/staging.yaml --set image.repository=gcr.io/taskhero-197200/taskhero-staging --set image.digest=sha256:2c27459820b97667af265c84d9fcf04afe89089cc28bee15b85af347ea69acf6 --set env.APP_VERSION=latest --install --namespace staging | tee /tmp/log
Release "taskhero-staging" does not exist. Installing it now.
panic: interface conversion: interface {} is []interface {}, not map[string]interface {}
goroutine 1 [running]:
k8s.io/helm/pkg/strvals.(*parser).key(0xc42066ef60, 0xc4204d68d0, 0x1214aad, 0x14d70c0)
/go/src/k8s.io/helm/pkg/strvals/parser.go:211 +0xebc
k8s.io/helm/pkg/strvals.(*parser).parse(0xc42066ef60, 0xc4204d68d0, 0x0)
/go/src/k8s.io/helm/pkg/strvals/parser.go:133 +0x38
k8s.io/helm/pkg/strvals.ParseInto(0x7ffed89d9ea4, 0x16, 0xc4204d68d0, 0x0, 0x0)
/go/src/k8s.io/helm/pkg/strvals/parser.go:85 +0xbf
main.vals(0xc42052cfe0, 0x1, 0x1, 0xc42007f6c0, 0x3, 0x4, 0x20d7238, 0x0, 0x0, 0x20d7238, ...)
/go/src/k8s.io/helm/cmd/helm/install.go:382 +0x45a
main.(*installCmd).run(0xc420b7db68, 0xc42000c018, 0x163e75e)
/go/src/k8s.io/helm/cmd/helm/install.go:232 +0x1e4
main.(*upgradeCmd).run(0xc420001980, 0x0, 0x1778080)
/go/src/k8s.io/helm/cmd/helm/upgrade.go:202 +0xe3f
main.newUpgradeCmd.func2(0xc420769d40, 0xc4206b41c0, 0x2, 0xe, 0x0, 0x0)
/go/src/k8s.io/helm/cmd/helm/upgrade.go:117 +0x141
k8s.io/helm/vendor/github.com/spf13/cobra.(*Command).execute(0xc420769d40, 0xc4206b40e0, 0xe, 0xe, 0xc420769d40, 0xc4206b40e0)
/go/src/k8s.io/helm/vendor/github.com/spf13/cobra/command.go:599 +0x3db
k8s.io/helm/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc4207d4000, 0xc4207426c0, 0xc420742fc0, 0xc420743680)
/go/src/k8s.io/helm/vendor/github.com/spf13/cobra/command.go:689 +0x2d4
k8s.io/helm/vendor/github.com/spf13/cobra.(*Command).Execute(0xc4207d4000, 0xf, 0xf)
/go/src/k8s.io/helm/vendor/github.com/spf13/cobra/command.go:648 +0x2b
main.main()
/go/src/k8s.io/helm/cmd/helm/helm.go:161 +0x79
When using a tool like https://github.com/kubernetes/kops, a complete kubeconfig is generated. Instead of decomposing it into URLs and credentials to pass to the concourse-helm-resource, I'd like to be able to specify a file like this:
resources:
- name: mysql-helm
  type: helm
  source:
    release: custom-mysql
- name: kubeconfig
  type: s3
  source:
    bucket: mybucket
    access_key_id: ((s3-access-key))
    secret_access_key: ((s3-secret))

jobs:
- name: deploy-mysql
  plan:
  - get: kubeconfig
  - put: mysql-helm
    params:
      kubeconfig_file: kubeconfig/file
      chart: stable/mysql
The matching in `parseResource` of parse-helm.ts needs some love. The types switched on are incomplete or too strict, depending on how you look at it. If you're running a newer or older version of Kube, the types may be different. The current implementation seems like it'll be brittle and require constant supervision.
Specifically, my Kube deploy has extensions/v1beta1/Deployment
as the type for Deployments. The impact is wait-for-helm doesn't actually wait until everything has been created, so the pipeline continues and blows up later because the release isn't actually deployed properly. I've forked and done a hacky fix to solve my immediate issue (see: L67) but this doesn't feel sustainable.
What are your thoughts on a future proof implementation? I'm happy to send a PR over.
I'm thinking options:
- Matching on the type suffix, e.g. `Deployment`. The downside of this is that if there's ever any double-up in type suffixes we might have to disambiguate.
- Matching on the `kind` field and not worrying about the API versions. This'd be a much bigger change though, as we'd need kubectl and a reliable way to query all resources created by the chart etc...

As far as I could see, with this resource it's not possible to deploy to a Tiller install that uses TLS to connect, is it?
I couldn't find any `--tls` flags in any of the resources, and I thought this was seen as a best practice; am I missing something? I will do it myself and submit a PR, I was just wondering whether I'm interpreting something incorrectly.
Thanks!
Lately the `hide` option does not work for me anymore, even when providing the new `type: string` along with it.
- put: k8s
  params:
    chart: git/my-helm-chart
    values: git/my-helm-chart/values.yaml
    namespace: ((k8s_namespace))
    release: my-release
    override_values:
    - key: database.password
      value: ((database_password))
      type: string
      hide: true
This configuration still prints the value of `database.password` in the logs.
Hello,
I'm trying to use a private GitLab repository where my chart lives, but I cannot pass a token or private_key to connect. Is there a solution?
Thanks,
Jônatas
will be needed for compatibility with future concourse versions.
On line 145, `helm repo update` is called many times, once for each repo. Would it not make sense to call it once, after the loop of adds? Is there anything I'm missing?
concourse-helm-resource/assets/common.sh
Lines 133 to 146 in 0ce1523
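The suggested shape, sketched with a stub `helm` function so the flow runs standalone (repo names and URLs below are just examples):

```shell
# Stub helm so the control flow can be demonstrated without a cluster.
helm() { echo "helm $*"; }

# name=url pairs, as they might come from the resource's repos config.
repos="stable=https://kubernetes-charts.storage.googleapis.com local_repo=http://192.168.56.20:15000/api/packages"

for entry in $repos; do
  name=${entry%%=*}   # text before the first '='
  url=${entry#*=}     # text after the first '='
  helm repo add "$name" "$url"
done

# Refresh the index once, after all repos are added, instead of per repo.
helm repo update
```

Since `helm repo add` only writes the repo entry locally, deferring the single `helm repo update` to after the loop should be behaviourally equivalent while saving one network round-trip per extra repo.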
As of now it only supports deployment of charts from the file system (mychart.tgz). Mostly this is what you do in a build, but we should also allow the deployment of charts from at least the stable repo.
Would you be open to implementing that in this resource? (I'm willing to do it.)
https://rimusz.net/tillerless-helm/
https://github.com/rimusz/helm-tiller
Basically it lets you run Tiller on the client side and use the user's RBAC rights; this fixes the fact that Tiller needs full access on the server side. I've been using it for a while outside of Concourse and it's really awesome.
Hey,
I came across this tip:
In order to use the same command when installing and upgrading a release, use the following command:
helm upgrade --install <release name> --values <values file> <chart directory>
@msiegenthaler Was there a reason you didn't use this to avoid checking for upgrade/install?
P.S.
Thanks again for creating this resource! It's exceedingly useful.
We use Vault with Concourse to configure deployments of Helm apps with passwords/tokens/etc. without them being in cleartext in the pipeline. Using `values_override` means they get logged like so:
Running command helm upgrade tools-chatbot -f /tmp/build/put/project-git/charts/values.yaml --set tag=62690800a1869f78ca058e83438b91f41c56968f --set 'config.token=<SUPER_SECRET_TOKEN>' project-git/charts/chatbot | tee /tmp/log
I would very much prefer to be able to hide these values.
The best option would be to include a new `hide` option in the `values_override` list:
values_override:
- key: tag
  file: project-git/.git/ref
- key: config.token
  value: ((vault-secret-value))
  hide: true
Which would give an output similar to:
Running command helm upgrade tools-chatbot -f /tmp/build/put/project-git/charts/values.yaml --set tag=62690800a1869f78ca058e83438b91f41c56968f --set 'config.token=*REDACTED*' project-git/charts/chatbot | tee /tmp/log
Another, maybe simpler, option could be a flag to disable the `echo "Running command $helm_cmd"` in `helm_install()` and `helm_upgrade()`.
If you have a chart defined in the same repo as the source code being built and try to install it, you will get an error like this:
Error: found in requirements.yaml, but missing in charts/ directory
The way to fix this is to run `helm dependency up` for local charts before running the `helm upgrade --install` command.
Does anybody have another solution, apart from hosting the chart somewhere outside the local code repo?
It would be nice to be able to support Helm plugins for extended use.
Our particular use-case is to add something like https://github.com/skuid/helm-value-store to manage the values based on tag selectors.
Seeing how this is not a simple task, I hope to discuss whether it is even viable (supporting a generic plugin framework), or whether a fork supporting each plugin specifically would make more sense.
I'm getting the very cryptic `unexpected end of JSON input` error from check.
I'm using a config like this one, which shouldn't be too special:
- name: helm
  type: helm
  source:
    cluster_url: ((kubernetes-cluster-url))
    cluster_ca: ((kubernetes-cluster-ca))
    admin_key: ((kubernetes-admin-key))
    admin_cert: ((kubernetes-admin-cert))
    namespace: datahub
    release: datahub
This config works for deploying charts, so I think it's correct.
Is there any way to get more debug output from check? Anything I could do to help figure this out?
Issue #58 fixed `Error: grpc: trying to send message larger than max (22395781 vs. 20971520)` in the `out` command. Unfortunately, the `check` command still uses `helm history` without `--max`.
Use case: mainly private chart repos
I know I'm annoying :P I don't think you released my helm test stuff
I think this may have been undefined behaviour, or at least not explicitly supported before...
Prior to 0835099 you could pass a path to a chart in an input, and it didn't need to be a tar. I presume this is because `$source/$chart` would be expanded to `my-input/my/chart`, which worked; however, since this change, any local charts must be in a tar. This is unfortunate, as in some cases my charts are committed alongside the code they're packaging.

- put: chart
  params:
    chart: git-src/chart
It'd be nice to get a release with the output fix. #82
concourse-helm-resource/assets/out
Line 225 in 9f0abb6
concourse-helm-resource/assets/out
Lines 57 to 72 in 9f0abb6
If the first deployment fails, the STATUS in `helm status` will still be there (as FAILED), but `helm upgrade` won't work on that release because there has been no successfully deployed revision that can be upgraded yet.
So `is_deployed` should fall back to `helm install` if the very first deployment of a helm release failed.
I'm not sure at the moment how we can properly identify whether at least one deployment of the release is in a passed state, so that we can use `helm upgrade`.
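The decision could look something like the sketch below, with the `helm history` output stubbed so it runs standalone (a single FAILED first revision, as described above; the real check would run against Tiller):

```shell
# Stubbed `helm history` output: the only revision is FAILED, so there is
# nothing to upgrade yet and the resource should fall back to install.
history_output='REVISION        UPDATED                         STATUS  CHART           DESCRIPTION
1               Tue Jan  1 00:00:00 2019        FAILED  chart-0.1.0     Install failed'

# Upgrade only if at least one revision ever reached DEPLOYED state.
if printf '%s\n' "$history_output" | grep -q 'DEPLOYED'; then
  action=upgrade
else
  action=install
fi
echo "$action"   # install
```

Note this is the inverse of just checking that a release name exists: the release is present (FAILED counts), but only a past DEPLOYED revision makes `helm upgrade` viable.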
Hey everybody 👋
could you please make a release of the current master so we can use the feature introduced in #98?
Maybe /cc @msiegenthaler ?
Thanks! 💯
I followed this tutorial: https://cloud.google.com/solutions/continuous-integration-helm-concourse with the images provided there. It works, but it uses helm 2.6.2 and concourse-helm-resource:2.6.2.
I tried to use the latest version 1.12.2, but it fails for some strange reason and I can't find out how to fix it.
Here is some info.
My resource type:
- name: helm
  type: docker-image
  source:
    repository: ldbl/helm-concourse-gcp
My job:
plan:
- get: app-image
  trigger: true
  passed:
  - build-image
- get: chart-source
  trigger: true
- task: build-chart
  file: chart-source/tasks/prep-chart.yaml
  params:
    bucket: {{bucket}}
    chart_name: {{chart_name}}
- put: {{release_name}}
  params:
    chart: gcs-repo/{{chart_name}}
    override_values:
    - key: image.repository
      path: app-image/repository
    - key: image.digest
      path: app-image/digest
My dockerfile
FROM linkyard/concourse-helm-resource
ENV PATH=$PATH:/opt/google-cloud-sdk/bin
ENV GCLOUD_SDK_VERSION=234.0.0
ENV GCLOUD_SDK_URL=https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-${GCLOUD_SDK_VERSION}-linux-x86_64.tar.gz
# Add helm-gcs plugin
RUN helm init --client-only && \
helm plugin install https://github.com/viglesiasce/helm-gcs.git --version v0.2.0
# Install gcloud
RUN apk update && apk add curl openssl python \
&& mkdir -p /opt && cd /opt \
&& wget -q -O - $GCLOUD_SDK_URL |tar zxf - \
&& /bin/bash -l -c "echo Y | /opt/google-cloud-sdk/install.sh && exit"
I got this error:
Initializing kubectl...
Cluster "default" set.
User "admin" set.
Context "default" created.
Switched to context "default".
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.6", GitCommit:"b1d75deca493a24a2f87eb1efde1a569e52fc8d9", GitTreeState:"clean", BuildDate:"2018-12-16T04:39:52Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.6-gke.2", GitCommit:"04ad69a117f331df6272a343b5d8f9e2aee5ab0c", GitTreeState:"clean", BuildDate:"2019-01-04T16:19:46Z", GoVersion:"go1.10.3b4", Compiler:"gc", Platform:"linux/amd64"}
Initializing helm...
Client: &version.Version{SemVer:"v2.12.2", GitCommit:"7d2b0c73d734f6586ed222a567c5d103fed435be", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.2", GitCommit:"7d2b0c73d734f6586ed222a567c5d103fed435be", GitTreeState:"clean"}
Installing helm repository gcs-repo gs://modular-robot-222611-helm-repo
"gcs-repo" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "gcs-repo" chart repository
Update Complete. ⎈ Happy Helming!⎈
Resource setup successful.
Installing dev-site
Running command helm upgrade dev-site gcs-repo/"nginx" --tiller-namespace=kube-system --set image.repository=gcr.io/modular-robot-222611/app-image --set image.digest=sha256:5b49c8e2c890fbb0a35f6050ed3c5109c5bb47b9e774264f4f3aa85bb69e2033 --install --namespace default | tee /tmp/log
Error: failed to download "gcs-repo/\"nginx\"" (hint: running `helm repo update` may help)
More interesting is that if I intercept this step and execute exactly the same command from the container, it actually works, and I don't know what exactly is wrong:
helm upgrade dev-site gcs-repo/"nginx" --tiller-namespace=kube-system --set image.repository=gcr.io/modular-robot-222611/app-image --set image.digest=sha256:5b49c8e2c890fbb0a35f6050ed3c5109c5bb47b9e774264f4f3aa85bb69e2033 --install --namespace default
1: build #3, step: dev-site, type: put
2: build #4, step: build-chart, type: task
3: build #4, step: dev-site, type: put
4: build #5, step: build-chart, type: task
5: build #5, step: dev-site, type: put
choose a container: 5
bash-4.4# helm upgrade dev-site gcs-repo/"nginx" --tiller-namespace=kube-system --set image.repository=gcr.io/modular-robot-222611/app-image --set image.digest=sha256:5b49c8e2c890fbb0a35f6050ed3c5109c5bb47b9e774264f4f3aa85bb69e2033 --install --namespace default
Release "dev-site" does not exist. Installing it now.
NAME: dev-site
LAST DEPLOYED: Fri Feb 15 09:57:54 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dev-site-nginx ClusterIP 10.3.240.104 <none> 80/TCP 0s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
dev-site-nginx 1 1 1 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
dev-site-nginx-78f78f6b58-bqm6l 0/1 ContainerCreating 0 0s
==> v1/ConfigMap
NAME DATA AGE
dev-site-nginx-static 1 0s
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app=nginx,release=dev-site" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
Can someone give me a clue how to fix this?
Currently only the standard chart repos are supported. Allow specifying the repo in the resource configuration.
Some work regarding this was done at https://github.com/thebeefcake/concourse-helm-resource in commits b2ed690, e572d8b, ad57e07 and 99aab74.
Helm 2.9 will add support for `username` and `password` for authenticating to a registry. This resource should be updated to accept these new params.
For more information about the new feature:
If a release has a lot of history, running `helm history` will fail with a message-too-large error:
> helm history prometheus
Error: grpc: trying to send message larger than max (22395781 vs. 20971520)
The implementation of `current_revision` has this issue, so builds are failing after they've pushed to Tiller.
current_revision() {
  helm history --tiller-namespace $tiller_namespace $release | grep "DEPLOYED" | awk '{ print $1 }'
}
I think adding a `--max 1` would be sufficient to fix this, as it always returns the latest revision.
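With `--max 1` the pipeline stays the same but only one row comes back; a standalone sketch with a stubbed `helm` (caveat: if the newest revision is FAILED rather than DEPLOYED, the grep matches nothing and this returns an empty string):

```shell
# Stub `helm history ... --max 1 <release>` with a single DEPLOYED row,
# so the pipeline can be exercised without Tiller.
helm() {
  printf 'REVISION\tUPDATED\tSTATUS\tCHART\n7\tTue Jan  1 2019\tDEPLOYED\tprometheus-8.0.0\n'
}

current_revision() {
  helm history --tiller-namespace kube-system --max 1 "$1" \
    | grep "DEPLOYED" | awk '{ print $1 }'
}

current_revision prometheus   # 7
```

Because of that caveat, a slightly larger `--max` (still far below the full history) might be the safer fix if the latest revision can be in a non-DEPLOYED state.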
It looks like `token_path` is mentioned under the Source Configuration section in the README, whereas it should be provided as a param according to https://github.com/linkyard/concourse-helm-resource/blob/master/assets/common.sh#L19.
It took me a while to find out why my configuration was not working. I hope fixing the documentation will help others.
It's possible and helpful with the `helm` CLI tool to pass multiple `--values` options; it'd be useful to be able to use this within Concourse.
Scenario:
- a values.yaml with chart customisations in, say, persistent volume changes
- a values.yaml with just updated docker image tags
- pass both values.yaml files, and helm merges them

- task: generate-custom-values.yaml
  config:
    outputs:
    - name: custom-values
    run:
      path: sh
      args:
      - -ec
      - echo "docker_image: v123" > custom-values/values.yaml
- put: prometheus
  params:
    chart: stable/prometheus
    values:
    - my-repo/values.yaml
    - custom-values/values.yaml
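If `values` accepted a list, the resource would presumably just append one `--values` flag per entry when building the helm command; a sketch of that (hypothetical variable names, not the resource's actual code):

```shell
# Build the helm command with one --values flag per configured file.
# Later --values files win on conflicting keys, per helm's merge order.
helm_cmd="helm upgrade --install prometheus stable/prometheus"
for f in my-repo/values.yaml custom-values/values.yaml; do
  helm_cmd="$helm_cmd --values $f"
done

echo "$helm_cmd"
# helm upgrade --install prometheus stable/prometheus --values my-repo/values.yaml --values custom-values/values.yaml
```

This keeps the existing single-file behaviour as the one-element case, so the change could be backwards compatible.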