
drone-helm3's Introduction

Drone plugin for Helm 3


This plugin provides an interface between Drone and Helm 3:

  • Lint your charts
  • Deploy your service
  • Delete your service

The plugin is inspired by drone-helm, which fills the same role for Helm 2. It provides a comparable feature set, and the configuration settings are backward-compatible.

Example configuration

The examples below give a minimal and sufficient configuration for each use-case. For a full description of each command's settings, see docs/parameter_reference.md.

Linting

steps:
  - name: lint
    image: pelotech/drone-helm3
    settings:
      mode: lint
      chart: ./

Installation

steps:
  - name: deploy
    image: pelotech/drone-helm3
    settings:
      mode: upgrade
      chart: ./
      release: my-project
    environment:
      KUBE_API_SERVER: https://my.kubernetes.installation/clusters/a-1234
      KUBE_TOKEN:
        from_secret: kubernetes_token

Uninstallation

steps:
  - name: uninstall
    image: pelotech/drone-helm3
    settings:
      mode: uninstall
      release: my-project
    environment:
      KUBE_API_SERVER: https://my.kubernetes.installation/clusters/a-1234
      KUBE_TOKEN:
        from_secret: kubernetes_token

Upgrading from drone-helm

drone-helm3 is largely backward-compatible with drone-helm. There are some known differences:

  • You'll need to migrate the deployments in the cluster from Helm v2 to Helm v3.
  • EKS is not supported. See #5 for more information.
  • The prefix setting is no longer supported. If you were relying on the prefix setting with secrets: [...], you'll need to switch to the from_secret syntax.
  • During uninstallations, the release history is purged by default. Use keep_history: true to return to the old behavior.
  • Several settings no longer have any effect. The plugin will produce warnings if any of these are present:
    • purge -- this is the default behavior in Helm 3
    • recreate_pods
    • tiller_ns
    • upgrade
    • canary_image
    • client_only
    • stable_repo_url
  • Several settings have been renamed, to clarify their purpose and provide a more consistent naming scheme. For backward-compatibility, the old names are still available as aliases. If the old and new names are both present, the updated form takes priority. Conflicting settings will make your .drone.yml harder to understand, so we recommend updating to the new names:
    • helm_command is now mode
    • helm_repos is now add_repos
    • api_server is now kube_api_server
    • service_account is now kube_service_account
    • kubernetes_token is now kube_token
    • kubernetes_certificate is now kube_certificate
    • wait is now wait_for_upgrade
    • force is now force_upgrade

Since helm 3 does not require Tiller, we also recommend switching to a service account with less-expansive permissions.

This repo is set up so that if you enable a personal Drone server to build your fork, it will build and publish your image (this makes it easier to test PRs and use the image until the contributions get merged).

  • Build locally: DRONE_REPO_OWNER=josmo DRONE_REPO_NAME=drone-ecs drone exec
  • On your server (or cloud.drone.io), just make sure you have DOCKER_USERNAME, DOCKER_PASSWORD, and PLUGIN_REPO set as secrets

drone-helm3's People

Contributors

bnmcg, colinhoglund, erincall, georgekaz, hobbypunk90, josmo, minhdanh, navi86, obukhov


drone-helm3's Issues

force_upgrade is being ignored

My drone-helm3 and drone versions:
drone:1.7.0
pelotech/drone-helm3:latest

What I tried to do:
We have an automatic nightly deployment that refreshes the existing deployment with the same image tag, so we use the force_upgrade parameter to tell Helm to recreate pods even if the same tag is present.

What happened:
When the nightly deployment triggers, everything works fine, but pods aren't restarted:

$ helm history website 
REVISION        UPDATED                         STATUS          CHART           APP VERSION     DESCRIPTION     
160             Thu May 21 08:07:13 2020        superseded      trevor-2.3.0    1.0             Upgrade complete # <- Latest CI/CD automatic deploy
161             Fri May 22 00:18:02 2020        deployed        trevor-2.3.0    1.0             Upgrade complete # <- Nightly deploy

$ date
vie 22 may 2020 10:30:42 CES

$ kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
website-78f6d94c79-trt58   2/2     Running   0          24h
website-78f6d94c79-vjbk5   2/2     Running   0          24h

As you can see, the pods should be ~12h old if the nightly deployment had refreshed them, but instead they are ~24h old, dating from the latest CI/CD deployment.

As far as I know, force_upgrade sends Helm the --force flag, which forces resource updates through a replacement strategy. Am I wrong?

More info:
The .drone.yml step:

  - name: helm_deploy
    image: pelotech/drone-helm3
    settings:
      mode: upgrade
      chart: my/chart
      force_upgrade: true
      add_repos: my_repo=http://charts.example.com
      namespace: ${DRONE_REPO_NAME}
      release: ${DRONE_REPO_NAME}
      kube_service_account: system:serviceaccount:helm:helm
      values: istio.hosts[0]=www.example.com,fullnameOverride=${DRONE_REPO_NAME},image.repository=${DRONE_REPO_NAME}
      values_files:
        - ./cicd/templates/values.yaml
      wait_for_upgrade: true
      kube_api_server:
        from_secret: production_api_server
      kube_token:
        from_secret: production_kubernetes_token
      kube_certificate:
        from_secret: production_k8s_ca

Also, values.yaml has a pullPolicy: Always setting to force Kubernetes to always download the image even if the same tag is provided.

This is the output of helm_deploy step on Drone web UI:

"chart" has been added to your repositories
Release "website" has been upgraded. Happy Helming!
NAME: website
LAST DEPLOYED: Fri May 22 07:32:14 2020
NAMESPACE: website
STATUS: deployed
REVISION: 162
TEST SUITE: None
NOTES:
Get project URL by running these commands:

    https://www.example.com

Thank you very much!

Which service account?

Hi! :)

I'm running Drone in a new cluster and it's failing to deploy because of the permissions for the token I'm using. Before I was using the token for the Tiller service account, but now that Tiller is gone I was trying the "default" service account in the "kube-system" namespace. It doesn't have permissions for e.g. secrets so it can't deploy. What kind of service account can I use to deploy? Could you give me an example? Thanks!

Add a "cleanup_on_fail" setting

The problem I'm trying to solve:
If someone wants to pass --cleanup-on-fail to helm upgrade, they should be able to do that

How I imagine it working:

  • Add a CleanupOnFail field to internal/helm.Config (be sure to set an envconfig tag)
  • Add a CleanupOnFail field to internal/run.Upgrade
  • Make sure internal/helm.upgrade passes the CleanupOnFail field when creating the Upgrade struct
  • Make sure Upgrade.Prepare adds --cleanup-on-fail to the helm args if CleanupOnFail is true (see the sketch below)
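A minimal sketch of that wiring, using the field name from this issue; the surrounding struct contents, tags, and the argument-building helper are illustrative, not the plugin's actual code:

package helm // consolidated here for illustration; the real fields live in internal/helm and internal/run

// Illustrative only: the real Config and Upgrade structs contain many more fields.
type Config struct {
	// ...existing settings...
	CleanupOnFail bool `envconfig:"CLEANUP_ON_FAIL"`
}

type Upgrade struct {
	// ...existing fields...
	CleanupOnFail bool
}

// Inside Upgrade.Prepare, append the flag only when the setting is enabled.
func upgradeArgs(u Upgrade) []string {
	args := []string{"upgrade", "--install"}
	if u.CleanupOnFail {
		args = append(args, "--cleanup-on-fail")
	}
	return args
}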

Expand environment variables into Values and StringValues

In order to pass secrets to helm, a user might put the following in their .drone.yml:

settings:
  values: "ssl_key=${SSL_KEY}"
environment:
  ssl_key:
    from_secret: ssl_key

When reading config from the environment, check cfg.Values and cfg.StringValues against the pattern \$\{\w+\} and substitute the corresponding environment variables. They need to respect the Prefix setting, using the same semantics as for regular config: use the non-prefixed form if it's present, but the prefixed form should override the non-prefixed.

Blocked on #9 and #19 (both fixed in #32).
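A rough sketch of how that substitution could work, assuming the pattern above; the helper name and the prefix plumbing are invented for illustration:

package helm

import (
	"os"
	"regexp"
)

var envVarPattern = regexp.MustCompile(`\$\{(\w+)\}`)

// expandEnv replaces ${VAR} references in a values string. With a prefix
// configured, FOO_VAR takes priority over VAR, mirroring the regular config lookup.
func expandEnv(values, prefix string) string {
	return envVarPattern.ReplaceAllStringFunc(values, func(match string) string {
		name := envVarPattern.FindStringSubmatch(match)[1]
		if prefix != "" {
			if v, ok := os.LookupEnv(prefix + "_" + name); ok {
				return v
			}
		}
		if v, ok := os.LookupEnv(name); ok {
			return v
		}
		return match // leave unresolved references as-is
	})
}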

Make things more godiomatic

The current design is pretty idiosyncratic—it makes total sense to me, the person who wrote it, but someone coming in for the first time will probably find it bewildering.

There should probably be New${StepName}() functions for the various Steps; they should either take the global config as an argument or use the .With${OptionName} style.
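For instance, a constructor-per-Step with functional options might look roughly like this (names invented for illustration; not the current code):

package run

// Lint and the option names here are placeholders, not the current code.
type Lint struct {
	Chart  string
	Strict bool
}

type LintOption func(*Lint)

func WithStrictLinting() LintOption {
	return func(l *Lint) { l.Strict = true }
}

// NewLint builds a Lint step from the chart path plus any options.
func NewLint(chart string, opts ...LintOption) *Lint {
	l := &Lint{Chart: chart}
	for _, opt := range opts {
		opt(l)
	}
	return l
}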

The whole Step/Plan divide may not even be worthwhile; I wrote it in a mindset more appropriate to a large project with ongoing new-feature development.

Clean up helm/config.go

The Config struct in internal/helm/config.go defines the interface between drone-helm3 and a project's Drone settings. As such, it should be clear and well-documented:

  • Make sure the comment above the Config struct makes sense to you—it makes sense to me, but I'm a poor judge since I already know all the things it's telling me.
  • Remove the "Global helm config" and "Config specifically for helm upgrade" comments; they aren't particularly accurate
  • Make sure each struct field has a comment explaining what that setting does
  • Probably remove the custom decoder for helmCommand; that responsibility lies with the determineSteps function in internal/helm/plan.go.
  • Fix the inthe typo on line 8 🙃

Add a "documentation" issue template

The problem I'm trying to solve:
We've created a couple documentation issues recently (#45, #43, and now this one) and they don't fit naturally into either of the existing issue templates.

How I imagine it working:
Add a new file, .github/ISSUE_TEMPLATE/documentation.md. It should have labels: documentation in its yaml front matter; I'll leave it to the implementor to come up with a reasonable title/description/contents.

Put drone/plugin version in the "bug" issue template

What needs explanation:

We have quite a few images up on dockerhub now, which means when someone opens an issue it's not immediately clear which version they're using. Let's put something in the "bug" template that prompts them to mention it. Drone version wouldn't hurt, either.

Remove the kube_config setting

internal/helm.Config has a KubeConfig field that specifies the destination for the kubernetes config file. It defaults to /root/.kube/config (which is the default location for a kube config file in general).

It's in the drone-helm3 config because it was in the drone-helm config, but on further reading I don't think we actually need it. It's hard to imagine a circumstance where someone would need to configure that. I guess if their Drone workspace was /root/.kube for some reason?

I don't think we need it for feature parity, either—although drone-helm will write the config file to the specified location, as far as I can tell it doesn't tell helm to look there. It never sends the --kubeconfig flag to helm, at least. Helm can use env vars in place of some of its CLI flags, but kubeconfig doesn't seem to be one of them (see the helm2 code here and the helm3 code here), so I don't think it's getting passed through from the environment stanza in .drone.yml either.

So this issue is a two-parter:

  • Investigate whether giving drone-helm a nonstandard kube_config setting can result in a successful deploy
  • Drop the struct field from internal/helm.Config and internal/run.Config, then fix test/compilation errors until nothing tries to say --kubeconfig.

Implement an update_dependencies setting

In plan.go's upgrade, lint, and uninstall functions, if config.UpdateDependencies is true, there should be an additional Step that calls helm dependency update $chart. It needs to happen before the main command, but I don't think it matters whether it happens before or after InitKube.

The UpdateDependencies struct will look like Lint, though a little simpler (see the sketch after this list):

  • It only needs Chart and cmd fields.
  • Its Prepare() method should require a nonempty Chart.
  • It doesn't need to pass any global flags other than --debug.
  • It doesn't need to pass any command-specific flags at all.
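A loose sketch under those constraints; the cmd handling and method signatures are guesses rather than the real implementation:

package run

import (
	"errors"
	"os/exec"
)

// UpdateDependencies is sketched after the description above; cmd handling is a guess.
type UpdateDependencies struct {
	Chart string
	cmd   *exec.Cmd
}

// Prepare refuses an empty chart and assembles `helm dependency update $chart`,
// passing only the --debug global flag.
func (u *UpdateDependencies) Prepare(debug bool) error {
	if u.Chart == "" {
		return errors.New("chart is required")
	}
	args := []string{}
	if debug {
		args = append(args, "--debug")
	}
	args = append(args, "dependency", "update", u.Chart)
	u.cmd = exec.Command("helm", args...)
	return nil
}

// Execute runs the prepared command.
func (u *UpdateDependencies) Execute() error {
	return u.cmd.Run()
}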

Can't find path for Chart, and error kube_api_server doesn't display error message

What happened:

Drone can't find the path for the Chart. I think Drone went directly to look for the path inside the container. Because I set the wrong KUBE_API, the Drone output says the path cannot be found, when it actually can't connect to the KUBE_API_SERVER.
More info:

name: helm-deploy
image: pelotech/drone-helm3
environment:
  api_server: http://masterIP:6443
  kube_token:
    from_secret: dev_kubernetes_token
settings:
  skip_tls_verify: true
  mode: upgrade
  chart: ./helm
  release: myapp
  wait_for_upgrade: true
  service_account: admin-user
  valuesi_fiies: ["/helm/myapp.yaml"]
  namespace: myapp
trigger:
  event:
  - tag
  branch:
  - master

error converting YAML to JSON: yaml: line 29: did not find expected key

My drone-helm3 and drone versions:

drone 1.6.5, drone-helm3 probably latest, I didn't pin tag

What I tried to do:

helm installation

What happened:

It reports some crazy error:

Generated config: {Command:upgrade DroneEvent:push UpdateDependencies:false DependenciesAction: AddRepos:[] RepoCertificate: RepoCACertificate: Debug:true Values:image.tag="master-f51c9d45" StringValues: ValuesFiles:[] Namespace: KubeToken:(redacted) SkipTLSVerify:false Certificate:******** APIServer:******** ServiceAccount:deploy ChartVersion: DryRun:false Wait:false ReuseValues:false KeepHistory:false Timeout: Chart:chart Release:dijaspora Force:false AtomicUpgrade:false CleanupOnFail:false LintStrictly:false Stdout:0xc00008a008 Stderr:0xc00008a010}
2 | calling *run.InitKube.Prepare (step 0)
3 | loading kubeconfig template from /root/.kube/config.tpl
4 | creating kubeconfig file at /root/.kube/config
5 | calling *run.Upgrade.Prepare (step 1)
6 | Generated command: '/usr/bin/helm --debug upgrade --install --set image.tag="master-f51c9d45" dijaspora chart'
7 |  
8 | calling *run.InitKube.Execute (step 0)
9 | writing kubeconfig file to /root/.kube/config
10 | calling *run.Upgrade.Execute (step 1)
11 | history.go:52: [debug] getting history for release dijaspora
12 | upgrade.go:82: [debug] preparing upgrade for dijaspora
13 | Error: UPGRADE FAILED: YAML parse error on dijaspora/templates/deployment.yaml: error converting YAML to JSON: yaml: line 29: did not find expected key
14 | helm.go:75: [debug] error converting YAML to JSON: yaml: line 29: did not find expected key
15 | YAML parse error on dijaspora/templates/deployment.yaml
16 | helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
17 | /home/circleci/helm.sh/helm/pkg/releaseutil/manifest_sorter.go:146
18 | helm.sh/helm/v3/pkg/releaseutil.SortManifests
19 | /home/circleci/helm.sh/helm/pkg/releaseutil/manifest_sorter.go:106
20 | helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
21 | /home/circleci/helm.sh/helm/pkg/action/install.go:489
22 | helm.sh/helm/v3/pkg/action.(*Upgrade).prepareUpgrade
23 | /home/circleci/helm.sh/helm/pkg/action/upgrade.go:166
24 | helm.sh/helm/v3/pkg/action.(*Upgrade).Run
25 | /home/circleci/helm.sh/helm/pkg/action/upgrade.go:83
26 | main.newUpgradeCmd.func1
27 | /home/circleci/helm.sh/helm/cmd/helm/upgrade.go:136
28 | github.com/spf13/cobra.(*Command).execute
29 | /go/pkg/mod/github.com/spf13/[email protected]/command.go:826
30 | github.com/spf13/cobra.(*Command).ExecuteC
31 | /go/pkg/mod/github.com/spf13/[email protected]/command.go:914
32 | github.com/spf13/cobra.(*Command).Execute
33 | /go/pkg/mod/github.com/spf13/[email protected]/command.go:864
34 | main.main
35 | /home/circleci/helm.sh/helm/cmd/helm/helm.go:74
36 | runtime.main
37 | /usr/local/go/src/runtime/proc.go:203
38 | runtime.goexit
39 | /usr/local/go/src/runtime/asm_amd64.s:1357
40 | UPGRADE FAILED
41 | main.newUpgradeCmd.func1
42 | /home/circleci/helm.sh/helm/cmd/helm/upgrade.go:138
43 | github.com/spf13/cobra.(*Command).execute
44 | /go/pkg/mod/github.com/spf13/[email protected]/command.go:826
45 | github.com/spf13/cobra.(*Command).ExecuteC
46 | /go/pkg/mod/github.com/spf13/[email protected]/command.go:914
47 | github.com/spf13/cobra.(*Command).Execute
48 | /go/pkg/mod/github.com/spf13/[email protected]/command.go:864
49 | main.main
50 | /home/circleci/helm.sh/helm/cmd/helm/helm.go:74
51 | runtime.main
52 | /usr/local/go/src/runtime/proc.go:203
53 | runtime.goexit
54 | /usr/local/go/src/runtime/asm_amd64.s:1357
55 | while executing *run.Upgrade step: exit status 1

More info:

If I do "helm upgrade --install --set image.tag="master-f51c9d45" dijaspora chart" at my zsh, everything is fine. So there are no errors in deployment.yaml or anywhere, but I get false error using this plugin. Do you preload values.yaml at all? I mean helm should do this, but what is wrong here?

  - name: deploy
    image: pelotech/drone-helm3
    settings:
      kube_api_server:
        from_secret: deploy_server
      kube_certificate:
        from_secret: deploy_cert
      kube_token:
        from_secret: deploy_token
      service_account: deploy
      mode: upgrade
      chart: chart
      release: dijaspora

Add github metafiles

Github repos can provide configuration with some special files. README.md is the most prominent, but there are several others that may be useful here.

Let's put one or more such files in a .github/ folder.

include a --ca-file to helm3 plugin

The problem I'm trying to solve:
It would be great to have a ca-file parameter for "helm repo add", for those who have a self-signed certificate on a local repository (such as Harbor).
Usually I have to add the repo with the command "helm repo add my_repo https://my_repo.internal --ca-file ca.crt".

How I imagine it working:
Create a "ca-file" parameter that can be read from a secret, for example.

steps:
  - name: deploy
    image: pelotech/drone-helm3
    settings:
      helm_command: upgrade
      namespace: staging
      add_repos: harbor=https://my_repo
      ca-file:
        from_secret: REPO_CA
      (...)

Thanks!

Replace fmt.Printf with an actual logger

Currently, all logging is done with fmt.Printf (or .Println, .Fprintf, etc.). That's fine for a first pass, but we should really use the log package or something like it. Printf output can pollute the test output, and it would be great to be able to say logger.Debug("...") instead of

if cfg.Debug {
	fmt.Fprint(cfg.Stderr, "...\n")
}
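As a sketch of the idea, a thin wrapper over the standard log package could gate debug output in one place (names invented; the project hasn't settled on a logger):

package helm

import (
	"io"
	"log"
)

// logger gates debug output once instead of `if cfg.Debug { fmt.Fprint(...) }` everywhere.
type logger struct {
	out   *log.Logger
	debug bool
}

func newLogger(w io.Writer, debug bool) *logger {
	return &logger{out: log.New(w, "", log.LstdFlags), debug: debug}
}

// Debug prints only when debug mode is enabled.
func (l *logger) Debug(format string, args ...interface{}) {
	if l.debug {
		l.out.Printf(format, args...)
	}
}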

Allow interpolating Helm chart repository credentials from environment variables

The problem I'm trying to solve:
Helm repositories can be configured to require authentication (typically using the HTTP Basic scheme). In order to access a protected repository, you can configure its URL like so: https://user:[email protected].

The credentials for a Helm repository may be stored as an environment variable. As Drone doesn't support interpolating arbitrary variables in its pipeline by default, the Helm plugin could do this instead.

How I imagine it working:
Leverage the existing arbitrary environment variable interpolation used for values strings, adding this functionality to the add_repos configuration parameter.

Add a lint_strictly setting

I'm a big believer in treating warnings as errors. Sure, many warnings are false positives, but if a normal build has warnings, you're likely to overlook any new ones that indicate a real problem.

For the sake of the -Werr aficionados out there, add a lint_strictly setting¹ that sends the --strict flag to helm lint.

¹ Just calling it strict might be ok, but the struct field in internal/helm.Config should definitely have "Lint" in the name somewhere since the setting is specific to that command.

Add build_dependencies parameter

The problem I'm trying to solve:
To have reproducible builds it would be nice to have the option to install dependencies from Chart.lock.

How I imagine it working:
There is an "update_dependencies" option which can be used to avoid committing dependencies to the source code, but that operation pulls new versions of dependencies according to the Chart.yaml ranges.
That option runs helm dependency update; I can imagine another one that would run helm dependency build.

Auto-tag releases

I've been trying to follow the recommended versioning process for go modules. The auto_tag setting in our .drone.yml ensures that the docker image gets the version number from the git tag, so that part is handled.

However, the current process for creating version tags is "after merging a pull request, remember to put a tag on the merge commit, then remember to push the tag." That's two "remember to"s too many. We should find something that can increment the version number automatically.

I haven't found a drone plugin that can do it for us, but I did find a github Action plugin that should be pretty easy to port to drone (it might work right out of the box, if the image is published somewhere). It looks for #major, #minor, or #patch in any commit message since the previous tag and bumps the version accordingly. It can optionally do a patchlevel bump if nothing else is specified.

InitKube.Certificate should be optional

Currently, internal/run.InitKube.Prepare returns an error if its Certificate field is empty (unless SkipTLSVerify is true). That was an error on my part! The kubeconfig only needs a certificate-authority-data field if the cluster CA is using a self-signed certificate.

Implement EKS support

internal/helm.Config will need two new fields:

	EKSCluster string `envconfig:"EKS_CLUSTER"`
	EKSRoleARN string `envconfig:"EKS_ROLE_ARN"`
  • internal/run.InitKube and .kubeValues will need matching fields, so their values can be passed along to the kubeconfig template.
  • In InitKube.Prepare, if i.EKSCluster != "", i.Token should not be mandatory (and should probably be forbidden).

See also ipedrazas/drone-helm#80 for how this was implemented over there.
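A sketch of the Prepare-side validation, assuming the fields above; the exact rules (for example whether a token should be rejected outright) are still open:

package run

import "errors"

// InitKube fields here are trimmed to the ones relevant to this issue.
type InitKube struct {
	Token      string
	EKSCluster string
	EKSRoleARN string
}

// Prepare: with an EKS cluster configured the token is not mandatory (and is
// arguably an error); otherwise a token is still required.
func (i *InitKube) Prepare() error {
	if i.EKSCluster != "" {
		if i.Token != "" {
			return errors.New("kube_token cannot be combined with an EKS cluster")
		}
		return nil
	}
	if i.Token == "" {
		return errors.New("kube_token is required")
	}
	return nil
}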

add_repo isn't doing a repo update?

I'm trying to install a chart from a repo, but without any success.

    image: pelotech/drone-helm3
    add_repos: repo=https://REPOURL
    settings:
      helm_command: upgrade
      chart: repo/test
      release: test
      api_server:
        from_secret: KUBERNETES_SERVICE_HOST
      kubernetes_token:
        from_secret: KUBE_TOKEN

error:

Error: repo repo not found

Move values config out of run.Config

The Config struct defined in internal/run.Config is meant to hold configuration that applies to all Steps. It currently has three fields, Values, StringValues, and ValuesFiles, that don't meet that standard. When I originally wrote it I thought they'd apply to most everything, but in fact they're only used in two of the seven Steps we currently have defined.

Remove those fields from internal/run.Config and add them to the Upgrade and Lint Steps.

Implement the prefix setting

drone-helm allows the drone chart to set a prefix to be used when looking up environment variables. For example, given this stanza:

environment:
  prefix: FOO
  foo_token: fjejkdkfjfj

It will look for the token setting in FOO_TOKEN rather than TOKEN.

We should retain support for that feature.

Look for CLI flags added in helm3; consider implementing them

This is sort of the inverse of #10—helm3 probably added new commands and/or flags, and it may be worthwhile to implement some of them. Compare helm3's CLI options to helm2 and decide whether this plugin should have support for any of the additions. If so, make github issues for "implement such-and-such feature."

Give drone-helm3 usage information in the `help` Step

Currently, the help step just calls helm help and nothing more. It would be more useful if it gave usage information for drone-helm3 itself, either instead of the helm usage or in addition to it.

Some potentially-useful information to include:

  • information drone-helm3 needs in order to run (either a command setting or a DRONE_EVENT env var)
  • A list of valid commands (probably including delete and lint, even if #3 and #4 aren't complete)
  • mandatory settings/env vars for the various commands

See also #15, which will create a circumstance in which someone might actually see the help command's output 🙃

Warn if helm2-specific settings are present

drone-helm has a handful of settings that correspond to helm2 commands/flags that don't exist in helm3. If we see those env vars, it probably means they were left over during an upgrade from drone-helm. drone-helm3 should emit a warning that advises the plugin consumer to remove those config settings. We don't want a user to experience the frustration of "I've set this setting, why isn't it being applied??"

Deprecated env vars:

  • PURGE (for adding --purge to helm delete. helm3's delete command has no --purge flag)
  • RECREATE_PODS (for adding --recreate-pods to helm upgrade. helm3's upgrade command has no --recreate-pods flag)
  • TILLER_NS (Tiller setting)
  • UPGRADE (Tiller setting)
  • CANARY_IMAGE (Tiller setting)
  • CLIENT_ONLY (Tiller setting)
  • STABLE_REPO_URL (Tiller setting)

Remember to look for both the prefixed and non-prefixed forms (e.g. $PURGE and $PLUGIN_PURGE).
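A rough sketch of that warning pass; the message wording and helper are illustrative only:

package helm

import (
	"fmt"
	"io"
	"os"
)

var deprecatedSettings = []string{
	"PURGE", "RECREATE_PODS", "TILLER_NS", "UPGRADE",
	"CANARY_IMAGE", "CLIENT_ONLY", "STABLE_REPO_URL",
}

// warnDeprecated emits one warning per helm2-era setting that is still set,
// checking both the bare and PLUGIN_-prefixed forms.
func warnDeprecated(stderr io.Writer) {
	for _, name := range deprecatedSettings {
		for _, candidate := range []string{name, "PLUGIN_" + name} {
			if _, ok := os.LookupEnv(candidate); ok {
				fmt.Fprintf(stderr, "Warning: %s is set but has no effect in helm 3; please remove it from your settings.\n", candidate)
				break
			}
		}
	}
}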

Choose a license

GPLv3 is my go-to, but Joachim has concerns about whether it's appropriate. Let's reach agreement on one.

Add aliases for settings names

The problem I'm trying to solve:

Some of the configuration fields' names aren't great. helm_command is a leaky abstraction, for example. Most of them are for drone-helm2 backwards-compatibility; some might just be because I didn't think the names through.

How I imagine it working:

In internal/helm.NewConfig, look for environment variables with alternate names for some settings. For example, we could allow PLUGIN_MODE or maybe PLUGIN_OPERATION in place of PLUGIN_HELM_COMMAND.

The docs should probably note the "better" name as the main setting name, and note the "worse" name as a backwards-compatibility alias.

We'll need to figure out what to do if both versions are present--error, default to one or the other, something else...?
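One possible shape for the lookup, shown here with the "new name wins" behavior the README later documents; the helper is purely illustrative:

package helm

import "os"

// lookupSetting prefers the new env var name and falls back to the legacy alias,
// so a conflicting pair resolves in favor of the updated name.
func lookupSetting(preferred, legacy string) string {
	if v, ok := os.LookupEnv(preferred); ok {
		return v
	}
	return os.Getenv(legacy)
}

// e.g. mode := lookupSetting("PLUGIN_MODE", "PLUGIN_HELM_COMMAND")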

Add an "atomic" setting

The problem I'm trying to solve:
If someone wants to pass --atomic to helm upgrade, they should be able to do that

How I imagine it working:

  • Add an Atomic field to internal/helm.Config
  • Add an Atomic field to internal/run.Upgrade
  • Make sure internal/helm.upgrade passes the Atomic field when creating the Upgrade struct
  • Make sure Upgrade.Prepare adds --atomic to the helm args if Atomic is true

Implement `helm delete`

  • Make an internal/run/delete.go that can run helm delete
  • In internal/helm/plan.NewPlan, if cfg.Command is "delete", or cfg.Command is "" and cfg.DroneEvent is "delete":
    • instantiate a run.InitKube
    • instantiate a run.Delete
    • put both in p.steps
Add a keep_history setting for Delete

helm2's delete command had a --purge option that would purge Helm's release ledger. In helm3 that behavior is the default; if you want to keep the ledger you need to supply --keep-history.

I'm not sure whether this should be on by default.

On the one hand, drone-helm had a purge setting, but it doesn't appear to be functional (it's omitted from the code that loads env vars into the config). So anything that was using delete was getting the non-purge behavior. If we want drone-helm3 to be fully backwards-compatible with drone-helm, our keep_history setting should default to true.

However, the fact that Helm 3 made purging the default behavior means "don't keep history" is probably the appropriate default in general. In that case we should follow their lead and make our keep_history default to false.

If we decide to make false the default, remove this issue from the Version 1.0 milestone.

Remove the `prefix` setting

The prefix setting in internal/helm/config.go exists because it existed in drone-helm. As I've learned more about drone usage, it's become clear it's not needed. It was meant to support a .drone.yml stanza like this:

pipeline:
  steps:
    - name: deploy_staging
      image: pelotech/drone-helm3
      prefix: stage
      secrets: [stage_kubernetes_token]

That secrets syntax is deprecated in recent versions of drone, and might not work at all. A modern stanza would look like this:

pipeline:
  steps:
    - name: deploy_staging
      image: pelotech/drone-helm3
      kubernetes_token:
        from_secret: stage_kubernetes_token
  • Remove the setting and associated code from config.go
  • Remove any mention of the setting (including the "using the prefix setting" section) from parameter_reference.md
  • In the "upgrading from drone-helm" section of the README, add a note that the old syntax needs to be replaced

Implement `helm lint`

  • Make an internal/run/lint.go that can run helm lint
  • In internal/helm/plan.NewPlan, if cfg.Command is lint, instantiate a run.Lint and put it in p.steps.
    • It may also need a run.InitKube; I'm not sure whether Helm's lint command needs a kubeconfig.

Deal with the change in helm's --timeout flag

The problem I'm trying to solve:
helm2's --timeout flag specified a number of seconds for the timeout. In helm3 it uses a string formatted for golang's ParseDuration function, e.g. "200s".

If someone is upgrading from drone-helm and they have a timeout: 200 in their settings, it won't do what they expect. I'm not sure whether helm will just exit with an error or set the timeout to 0, but either way the deployment won't succeed.

How I imagine it working:
I see two options:

  1. Say "update your timeout setting" in the "Upgrading from drone-helm" section of the readme
  2. If we see a bare number in cfg.Timeout, append an s so it means the same thing it did in helm2 (see the sketch below)
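Option 2 could be as small as this (helper name invented for illustration):

package helm

import "regexp"

var bareSeconds = regexp.MustCompile(`^\d+$`)

// normalizeTimeout appends "s" to a bare number so a helm2-style `timeout: 200`
// still means 200 seconds; proper durations ("200s", "5m") pass through untouched.
func normalizeTimeout(timeout string) string {
	if bareSeconds.MatchString(timeout) {
		return timeout + "s"
	}
	return timeout
}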

Add tests for the `kubeconfig` template

There's a kubeconfig file at the root of the repository. internal/run.InitKube uses it to create a .kube/config. Since it's a Go template, it's code; since it's code, it should have tests.

At minimum, we should verify that it's a syntactically-valid Go template. Ideally, there should be a few more (a test sketch follows the list):

  • Calling i.template.Execute populates the expected values
  • Calling i.template.Execute produces syntactically-valid yaml
  • Coverage for each of the if-clauses/branching paths. Currently, those are:
    • if eq .SkipTLSVerify true/else
    • if .Namespace
    • if .Token/else if .EKSCluster
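A minimal test along those lines; the template path and the field names passed to Execute are assumptions that would need to match the real template:

package run

import (
	"bytes"
	"testing"
	"text/template"

	"gopkg.in/yaml.v2"
)

func TestKubeconfigTemplate(t *testing.T) {
	// Parsing fails outright if the template is not syntactically valid.
	tpl, err := template.ParseFiles("../../kubeconfig")
	if err != nil {
		t.Fatal(err)
	}

	// Execute with sample values and check that the output is valid yaml.
	values := map[string]interface{}{
		"APIServer":     "https://kube.example.com",
		"Token":         "abc123",
		"SkipTLSVerify": true,
	}
	var buf bytes.Buffer
	if err := tpl.Execute(&buf, values); err != nil {
		t.Fatal(err)
	}
	var parsed interface{}
	if err := yaml.Unmarshal(buf.Bytes(), &parsed); err != nil {
		t.Errorf("rendered kubeconfig is not valid yaml: %s", err)
	}
}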

Question: Help needed on understanding this error

So I am trying to implement helm registry publish support (https://helm.sh/docs/topics/registries/), but after my changes, when I run the code, the drone build fails with this error:

Command Path: /usr/bin/helm Args: [registry login -u ****** -p ****** ******.azurecr.io]
Command Path: /usr/bin/helm Args: [registry logout ******.azurecr.io]
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error: Get https://registry-1.docker.io/v2/: unauthorized: incorrect username or password
while executing *run.Registry step: exit status 1

That error looks more like the one you get when docker login fails, but according to my logs the steps are correct. It happens on Execute, and the part which baffles me is that the command is not actually called: I have put a log statement just before the cmd from the registry file is executed, and that log never prints.

Here is my repo where I am trying this stuff. (Please ignore my logging messages :) ) https://github.com/sherry-ummen/drone-helm3/tree/publish_to_helm_registry

Anyone could point out what could be wrong here?

Remove build/drone-helm from the repo

The built binary is supposed to be .gitignored, but has been inadvertently added. It's just short of 4 MB, and since git doesn't do a very good job of managing large files, leaving it committed will have an outsized impact on clone/fetch operations.

Implement the helm_repos setting

  • There needs to be an AddRepo Step that can call helm repo add $name $url. There are no command-specific flags; it will need to at least pass the --debug global flag (and maybe others).
  • In plan.go, if config.Repos is nonempty, add an AddRepo for each repo in the list.
  • Expect elements of the list to be of the form "name=url". See the drone-helm tests for an example. (A sketch of the Step follows this list.)
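A sketch of what the Step could look like; the struct shape and Prepare signature are illustrative, not the eventual implementation:

package run

import (
	"fmt"
	"os/exec"
	"strings"
)

// AddRepo calls `helm repo add $name $url` for one "name=url" entry.
type AddRepo struct {
	Repo string
	cmd  *exec.Cmd
}

// Prepare splits the name=url pair and assembles the command, passing the
// --debug global flag when enabled.
func (a *AddRepo) Prepare(debug bool) error {
	parts := strings.SplitN(a.Repo, "=", 2)
	if len(parts) != 2 {
		return fmt.Errorf("repo %q is not in name=url form", a.Repo)
	}
	args := []string{}
	if debug {
		args = append(args, "--debug")
	}
	args = append(args, "repo", "add", parts[0], parts[1])
	a.cmd = exec.Command("helm", args...)
	return nil
}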
