werf / nelm
Nelm is a Helm 3 alternative and werf deployment engine
License: Apache License 2.0
As a chart developer, the user would like to work with the werf linter (werf helm lint) for charts that use werf Go templates, without mocking the werf.yaml configuration.
Adding/developing/debugging a new chart in a Chart Repo will be more convenient and faster without meaningless manipulations (permanent customization for each individual chart) with a mock werf.yaml.
1.2.175+fix1
werf helm test ...
werf helm test successfully exits immediately while test pods are failing. A subsequent werf converge will then fail with a KIND "NAME" already exists error, and it is impossible to fix this error without removing the deployed release.
The werf helm test command should work the same as helm test: track test pods. werf helm test should not break a previously deployed release with a KIND "NAME" already exists error.
https://github.com/werf/helm/blob/master/pkg/chart/loader/load.go#L39
Somehow this Loader function ignores global loader options.
The werf-converge and werf-dismiss commands lock the release by name; the lock is stored in the Kubernetes namespace. When the --with-namespace option is specified, the werf-dismiss command currently does not use the release lock at all.
Werf-dismiss command should:
Werf-converge command should:
I want to keep encrypted data in values.yaml. This is convenient when there are many values files for different environments.
I'm using werf as a CMP for Argo CD, and every time I update the werf version, all apps become OutOfSync.
Add a CLI flag for werf render to disable werf annotations, either all of them or only werf.io/version.
A panic error that is not user-friendly can occur during the build or deploy process:
ERROR: lost lease 5a211cdd-6a63-45b9-8e9b-bf85a057ab2b for lock "RELEASE_NAME"
panic: Locker has lost lease for locked "RELEASE_NAME" uuid 5a211cdd-6a63-45b9-8e9b-bf85a057ab2b. Will crash current process immediately!
goroutine 130 [running]:
github.com/werf/werf/pkg/werf.DefaultLockerOnLostLease({{0xc000fad770?, 0xc00012c010?}, {0xc00028a1c8?, 0x21?}})
/git/pkg/werf/main.go:107 +0xa8
github.com/werf/lockgate/pkg/distributed_locker.(*DistributedLocker).leaseRenewWorker(0xc001465f00, {{0xc000fad770?, 0x0?}, {0xc00028a1c8?, 0xb6476a?}}, {0x0, 0x0, 0x0, 0xc00000f488, 0x387d3b0}, ...)
/go/pkg/mod/github.com/werf/[email protected]/pkg/distributed_locker/distributed_locker.go:148 +0x585
created by github.com/werf/lockgate/pkg/distributed_locker.(*DistributedLocker).runLeaseRenewWorker
/go/pkg/mod/github.com/werf/[email protected]/pkg/distributed_locker/distributed_locker.go:115 +0x3ea
This error is related to werf's internal locking mechanics, which use an optimistic approach: acquire a lock in some backing system, then prolong the taken lock lease every N seconds. When werf cannot prolong the lease because of connectivity issues or network lag, the lock may actually be taken over by another client, and the werf process detects this situation.
The current default behaviour in this situation is the panic above. We should make this error more user-friendly and describe the possible reasons why it can occur.
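The optimistic scheme described above can be sketched as follows (hypothetical names and intervals, not werf's or lockgate's actual implementation):

```python
import time

# Minimal sketch of optimistic lease-based locking: a lock is acquired with a
# lease that must be renewed periodically; if a renewal fails (or arrives past
# the lease deadline), another client may already hold the lock.
class LeaseLock:
    def __init__(self, name, lease_ttl=15):
        self.name = name
        self.lease_ttl = lease_ttl
        self.expires_at = time.monotonic() + lease_ttl

    def renew(self, backend_ok=True):
        """Try to prolong the lease; returns False when the lease was lost."""
        if not backend_ok or time.monotonic() > self.expires_at:
            return False  # another client may now hold the lock
        self.expires_at = time.monotonic() + self.lease_ttl
        return True

lock = LeaseLock("RELEASE_NAME")
assert lock.renew()                      # healthy renewal succeeds
assert not lock.renew(backend_ok=False)  # connectivity failure: lease lost
```

In werf's case the "lease lost" branch currently panics; the proposal is to replace the panic with a clear error message listing likely causes (network lag, apiserver connectivity issues).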
We need a special format for Chart.lock that fixates only the major and minor versions of a chart; werf helm dependency build should then auto-update the patch versions of such dependencies.
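The proposed lock semantics can be illustrated with a small sketch (hypothetical helper, not an existing werf function): a dependency satisfies the lock when its major and minor versions match the locked entry, while the patch version is free to float.

```python
# Hypothetical illustration of "major.minor-only" lock matching.
def satisfies_lock(locked: str, candidate: str) -> bool:
    lock_major, lock_minor, _ = locked.split(".")
    cand_major, cand_minor, _ = candidate.split(".")
    return (lock_major, lock_minor) == (cand_major, cand_minor)

assert satisfies_lock("9.17.5", "9.17.9")      # newer patch is acceptable
assert not satisfies_lock("9.17.5", "9.18.0")  # minor bump is not
```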
Argo Rollouts can be a good option if you need advanced deployment strategies, but our Kubedog tracking subsystem doesn't track the readiness of Rollout CRs.
Rollout custom resources should be tracked for readiness.
Access control for users working with secrets.
For example, under a certain company security policy, developers must not be able to work with production secrets.
Prerequisites:
# werf.yaml
project: projname99
configVersion: 1
# .helm/templates/test.yaml
out: | {{ $.Values | toYaml | nindent 2 }}
Commands:
werf render --set 'key={value}'
werf render --set-string 'key={value}'
Results in:
...
key:
- value
...
Expected instead:
...
key: '{value}'
...
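The behaviour comes from Helm's --set (strvals) syntax, where curly braces denote a list literal. A minimal sketch of that parsing rule (a simplification that mirrors Helm's documented behaviour, not werf's actual code):

```python
# Why "--set key={value}" yields a list: in strvals syntax, {a,b} is a list.
def parse_set(expr: str) -> dict:
    key, _, val = expr.partition("=")
    if val.startswith("{") and val.endswith("}"):
        return {key: val[1:-1].split(",")}  # brace syntax => list of items
    return {key: val}

print(parse_set("key={value}"))  # {'key': ['value']}
print(parse_set("key=value"))    # {'key': 'value'}
```

Escaping the braces with backslashes (per Helm's strvals escaping rules) may keep the value a literal string, though whether --set-string should apply list parsing at all is the question this issue raises.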
werf v1.2.37+fix1
werf dismiss --with-namespace
automatically creates the target namespace and then checks release existence in this namespace. This is wrong behaviour: the dismiss command should not create the namespace in this case.
Currently, the namespace the user is deploying to is not managed as part of a release and is created before the release itself. This means there is no way to attach labels/annotations to this namespace during werf converge, unless the namespace is created and configured before werf converge.
We need a way to specify labels/annotations for all werf commands that create namespaces, i.e. werf converge and werf bundle apply.
This can be achieved with --namespace-labels and --namespace-annotations CLI options and deploy.namespaceLabels and deploy.namespaceAnnotations settings in werf.yaml.
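A sketch of how the proposed settings could look in werf.yaml (the deploy.namespaceLabels / deploy.namespaceAnnotations keys are the suggestion above, not an implemented schema):

```yaml
# werf.yaml (proposed, not an implemented schema)
project: myproject
configVersion: 1
deploy:
  namespaceLabels:
    team: backend
  namespaceAnnotations:
    owner: devops@example.com
```

The CLI equivalents would be something like werf converge --namespace-labels team=backend --namespace-annotations owner=devops@example.com.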
1.2.248
export WERF_SECRET_KEY=$(werf helm secret generate-secret-key)
echo "foo: 123" | werf helm secret values encrypt | werf helm secret values decrypt
Actual result (the number was changed to a string):
foo: "123"
Expected (the number remains a number):
foo: 123
Currently we do not properly validate Kubernetes manifests before deploying them. There is a dry-run Apply happening during Plan construction, but we ignore its errors due to many false positives.
We can validate resources early based on:
Related to: werf/kubedog#165.
Current integration with ArgoCD is pretty bare-bones. The main issue is that the ArgoCD deployer is very different from our deployment engine (Nelm) and doesn't know about lots of things (werf.io/weight, werf.io/deploy-dependency, tracking, ...). The ArgoCD deployment engine doesn't even support some regular Helm features.
Refs: #72
Related to werf/kubedog#164
1. .helm/values.yaml with some array values.
2. werf converge.
3. werf converge --values .helm/values.yaml.
4. Change .helm/values.yaml.
5. werf converge.
At step (5) the changes to the default values will not be used by werf. Expected werf to use the changed values.
Hi,
we are running werf with something like:
werf helm upgrade -i test bitnami/nginx --wait --kube-apiserver https://127.0.0.1:6443 --kube-token xxxxxx
It seems that some parameters are still not supported: Error: unknown flag: --kube-apiserver.
Both of the above parameters allow Helm installation without any kube config file; we are using them to install the Helm chart into a segregated kube namespace using a service account token.
Is it possible to add those additional flags?
export tmpdir_values=$(mktemp -d)
werf converge --values=${tmpdir_values}/settings/k8s/hello.yml
# werf-giterminism.yaml
helm:
allowUncommittedFiles:
- /tmp/**
Gives an error like:
Error: helm upgrade have failed: unable to read chart file "../../../../../../../../tmp/settings/k8s/hello.yaml": the file "../../../../../../../../tmp/settings/k8s/hello.yaml" not found in the project git repository
To provide a strong guarantee of reproducibility, werf reads the configuration and build's context files from the project git repository, and eliminates external dependencies. We strongly recommend following this approach, but if necessary, you can allow the reading of specific files directly from the file system and enable the features that require careful use. Read more about giterminism and how to manage it here: https://werf.io/documentation/advanced/giterminism.html.
configVersion: 1
project: eventrouter
deploy:
helmRelease: >-
[[ project ]]
helmReleaseSlug: false
namespaceSlug: false
namespaceSlug has no effect
For example, Atom has an ansible-vault plugin (https://atom.io/packages/ansible-vault), which allows you to conveniently work with data encryption. It would be very cool to add such plugins for popular IDEs so users don't have to switch to the terminal.
There are 2 problems caused by the lack of such a mechanism: there is no way to share a value between werf.yaml and .helm/templates, so one has to define the value for werf.yaml and duplicate it in .helm/values.yaml.
The proposed solution is to have something like another values file, werf-values.yaml, which is accessible from either werf.yaml or .helm/templates.
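A sketch of the proposal, with hypothetical file contents (the werf-values.yaml name comes from the suggestion above; the exact access syntax is an open question):

```yaml
# werf-values.yaml (proposed shared values file)
registry: registry.example.com/myproject
replicas: 2
```

Both werf.yaml templates and .helm/templates could then read these keys from this single file instead of duplicating them in .helm/values.yaml.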
We need some linting instrument to check that your werf project configuration (werf.yaml and .helm/templates) conforms to some known best practices.
A werf lint command which performs checks based on a set of rules.
Older issues:
werf converge --set-secret-file KEY=PATH --set-secret-file KEY=PATH
There is no easy way to copy files from containers to the host after running werf converge.
werf kube-run already has --copy-to and --copy-from for this.
Refs: werf/werf#3691
v1.1.23+fix50
Command:
# all the options here most likely unrelated
werf deploy --stages-storage :local --timeout 21600 ...
... which tried to deploy a Deployment + PV + PVC + 2x Service + CM + VPA resulted in:
...
Status progress
panic: interface conversion: cache.DeletedFinalStateUnknown is not runtime.Object: missing method DeepCopyObject [recovered]
panic: interface conversion: cache.DeletedFinalStateUnknown is not runtime.Object: missing method DeepCopyObject [recovered]
panic: interface conversion: cache.DeletedFinalStateUnknown is not runtime.Object: missing method DeepCopyObject
goroutine 253 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x105
panic(0x24c83e0, 0xc00389f6e0)
/usr/local/go/src/runtime/panic.go:969 +0x166
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x105
panic(0x24c83e0, 0xc00389f6e0)
/usr/local/go/src/runtime/panic.go:969 +0x166
k8s.io/client-go/tools/watch.NewIndexerInformerWatcher.func3(0x2587e00, 0xc0021d59c0)
/go/pkg/mod/k8s.io/[email protected]/tools/watch/informerwatcher.go:135 +0x9d
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnDelete(...)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:232
k8s.io/client-go/tools/cache.newInformer.func1(0x24da900, 0xc000e11ce0, 0x1, 0xc000e11ce0)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:399 +0x360
k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc00390c790, 0xc00380ce10, 0x0, 0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/delta_fifo.go:492 +0x235
k8s.io/client-go/tools/cache.(*controller).processLoop(0xc003920480)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:173 +0x40
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0038e5f20)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000fc9f20, 0x2da3880, 0xc00380ce40, 0xc000d83601, 0xc000b0a2a0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xa3
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0038e5f20, 0x3b9aca00, 0x0, 0xc003934001, 0xc000b0a2a0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*controller).Run(0xc003920480, 0xc000b0a2a0)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:145 +0x2c4
k8s.io/client-go/tools/watch.NewIndexerInformerWatcher.func4(0xc000b0a3c0, 0xc00380ccf0, 0x2dfb6a0, 0xc003920480, 0xc000da8be0)
/go/pkg/mod/k8s.io/[email protected]/tools/watch/informerwatcher.go:146 +0x8b
created by k8s.io/client-go/tools/watch.NewIndexerInformerWatcher
/go/pkg/mod/k8s.io/[email protected]/tools/watch/informerwatcher.go:143 +0x3af
ERROR: Job failed: exit status 1
Redeploy fixed it.
Command:
WERF_OLD_SECRET_KEY=WERF_OLD_SECRET_KEY werf helm secret rotate-secret-key
May result in:
Error: unable to read werf giterminism config: the untracked file "werf-giterminism.yaml" must be committed
To provide a strong guarantee of reproducibility, werf reads the configuration and build's context files from the project git repository, and eliminates external dependencies. We strongly recommend following this approach, but if necessary, you can allow the reading of specific files directly from the file system and enable the features that require careful use. Read more about giterminism and how to manage it here: https://werf.io/documentation/advanced/giterminism.html.
From a UX perspective it might be better to ignore giterminism constraints for rotate-secret-key and re-encrypt all the specified secrets.
Error: cannot initialize kube: cannot determine default kubernetes namespace: error loading config file "/tmp/tmp-1821-x5K1mgcjCaCH": yaml: control characters are not allowed
werf should report that the provided value is not valid base64 in such a case.
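The friendlier behaviour could look like this sketch (hypothetical helper, not werf code): validate that the value decodes as base64 before writing it to a temporary kubeconfig, and report a clear error instead of a YAML parser failure.

```python
import base64
import binascii

def decode_kubeconfig(value: str) -> bytes:
    """Decode base64 kubeconfig data, raising a human-readable error."""
    try:
        return base64.b64decode(value, validate=True)
    except binascii.Error as e:
        raise ValueError(f"kube config data is not valid base64: {e}") from e

assert decode_kubeconfig("Zm9vOiBiYXI=") == b"foo: bar"
```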
It is hard to check whether an image was rebuilt since last time. Example use case: a K8s Job should rebuild/reupload remote assets, but only when the original image has changed.
werf:
changed:
backend: true
frontend: false
werf now performs dependency building automatically during the converge process of the werf chart. But neither werf nor helm performs dependency building recursively for downloaded subcharts (and subcharts of subcharts, etc.).
When we operate with GKE in environments without the gcloud utility (mostly in CI/CD), we must provide a file with the Google service account and perform export GOOGLE_APPLICATION_CREDENTIALS=</path-to-service-account-file>.
This is pretty inconvenient, and it would be great if werf could accept a Google SA encoded as base64 and passed as an option (like google-sa-base64) or an environment variable (like GOOGLE_SA_BASE64). This would reduce boilerplate preparatory actions.
Is it possible to implement this feature?
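What the proposed GOOGLE_SA_BASE64 support could do internally can be sketched like this (the variable and option names are the suggestion above, not an existing werf feature): decode the service account JSON and point GOOGLE_APPLICATION_CREDENTIALS at a temporary file.

```python
import base64
import os
import tempfile

def activate_google_sa(sa_base64: str) -> str:
    """Decode a base64-encoded service account and expose it via env var."""
    sa_json = base64.b64decode(sa_base64)
    fd, path = tempfile.mkstemp(suffix=".json")
    with os.fdopen(fd, "wb") as f:
        f.write(sa_json)
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = path
    return path

path = activate_google_sa(base64.b64encode(b'{"type": "service_account"}').decode())
assert open(path, "rb").read() == b'{"type": "service_account"}'
```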
file 'C:\Users\administrator\AppData\Local\Temp\werf-integration-tests-428341961\.helm' does not appear to be a gzipped archive; got 'application/octet-stream'
Issued a werf dismiss --with-namespace ... command, then ran werf converge immediately after, which gives a "namespace is being terminating" error.
1.2.296+
werf converge
suddenly starts producing errors like this:
Error: error building deploy plan: error connecting internal dependencies: error adding dependency: error adding edge from "update/default::ConfigMap:mycm" to "recreate/default:batch:Job:myjob": edge would create a cycle
The new deployment engine (Nelm), activated by default since v1.2.296, cannot ignore mistakes in your charts that result in a wrong deployment order of your resources. For example, if you have a Job with helm.sh/hook: pre-upgrade that mounts a ConfigMap, but the ConfigMap is a non-hook resource, then your deployment order will be Job > ConfigMap, while it must be vice versa. The old deployment engine still applied the resources in this (wrong) order, but in the new deployment engine this creates a cycle in the underlying graph.
These errors indicate resource-ordering mistakes in the resource manifests of your chart. Fixing the order of resources in your chart is the correct solution. Just make sure that the resource you depend on (e.g. the ConfigMap) is deployed at the same "stage" as the resource with the dependency (e.g. the Job), or at an earlier stage. The helm.sh/hook and helm.sh/weight or werf.io/weight annotations will help you with that.
Alternatively, you can temporarily revert to the old engine with export WERF_NELM=0.
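One way to fix the example above is to make the ConfigMap itself a pre-upgrade hook with a lower hook weight than the Job, so it is applied first. A sketch (annotation values to be adapted to your chart):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mycm
  annotations:
    helm.sh/hook: pre-upgrade
    helm.sh/hook-weight: "-1"   # applied before the Job (default weight 0)
```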
The only command in the secret group is tied to the project and requires a git repository and configuration files.
Options for resolving ambiguity:
werf helm secret rotate file/values commands for point regeneration.
failed parsing --set data: key "}}" has no value
The original type of a value in secret-values.yaml is not preserved after encrypting (or editing) the file. The YAML specification allows several scalar types: boolean, integer, float, string, null, timestamp, or binary.
The current secrets implementation in werf forces the original value to be converted to either a string or null.
Save the original YAML scalar type into the encoded value string during encryption; restore the original YAML scalar type during decryption.
For example, the SOPS editor keeps the original value type.
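The proposed fix can be sketched as follows (a hypothetical encoding for illustration, not werf's actual secret format): serialize the original scalar together with its type before encryption, so decryption can restore a number as a number and a boolean as a boolean.

```python
import json

# Type-preserving round trip: JSON keeps the scalar type in the encoded string,
# so int 123 and string "123" survive as distinct values.
def encode_secret_value(value):
    return json.dumps(value)

def decode_secret_value(encoded):
    return json.loads(encoded)

assert decode_secret_value(encode_secret_value(123)) == 123      # int preserved
assert decode_secret_value(encode_secret_value("123")) == "123"  # str preserved
assert decode_secret_value(encode_secret_value(True)) is True    # bool preserved
```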
Hey,
my current project requires some Helm charts that are defined in the Chart.yaml.
# .helm/Chart.yaml
apiVersion: v2
name: myproject
version: 1.0.0
dependencies:
- name: traefik
version: "9.17.5"
repository: "https://helm.traefik.io/traefik"
- name: keycloak
version: "11.0.0"
repository: "https://codecentric.github.io/helm-charts"
- name: cert-manager
version: "v1.3.1"
repository: "https://charts.jetstack.io"
In order to deploy the application with the given Helm dependencies, I need to set up all the required repositories.
- name: Setup Traefik Repository
run: helm repo add traefik https://helm.traefik.io/traefik
- name: Setup Codecentric Repository
run: helm repo add codecentric https://codecentric.github.io/helm-charts
- name: Setup Jetstack Repository
run: helm repo add jetstack https://charts.jetstack.io
It would be great to automate this, because every time a developer adds a new Helm dependency, it's likely that they break the build and add the repository setup afterwards.
Installation of CRDs from ./crds can't be disabled in werf.
There is a --skip-crds option in Helm; we should provide the same option for werf converge and werf bundle apply.
Current way to vendor deps:
repository line of target chart.
Maybe a special werf dependency update --vendor command which performs steps 2, 3, 4 and 5 automatically.
Werf version: v1.2.12+fix2
Application: https://github.com/werf/quickstart-application
According to the help page, the werf render command should generate digests for images:
werf render --help | head -2
Render Kubernetes templates. This command will calculate digests and build (if needed) all images
defined in the werf.yaml.
But if I perform werf render (for example, on the official werf quickstart application), it generates bad image names (the quay images are hardcoded inside the templates):
werf render | grep 'image:'
- image: REPO:TAG
image: quay.io/flanteurope/werf-quickstart-application:postgres-9.4
- image: quay.io/flanteurope/werf-quickstart-application:redis-alpine
- image: REPO:TAG
- image: REPO:TAG
git clone https://github.com/werf/quickstart-application.git
cd quickstart-application
werf render | grep 'image:'
As a .helm templates writer, I want the ability to diff the render of templates between two commits after refactoring, to check the render output for changes.
Run werf helm secret values edit and define the following value:
some_key: >-
some_text_line1;
some_text_line2;
some_text_line3;
After running edit a second time, we receive the reformatted value as follows:
some_key: some_text_line1; some_text_line2; some_text_line3;
This is correct and is actually the same string, but for better UX we need to preserve the original formatting with the >- block style.