
nelm's Introduction


werf is a CNCF Sandbox CLI tool to implement full-cycle CI/CD to Kubernetes easily. werf integrates into your CI system and leverages familiar and reliable technologies, such as Git, Dockerfile, Helm, and Buildah.

What makes werf special:

  • Complete application lifecycle management: build and publish container images, test, deploy an application to Kubernetes, distribute release artifacts and clean up the container registry.
  • Ease of use: use Dockerfiles and a Helm chart for configuration and let werf handle the rest.
  • Advanced features: automatic build caching and content-based tagging, enhanced resource tracking and extra capabilities in Helm, a unique container registry cleanup approach, and more.
  • Gluing common technologies: Git, Buildah, Helm, Kubernetes, and your CI system of choice.
  • Production-ready: werf has been used in production since 2017; thousands of projects rely on it to build & deploy various apps.

Quickstart

The quickstart guide shows how to set up the deployment of an example application (a cool voting app in our case) using werf.

Installation

The installation guide helps set up and use werf both locally and in your CI system.

Documentation

Detailed usage and reference documentation for werf is available in multiple languages.

Developers can get all the necessary knowledge about application delivery in Kubernetes (including a basic understanding of K8s primitives) in the werf guides. They provide ready-to-use examples for popular frameworks, including Node.js (JavaScript), Spring Boot (Java), Django (Python), Rails (Ruby), and Laravel (PHP).

Community & support

Please feel free to reach out to developers/maintainers and users via GitHub Discussions for any questions regarding werf. You're also welcome on Stack Overflow: when you tag a question with werf, our team is notified and will come to help you.

Issues posted to the GitHub issue tracker are processed carefully.

For questions that may require a more detailed and prompt discussion, you can use:

  • #werf channel in the CNCF’s Slack workspace;
  • werf_io Telegram chat. (There is a Russian-speaking Telegram chat werf_ru as well.)

Follow @werf_io to stay informed about important project news, new articles, etc.

Contributing

This contributing guide outlines the process to help get your contribution accepted.

License

Apache License 2.0, see LICENSE.

Featured in

Console - Developer Tool of the Week

nelm's People

Contributors

ilya-lesikov


Forkers

shakahl

nelm's Issues

panic: Locker has lost lease for locked "release"

Before proceeding

  • I didn't find a similar issue

Problem

A user-unfriendly panic error can occur during the build or deploy process:

ERROR: lost lease 5a211cdd-6a63-45b9-8e9b-bf85a057ab2b for lock "RELEASE_NAME"
panic: Locker has lost lease for locked "RELEASE_NAME" uuid 5a211cdd-6a63-45b9-8e9b-bf85a057ab2b. Will crash current process immediately!

goroutine 130 [running]:
github.com/werf/werf/pkg/werf.DefaultLockerOnLostLease({{0xc000fad770?, 0xc00012c010?}, {0xc00028a1c8?, 0x21?}})
    /git/pkg/werf/main.go:107 +0xa8
github.com/werf/lockgate/pkg/distributed_locker.(*DistributedLocker).leaseRenewWorker(0xc001465f00, {{0xc000fad770?, 0x0?}, {0xc00028a1c8?, 0xb6476a?}}, {0x0, 0x0, 0x0, 0xc00000f488, 0x387d3b0}, ...)
    /go/pkg/mod/github.com/werf/[email protected]/pkg/distributed_locker/distributed_locker.go:148 +0x585
created by github.com/werf/lockgate/pkg/distributed_locker.(*DistributedLocker).runLeaseRenewWorker
    /go/pkg/mod/github.com/werf/[email protected]/pkg/distributed_locker/distributed_locker.go:115 +0x3ea

This error is related to the internal locking mechanism used in werf, which takes an optimistic approach: acquire a lock in some backing system, then prolong the lock lease every N seconds. When werf cannot prolong the lease because of connectivity issues or network lag, the lock may actually be taken over by another client, and the werf process detects this situation.

Solution (if you have one)

The current default behaviour in such a situation is to panic as above. We should make this error more user-friendly and describe the possible reasons why it can occur.

Additional information

No response

Keep secret values in values.yaml

Before proceeding

  • I didn't find a similar issue

Problem

I want to keep encrypted data in values.yaml.
This would be convenient when there are many values files for different environments.

Solution (if you have one)

No response

Additional information

No response

Common values yaml for build and deploy

Before proceeding

  • I didn't find a similar issue

Problem

There are two problems caused by the lack of such a mechanism:

  1. There are cases when the user needs some configurable values to be used both in werf.yaml and in .helm/templates.
    For now, the user has to define environment variables for werf.yaml and duplicate such values in .helm/values.yaml.
  2. There is no way to define input values for werf.yaml templating other than environment variables.

Solution (if you have one)

The proposed solution is to have something like a separate values file, werf-values.yaml, which would be accessible both from werf.yaml and from .helm/templates.
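
A hypothetical sketch of how such a file might be consumed (the file name and access syntax are illustrative; this mechanism does not exist yet):

# werf-values.yaml (proposed)
baseImage: alpine:3.19

# werf.yaml: the shared value available during werf.yaml templating
image: app
from: {{ .Values.baseImage }}

# .helm/templates/deployment.yaml: the same value reused at deploy time
image: "{{ .Values.baseImage }}"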

Additional information

No response

Linting helm chart without werf.yaml configuration

As a chart developer, the user would like to use the werf linter (werf helm lint) for charts that use werf Go templates without mocking a werf.yaml configuration.

Adding/developing/debugging a new chart in a chart repository would be more convenient and faster without meaningless manipulations (constantly customizing a mock werf.yaml for the individual chart).

werf helm secret edit: preserve `>-` yaml block type format

Before proceeding

  • I didn't find a similar issue

Problem

Run werf helm secret values edit and define the following value:

some_key: >-
  some_text_line1;
  some_text_line2;
  some_text_line3;

After running edit a second time, we get the value reformatted as follows:

some_key: some_text_line1; some_text_line2; some_text_line3;

— which is correct and is actually the same string. But for better UX we should preserve the original formatting with the >- block style.

Solution (if you have one)

No response

Additional information

No response

Autoupdate dependent charts

We need a special format for Chart.lock that only pins the major and minor versions of a chart. werf helm dependency build should then auto-update the patch versions of such dependencies.

Support additional helm parameters

Hi,

we are running werf with something like:
werf helm upgrade -i test bitnami/nginx --wait --kube-apiserver https://127.0.0.1:6443 --kube-token xxxxxx

It seems that some parameters are still not supported: Error: unknown flag: --kube-apiserver.

Both of the above parameters allow Helm installation without any kubeconfig file; we are using them to install the Helm chart into a segregated Kubernetes namespace using a service account token.

Is it possible to add those additional flags?

error adding edge from "..." to "...": edge would create a cycle

Version

1.2.296+

Issue

werf converge suddenly starts producing errors like this:

Error: error building deploy plan: error connecting internal dependencies: error adding dependency: error adding edge from "update/default::ConfigMap:mycm" to "recreate/default:batch:Job:myjob": edge would create a cycle

Reason

New deployment engine (Nelm), activated by default since v1.2.296, cannot ignore mistakes made in your charts that result in the wrong deployment order of your resources. For example, if you have a Job with helm.sh/hook: pre-upgrade that mounts a ConfigMap, but the ConfigMap is a non-hook resource, then your deployment order will be Job > ConfigMap, but it must be vice versa. In the old deployment engine the resources will still be applied in this (wrong) order, but in the new deployment engine this would create a cycle in the underlying graph.

Mitigation

These errors indicate resource-ordering mistakes in the manifests of your chart. Fixing the order of resources in your chart is the correct solution. Just make sure that the resource you depend on (e.g. the ConfigMap) is deployed at the same "stage" as the resource with the dependency (e.g. the Job), or at an earlier stage. The helm.sh/hook and helm.sh/hook-weight or werf.io/weight annotations will help you with that.
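
For example, a minimal sketch of the ConfigMap/Job case described above (resource names and images are illustrative):

# .helm/templates/myjob.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "1"
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: example/migrate:latest
          envFrom:
            - configMapRef:
                name: mycm
---
# .helm/templates/mycm.yaml
# Make the ConfigMap a pre-upgrade hook as well, with a lower weight,
# so it is created before the Job that mounts it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mycm
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "0"
data:
  KEY: value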

Alternatively, you can temporarily revert to the old engine with export WERF_NELM=0.

Preserve yaml values type in secret-values.yaml

Before proceeding

  • I didn't find a similar issue

Problem

The original type of a value in secret-values.yaml is not preserved after encrypting (or editing) the file. For example, the YAML specification allows several scalar types: boolean, integer, float, string, null, timestamp, or binary.

The current secrets implementation in werf forces the original value to be converted to either a string or null.
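
An illustrative round-trip (key names are arbitrary; output is paraphrased, not verbatim tool output):

# secret-values.yaml before encryption
enabled: true
replicas: 3
ratio: 0.5

# after an encrypt + decrypt round-trip, all scalars come back as strings
enabled: "true"
replicas: "3"
ratio: "0.5"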

Solution (if you have one)

Save the original YAML scalar type in the encoded value string during encryption. Restore the original YAML scalar type during decryption.

For example, the SOPS editor keeps the original value type.

Helm Repositories in Github Actions

Hey,

my current project requires some Helm charts that are defined in the Chart.yaml.

# .helm/Chart.yaml
apiVersion: v2
name: myproject
version: 1.0.0
dependencies:
  - name: traefik
    version: "9.17.5"
    repository: "https://helm.traefik.io/traefik"
  - name: keycloak
    version: "11.0.0"
    repository: "https://codecentric.github.io/helm-charts"
  - name: cert-manager
    version: "v1.3.1"
    repository: "https://charts.jetstack.io"

In order to deploy the application with the given Helm dependencies, I need to set up all required repositories.

      - name: Setup Traefik Repository
        run: helm repo add traefik https://helm.traefik.io/traefik

      - name: Setup Codecentric Repository
        run: helm repo add codecentric https://codecentric.github.io/helm-charts

      - name: Setup Jetstack Repository
        run: helm repo add jetstack https://charts.jetstack.io

It would be great to automate this, because every time a developer adds a new Helm dependency, it's likely that they break the build and only add the repository setup afterwards.

werf render doesn't generate image names and tags

Common information

Werf version: v1.2.12+fix2
Application: https://github.com/werf/quickstart-application

Problem description

According to the help page, the werf render command should calculate digests for images:

werf render --help | head -2
Render Kubernetes templates. This command will calculate digests and build (if needed) all images 
defined in the werf.yaml.

But if I perform werf render (for example, in the official werf quickstart application), it generates bad image names (the quay images are hardcoded inside the templates):

werf render | grep 'image:'
      - image: REPO:TAG
        image: quay.io/flanteurope/werf-quickstart-application:postgres-9.4
      - image: quay.io/flanteurope/werf-quickstart-application:redis-alpine
      - image: REPO:TAG
      - image: REPO:TAG

Steps to reproduce

  1. git clone https://github.com/werf/quickstart-application.git
  2. cd quickstart-application
  3. werf render | grep 'image:'

No support for recursive helm-dependencies-building

werf now builds dependencies automatically during the converge process of the werf chart. But neither werf nor Helm builds dependencies recursively for downloaded subcharts (and subcharts of subcharts, etc.).

Explicit values param may break default values changes in further deploys

  1. Define default .helm/values.yaml with some array values.
  2. Deploy with werf converge.
  3. Deploy with werf converge --values .helm/values.yaml
  4. Add new element into .helm/values.yaml.
  5. Deploy with werf converge.

At step (5), the default values changes will not be used by werf. werf is expected to use the changed values.
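
An illustrative example of the values involved (key and values are arbitrary):

# .helm/values.yaml at step 1
hosts:
  - a.example.com

# .helm/values.yaml at step 4 — the new element is ignored at step 5
hosts:
  - a.example.com
  - b.example.com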

`werf helm secret values` does not round-trip for numbers in Yaml

Before proceeding

  • I didn't find a similar issue

Version

1.2.248

How to reproduce

export WERF_SECRET_KEY=$(werf helm secret generate-secret-key)
echo "foo: 123" | werf helm secret values encrypt | werf helm secret values decrypt

Result

Number was changed to string:

foo: "123"

Expected result

Number remains as number:

foo: 123

Additional information

No response

`panic: interface conversion: cache.DeletedFinalStateUnknown is not runtime.Object: missing method DeepCopyObject`

v1.1.23+fix50

Command:

# all the options here most likely unrelated
werf deploy --stages-storage :local --timeout 21600 ...

... which tried to deploy a Deployment + PV + PVC + 2x Service + CM + VPA, and resulted in:

...
Status progress
panic: interface conversion: cache.DeletedFinalStateUnknown is not runtime.Object: missing method DeepCopyObject [recovered]
	panic: interface conversion: cache.DeletedFinalStateUnknown is not runtime.Object: missing method DeepCopyObject [recovered]
	panic: interface conversion: cache.DeletedFinalStateUnknown is not runtime.Object: missing method DeepCopyObject

goroutine 253 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x105
panic(0x24c83e0, 0xc00389f6e0)
	/usr/local/go/src/runtime/panic.go:969 +0x166
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x105
panic(0x24c83e0, 0xc00389f6e0)
	/usr/local/go/src/runtime/panic.go:969 +0x166
k8s.io/client-go/tools/watch.NewIndexerInformerWatcher.func3(0x2587e00, 0xc0021d59c0)
	/go/pkg/mod/k8s.io/[email protected]/tools/watch/informerwatcher.go:135 +0x9d
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnDelete(...)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:232
k8s.io/client-go/tools/cache.newInformer.func1(0x24da900, 0xc000e11ce0, 0x1, 0xc000e11ce0)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:399 +0x360
k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc00390c790, 0xc00380ce10, 0x0, 0x0, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/delta_fifo.go:492 +0x235
k8s.io/client-go/tools/cache.(*controller).processLoop(0xc003920480)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:173 +0x40
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0038e5f20)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000fc9f20, 0x2da3880, 0xc00380ce40, 0xc000d83601, 0xc000b0a2a0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xa3
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0038e5f20, 0x3b9aca00, 0x0, 0xc003934001, 0xc000b0a2a0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*controller).Run(0xc003920480, 0xc000b0a2a0)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:145 +0x2c4
k8s.io/client-go/tools/watch.NewIndexerInformerWatcher.func4(0xc000b0a3c0, 0xc00380ccf0, 0x2dfb6a0, 0xc003920480, 0xc000da8be0)
	/go/pkg/mod/k8s.io/[email protected]/tools/watch/informerwatcher.go:146 +0x8b
created by k8s.io/client-go/tools/watch.NewIndexerInformerWatcher
	/go/pkg/mod/k8s.io/[email protected]/tools/watch/informerwatcher.go:143 +0x3af
section_end:1636731822:step_script
ERROR: Job failed: exit status 1

Redeploy fixed it.

Improve early validation of Kubernetes manifests

Before proceeding

  • I didn't find a similar issue

Problem

Currently we do not properly validate Kubernetes manifests before deploying them. There is a dry-run Apply happening during Plan construction, but we ignore its errors due to many false positives.

Solution (if you have one)

We can validate resources early based on:

  • built-in schemas from the Kubernetes library
  • schemas grabbed from Kubernetes API
  • schemas produced from CRDs in charts (might be difficult to do)

Additional information

No response

Instrument to promote best practices of writing werf configurations

Before proceeding

  • I didn't find a similar issue

Problem

We need a linting instrument to check that your werf project configuration (werf.yaml and .helm/templates) conforms to known best practices.

Solution (if you have one)

  1. A werf lint command which performs checks based on a set of rules.
  2. Mechanics for updating linting rules: receiving rule updates, the ability to lock to a specific version of the rules, and an updatable living database of such rules.
  3. The ability to enable such linting by default in converge-like commands.

Additional information

Older issues:

ArgoCD integration v2

Before proceeding

  • I didn't find a similar issue

Problem

The current integration with ArgoCD is pretty bare-bones. The main issue is that the ArgoCD deployer is very different from our deployment engine (Nelm) and doesn't know about lots of things (werf.io/weight, werf.io/deploy-dependency, tracking, ...). The ArgoCD deployment engine doesn't even support some regular Helm features.

Solution (if you have one)

No response

Additional information

Refs: #72

The ambiguity of the werf helm secret rotate-secret-key command

This is the only command in the secret group that is tied to the project and requires a git repository and configuration files.

Options for resolving ambiguity:

  • Move the command to the top level.
  • Create werf helm secret rotate file/values commands for targeted regeneration.

Attach labels or annotations to the namespace of a release

Currently, the namespace the user is deploying to is not managed as part of a release and is created before the release itself. This means there is no way to attach labels/annotations to this namespace during werf converge unless the namespace is created and configured beforehand.

We need a way to specify labels/annotations for all werf commands that create namespaces, i.e. werf converge and werf bundle apply.

This can be achieved with --namespace-labels and --namespace-annotations CLI options and deploy.namespaceLabels and deploy.namespaceAnnotations settings in werf.yaml.
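
A sketch of what the proposed werf.yaml settings might look like (hypothetical, not an existing directive):

# werf.yaml (proposed)
project: myproject
configVersion: 1
deploy:
  namespaceLabels:
    team: backend
  namespaceAnnotations:
    owner: platform@example.com

# or via the proposed CLI options:
#   werf converge --namespace-labels team=backend --namespace-annotations owner=platform@example.com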

werf-helm-test does not track pods and breaks successfully deployed release

Before proceeding

  • I didn't find a similar issue

Version

1.2.175+fix1

How to reproduce

  1. Successfully deploy base application release with converge.
  2. Run release tests with werf helm test ....

Result

werf helm test successfully exits immediately, while the test pods are failing.
The next werf converge will fail with a KIND "NAME" already exists error, and it is impossible to fix this error without removing the deployed release.

Expected result

The werf helm test command should work the same as helm test — it should track the test pods.
werf helm test should not break a previously deployed release with a KIND "NAME" already exists error.

Additional information

No response

Secrets access levels

Access control for users working with secrets.
For example, under certain company security policies, developers must not be able to work with production secrets.

`{}` are not properly interpreted in `--set-(string)` value

Prerequisites:

# werf.yaml
project: projname99
configVersion: 1
# .helm/templates/test.yaml
out: |
  {{ $.Values | toYaml | nindent 2 }}

Commands:

werf render --set 'key={value}'
werf render --set-string 'key={value}'

Results in:

...
key:
  - value
...

Expected instead:

...
key: '{value}'
...

werf v1.2.37+fix1

`werf dismiss` v2

Before proceeding

  • I didn't find a similar issue

Problem

In werf v2 werf dismiss still works using the old deployment engine.

Solution (if you have one)

We should migrate it to Nelm.

Additional information

Refs: #99, #109, #60

Provide a way to conveniently vendor chart dependencies into the repo and update dependencies

Before proceeding

  • I didn't find a similar issue

Problem

The current way to vendor deps (a shell sketch of these steps follows the list):

  1. Describe the dependency in .helm/Chart.yaml with a repository field.
  2. Run werf helm dependency update.
  3. Unpack the downloaded .helm/charts/CHART-VERSION.tgz and remove the archive.
  4. Comment out the repository line of the target chart.
  5. Run werf helm dependency update again.
  6. Commit all new and changed files to git.
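
A rough shell transcription of these steps (assuming a single dependency named CHART at version VERSION):

werf helm dependency update                              # step 2: download the dependency
tar -xzf .helm/charts/CHART-VERSION.tgz -C .helm/charts  # step 3: unpack the archive...
rm .helm/charts/CHART-VERSION.tgz                        # ...and remove it
# step 4: comment out the "repository:" line of the target chart in .helm/Chart.yaml (by hand)
werf helm dependency update                              # step 5: refresh Chart.lock without the repo
git add .helm && git commit -m "Vendor CHART"            # step 6: commit new and changed files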

Solution (if you have one)

Maybe a special werf dependency update --vendor command that performs steps 2, 3, 4, and 5 automatically.

Additional information

No response

Check absolute paths are working with helm.allowUncommittedFiles directive in werf-giterminism.yaml

export tmpdir_values=$(mktemp -d)
werf converge \
        --values=${tmpdir_values}/settings/k8s/hello.yml
# werf-giterminism.yaml
helm:
  allowUncommittedFiles:
  - /tmp/**

Gives an error like:

Error: helm upgrade have failed: unable to read chart file "../../../../../../../../tmp/settings/k8s/hello.yaml": the file "../../../../../../../../tmp/settings/k8s/hello.yaml" not found in the project git repository
To provide a strong guarantee of reproducibility, werf reads the configuration and build's context files from the project git repository, and eliminates external dependencies. We strongly recommend following this approach, but if necessary, you can allow the reading of specific files directly from the file system and enable the features that require careful use. Read more about giterminism and how to manage it here: https://werf.io/documentation/advanced/giterminism.html.

Argo Rollouts tracking

Before proceeding

  • I didn't find a similar issue

Problem

Argo Rollouts can be a good option if you need advanced deployment strategies. Our Kubedog tracking subsystem doesn't track readiness of Rollout CRs.

Solution (if you have one)

Rollout Custom Resources should be tracked for readiness.

Additional information

No response

Can't skip ./crds deployment

Before proceeding

  • I didn't find a similar issue

Problem

Installation of CRDs from ./crds can't be disabled in werf.

Solution (if you have one)

Helm has the --skip-crds option; we should provide the same option for werf converge / werf bundle apply.

Additional information

No response

Consistent werf-converge and werf-dismiss process locking

The werf-converge and werf-dismiss commands lock the release by name. The lock is stored in the Kubernetes namespace.

When the --with-namespace option is specified, the werf-dismiss command currently does not use the release lock at all.

The werf-dismiss command should:

  • lock the release while deleting the Helm release;
  • unlock the release;
  • delete the Kubernetes namespace when the --with-namespace option has been specified;
  • not wait until the Kubernetes namespace is deleted, but exit immediately.

The werf-converge command should:

  • wait until the Kubernetes namespace is deleted if the namespace is being deleted at the moment;
  • lock the release;
  • perform the release install/upgrade;
  • unlock the release.

Google service account encoded as base64 parameter

When we work with GKE in environments without the gcloud utility (mostly in CI/CD), we must provide a file with the Google service account and run export GOOGLE_APPLICATION_CREDENTIALS=</path-to-service-account-file>.

This is pretty inconvenient, and it would be great if werf could work with a Google SA encoded as base64 and passed as an option (like google-sa-base64) or an environment variable (like GOOGLE_SA_BASE64). This would reduce boilerplate preparatory actions.
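
Until such an option exists, a typical CI workaround looks like this (variable names are illustrative):

# GOOGLE_SA_BASE64 is a CI secret holding the base64-encoded service account JSON
echo "${GOOGLE_SA_BASE64}" | base64 -d > /tmp/gcp-sa.json
export GOOGLE_APPLICATION_CREDENTIALS=/tmp/gcp-sa.json
werf converge ...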

Is it possible to implement this feature?

`werf helm secret rotate-secret-key` respects giterminism constraints

Command:

WERF_OLD_SECRET_KEY=WERF_OLD_SECRET_KEY werf helm secret rotate-secret-key 

May result in:

Error: unable to read werf giterminism config: the untracked file "werf-giterminism.yaml" must be committed

To provide a strong guarantee of reproducibility, werf reads the configuration and build's context files from the project git repository, and eliminates external dependencies. We strongly recommend following this approach, but if necessary, you can allow the reading of specific files directly from the file system and enable the features that require careful use. Read more about giterminism and how to manage it here: https://werf.io/documentation/advanced/giterminism.html.

From a UX perspective, it might be better to ignore giterminism constraints for rotate-secret-key and re-encrypt all specified secrets.

namespaceSlug does not change ns

configVersion: 1
project: eventrouter
deploy:
  helmRelease: >-
    [[ project ]]
  helmReleaseSlug: false
  namespaceSlug: false

namespaceSlug has no effect

Add service value to indicate whether the image was rebuilt since last time

Before proceeding

  • I didn't find a similar issue

Problem

It is hard to check whether the image was rebuilt since the last deploy. Example use case: a K8s Job should rebuild/re-upload remote assets, but only when the original image has changed.

Solution (if you have one)

werf:
  changed:
    backend: true
    frontend: false
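
A hypothetical usage of such a service value in a chart template (this value does not exist yet):

# .helm/templates/assets-job.yaml (hypothetical)
{{- if $.Values.werf.changed.backend }}
apiVersion: batch/v1
kind: Job
metadata:
  name: reupload-assets
# ... job spec that rebuilds/re-uploads the remote assets ...
{{- end }}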

Additional information

No response

Add flag to disable werf.io/version annotation

Before proceeding

  • I didn't find a similar issue

Problem

I'm using werf as a CMP for Argo CD, and every time I update the werf version, all apps become OutOfSync.

Solution (if you have one)

Add a CLI flag for werf render to disable werf annotations: all of them, or only werf.io/version.

Additional information

No response
