Metacontroller

Metacontroller is an add-on for Kubernetes that makes it easy to write and deploy custom controllers in the form of simple scripts. This is a continuation of the great work started by GKE. We are excited to move forward with Metacontroller as a community-maintained project. A big thank you to all of the wonderful Metacontroller community members that made this happen!

Documentation

Please see the documentation site for details on how to install, use, or contribute to Metacontroller.

Contact

Please file GitHub issues for bugs, feature requests, and proposals.

Join the #metacontroller channel on Kubernetes Slack.

Contributing

See CONTRIBUTING.md and the contributor guide.

Licensing

This project is licensed under the Apache License 2.0.

metacontroller's Issues

Provide docker image variant based on `distroless`

Proposed naming:

metacontrollerio/metacontroller:v4.0.1-distroless

The default image would still be the Alpine-based one, but users who prefer distroless could opt into this variant.

This requires adding another Dockerfile and attaching it to the pipeline run.

Migrate initial code base from GoogleCloudPlatform/metacontroller

Migrate the master branch from GoogleCloudPlatform/metacontroller to metacontroller/metacontroller, where it will be actively maintained. Work with GCP to archive GoogleCloudPlatform/metacontroller with a redirection notice that points users to the new repository.

Migrate to go-modules

While working on running the tests, I discovered that metac has already migrated to Go modules, which makes building the app and running unit tests easier; we should do the same here.

A related question: how should the metacontroller package be named? I noticed it is metacontroller.app - why not simply metacontroller?
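A minimal go.mod sketch for such a migration, assuming the module keeps the metacontroller.io import path already used by the tests; the Go version and dependency versions below are placeholders, not pinned recommendations:

module metacontroller.io

go 1.15

// Placeholder versions, chosen only for illustration.
require (
	k8s.io/apimachinery v0.20.2
	k8s.io/client-go v0.20.2
)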

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Location: renovate.json
Error type: Invalid JSON (parsing failed)
Message: Syntax error: expecting end of expression or separator near "] "pack

Flaky integration tests

From time to time the integration tests fail; after a rerun they pass. At first glance it looks like the API server is not ready when the test runs, which makes it fail.

Example :
3_Run unit & integration tests.txt

go test -i ./test/integration/...
PATH="/go/src/metacontroller.io/hack/bin:/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" go test ./test/integration/... -v -timeout 5m -args --logtostderr -v=1
I1128 13:49:24.761484     557 etcd.go:79] starting etcd on http://127.0.0.1:45695
I1128 13:49:24.761760     557 etcd.go:85] storing etcd data in: /tmp/integration_test_etcd_data632189156
I1128 13:49:24.763715     557 apiserver.go:72] starting kube-apiserver on http://127.0.0.1:34735
I1128 13:49:24.763790     557 apiserver.go:78] storing kube-apiserver data in: /tmp/integration_test_apiserver_data451004659
I1128 13:49:24.768606     557 main.go:78] Waiting for kube-apiserver to be ready...
W1128 13:49:24.929570     557 main.go:85] Kubectl error: exit status 1
I1128 13:49:26.671558     557 main.go:91] Kube-apiserver started
I1128 13:49:27.124037     557 apiserver.go:99] kube-apiserver exit status: signal: killed
I1128 13:49:27.125675     557 etcd.go:105] etcd exit status: signal: killed
cannot install metacontroller namespace: exit status 1
FAIL	metacontroller.io/test/integration/composite	2.381s
I1128 13:49:27.300148     612 etcd.go:79] starting etcd on http://127.0.0.1:34431
I1128 13:49:27.300278     612 etcd.go:85] storing etcd data in: /tmp/integration_test_etcd_data165717049
I1128 13:49:27.300729     612 apiserver.go:72] starting kube-apiserver on http://127.0.0.1:39351
I1128 13:49:27.300814     612 apiserver.go:78] storing kube-apiserver data in: /tmp/integration_test_apiserver_data248307780
I1128 13:49:27.303543     612 main.go:78] Waiting for kube-apiserver to be ready...
W1128 13:49:27.392095     612 main.go:85] Kubectl error: exit status 1
I1128 13:49:28.654643     612 main.go:91] Kube-apiserver started
=== RUN   TestSyncWebhook
I1128 13:49:31.348199     612 metacontroller.go:92] Starting CompositeController metacontroller
I1128 13:49:31.348209     612 controller.go:32] Waiting for caches to sync for CompositeController controller
I1128 13:49:31.348300     612 metacontroller.go:85] Starting DecoratorController metacontroller
I1128 13:49:31.348310     612 controller.go:32] Waiting for caches to sync for DecoratorController controller
    TestSyncWebhook: crd.go:72: Waiting for Parent CRD to appear in API server discovery info...
I1128 13:49:31.448421     612 controller.go:39] Caches are synced for DecoratorController controller
I1128 13:49:31.448872     612 controller.go:39] Caches are synced for CompositeController controller
    TestSyncWebhook: crd.go:85: Waiting for Parent CRD client List() to succeed...
    TestSyncWebhook: crd.go:72: Waiting for Child CRD to appear in API server discovery info...
    TestSyncWebhook: crd.go:85: Waiting for Child CRD client List() to succeed...
I1128 13:49:51.494864     612 controller.go:183] Starting DecoratorController dc
I1128 13:49:51.494885     612 controller.go:187] Waiting for DecoratorController dc caches to sync
I1128 13:49:51.494892     612 controller.go:32] Waiting for caches to sync for dc controller
    TestSyncWebhook: decorator_test.go:75: Waiting for child object to be created...
    TestSyncWebhook: fixture.go:132: error while waiting for condition: childs.test.metacontroller.io "test-sync-webhook" not found
I1128 13:49:51.595101     612 controller.go:39] Caches are synced for dc controller
I1128 13:49:51.596338     612 manage_children.go:254] Parent test-sync-webhook/test-sync-webhook: creating Child test-sync-webhook
I1128 13:49:51.755038     612 controller.go:211] Shutting down DecoratorController dc
--- PASS: TestSyncWebhook (20.42s)

[Feature request] API to trigger immediate sync

For controllers that depend on external state, Metacontroller currently offers resyncPeriodSeconds.

I'd like to propose that Metacontroller support an API/endpoint that would trigger an immediate synchronization (called from the mutator of the external state).
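A hypothetical sketch of what such a trigger endpoint could look like if it were added; the path, the query parameters, and the enqueuer interface are all made up to illustrate the request, not an existing Metacontroller API:

package trigger

import (
	"fmt"
	"net/http"
)

// enqueuer abstracts whatever work queue the controller uses internally.
// Both this interface and the endpoint below are hypothetical.
type enqueuer interface {
	EnqueueImmediate(namespace, name string)
}

// triggerHandler returns an HTTP handler for something like
// POST /api/v1/trigger?namespace=foo&name=bar (path and parameters made up).
func triggerHandler(q enqueuer) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		ns := r.URL.Query().Get("namespace")
		name := r.URL.Query().Get("name")
		if name == "" {
			http.Error(w, "missing name parameter", http.StatusBadRequest)
			return
		}
		// Put the parent object at the front of the work queue so it is
		// synced right away instead of waiting for resyncPeriodSeconds.
		q.EnqueueImmediate(ns, name)
		fmt.Fprintln(w, "sync triggered")
	}
}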

Unable to set multiple ports in service-per-pod

When I set multiple ports in the service-per-pod-ports annotation, the service creation fails because of missing port names. Are multiple ports supported, and is there a way to add the port names to the annotation? The hook only adds port and targetPort.
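For illustration, a sync-hook fragment in Go that would build named ports; the "name:port:targetPort" annotation format used here is an assumption for the sketch, not the current behaviour of the example hook:

package hook

import (
	"strconv"
	"strings"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// parsePorts turns an annotation value like "web:80:8080,metrics:9090:9090"
// into named ServicePorts. The format is hypothetical; the existing example
// hook only understands unnamed port:targetPort pairs.
func parsePorts(annotation string) []corev1.ServicePort {
	var ports []corev1.ServicePort
	for _, entry := range strings.Split(annotation, ",") {
		parts := strings.Split(entry, ":")
		if len(parts) != 3 {
			continue // skip malformed entries
		}
		port, _ := strconv.Atoi(parts[1])
		target, _ := strconv.Atoi(parts[2])
		ports = append(ports, corev1.ServicePort{
			Name:       parts[0],
			Port:       int32(port),
			TargetPort: intstr.FromInt(target),
		})
	}
	return ports
}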

I cannot get Istio gateways with the decorator controller

apiVersion: metacontroller.k8s.io/v1alpha1
kind: DecoratorController
metadata:
  name: gateway-configmap-generator-controller
spec:
  resources:
  - apiVersion: networking.istio.io/v1beta1
    resource: gateways
  hooks:
    sync:
      webhook:
        url: http://gateway-configmap-generator-controller.metacontroller/sync
        timeout: 10s

This fails with:

E0201 14:53:00.360131       1 discovery.go:103] Failed to fetch discovery info: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request

Discovery - can metacontroller benefit from building on top of existing tooling?

Description

There are several options for building controllers in Kubernetes:

Metacontroller currently does not use any of them; this issue is about discovering whether any of them could be adopted to simplify the metacontroller architecture.

Actions

  • quickly go over each of the mentioned frameworks and check whether it fits metacontroller.

Enhancement: Emit kubernetes events

Description

When handling resources, emit events to record what happened (e.g. sync started, sync failed, webhook execution failed). This will let users easily find out what is happening with their resources without having to check the metacontroller logs.

Scope

  • add an API (in the metacontroller codebase) which allows attaching an event to a particular resource (see the sketch at the end of this issue)
  • should cover a wide range of Kubernetes versions

Out of scope

  • it is not required to add events in all places in the metacontroller source code; that can be done later
  • this is not about allowing third-party controllers to add events while processing their sync/customize/finalize webhooks

Related issues: GoogleCloudPlatform/metacontroller#7
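A minimal sketch of how events could be attached to a resource using client-go's EventRecorder; the component name and the reason string are illustrative choices, not an agreed convention:

package events

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/record"
)

// newRecorder wires up an EventRecorder that writes Events through the given
// clientset, attributed to the "metacontroller" component.
func newRecorder(client kubernetes.Interface) record.EventRecorder {
	broadcaster := record.NewBroadcaster()
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
		Interface: client.CoreV1().Events(""),
	})
	return broadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: "metacontroller"})
}

// recordSyncFailed attaches a Warning event to the parent object; the reason
// string "SyncFailed" is an example name only.
func recordSyncFailed(recorder record.EventRecorder, parent runtime.Object, err error) {
	recorder.Eventf(parent, corev1.EventTypeWarning, "SyncFailed", "sync webhook failed: %v", err)
}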

Publish ARM64 image

Please publish an ARM64 container image. The current image supports AMD64 only.
Thank you!

`Status` in CompositeController isn't set/populated from `sync` response

While writing one of the examples, I noticed that for CompositeController the Status field is not updated correctly. I traced the issue to https://github.com/metacontroller/metacontroller/blob/master/controller/composite/controller.go#L622 ; in the subsequent method:

if changed := update(current); !changed {

https://github.com/metacontroller/metacontroller/blob/master/dynamic/clientset/clientset.go#L194 the updated current object has the Status field, but after the invocation:

		if rc.HasSubresource("status") {
			result, err = rc.UpdateStatus(current, metav1.UpdateOptions{})
		} else {
			result, err = rc.Update(current, metav1.UpdateOptions{})
		}

result does not have it.

Referencing some similar issues from original repo : GoogleCloudPlatform/metacontroller#159 GoogleCloudPlatform/metacontroller#201
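For reference, a sketch of the two-call pattern that the status subresource generally requires with the plain client-go dynamic client (assumed here instead of metacontroller's own clientset wrapper): Update ignores changes to .status and UpdateStatus ignores changes to .spec, so the desired status has to be written separately. This illustrates the general behaviour, not the exact metacontroller fix:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/client-go/dynamic"
)

// updateSpecAndStatus sketches the pattern needed once the status subresource
// is enabled: Update() silently drops .status, so status is written separately
// through UpdateStatus() on the freshly updated object.
func updateSpecAndStatus(rc dynamic.ResourceInterface, desired *unstructured.Unstructured) (*unstructured.Unstructured, error) {
	updated, err := rc.Update(context.TODO(), desired, metav1.UpdateOptions{})
	if err != nil {
		return nil, err
	}
	// Carry the desired status over to the updated object (which now has the
	// new resourceVersion) before calling the status subresource.
	updated.Object["status"] = desired.Object["status"]
	return rc.UpdateStatus(context.TODO(), updated, metav1.UpdateOptions{})
}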

Set up an adopters.md file

It would be great if companies could publicly declare that they use metacontroller, so that the project gains transparency into its userbase.

Metacontroller CRDs are in a protected group

Thanks @fpetkovski for spotting,

metacontroller CRDs are in a group called metacontroller.k8s.io, which means the group is protected - https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190612-crd-group-protection.md#k8sio-group-protection

so we temporarily need to add annotations marking them as unapproved - https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190612-crd-group-protection.md#what-to-do-if-you-accidentally-put-an-unapproved-api-in-a-protected-group - as in #53

Final solution

Either we need to ask for approval https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190612-crd-group-protection.md#what-to-do-if-you-accidentally-put-an-unapproved-api-in-a-protected-group or we should change the group name.

I would first check whether our API is approvable, and what requirements we need to meet.

Discovery - running e2e tests (examples/test.sh) in CI

Description

This project has e2e tests in the form of test.sh files in each of the examples, which could potentially be run with GitHub Actions.
For that to work:

  1. Kubernetes must be installed/run in the CI container
  2. the metacontroller image must be built and loaded into the cluster,
  3. the metacontroller manifests must be applied to the cluster,
  4. kubectl must be available in the PATH
  5. examples/test.sh must be run

Todo

The first part of the discovery should be an exploration of the various Kubernetes distributions suitable for a CI run.
Options :

  • kind
  • minikube
  • k3s
  • any others ?

Examples cleanup - convert the Go-based examples to Go modules

Description

Currently, some dependencies exist in go.mod only because they are used in examples. As part of the migration to Go modules, the examples written in Go should also be converted to modules.

Todo

Go over the examples; if an example is written in Go, migrate it to Go modules.

Refactor: Rebase on client-go dynamic informers

Description

Based on a Slack conversation:

if the new client-go dynamic informer does everything metacontroller needs (e.g. start/stop dynamically without process restart), it would be great to rebase onto it

Todo

  • select code which should be replaced
  • do the changes and run full set of tests (unit/integration/e2e)
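A minimal sketch of the client-go dynamic informer machinery the refactor would build on; the 30-minute resync period and the example GroupVersionResource are placeholders:

package sketch

import (
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/cache"
)

// startInformer starts a shared dynamic informer for one resource and stops
// it via the stop channel, without restarting the whole process.
func startInformer(client dynamic.Interface, stop <-chan struct{}) cache.SharedIndexInformer {
	factory := dynamicinformer.NewDynamicSharedInformerFactory(client, 30*time.Minute)
	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"} // example GVR
	informer := factory.ForResource(gvr).Informer()
	go informer.Run(stop) // the informer shuts down when stop is closed
	return informer
}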

Congratulations!

Hi -
I just wanted to congratulate you all on making release v0.4.1 and on moving this project forward again after all the transition effort.
Thanks,
Piers Harding.

... and I will close this issue :-)

Resources managed using metacontroller sometimes "hang" when the cache is being refreshed

First, here's some context on my setup:

  • I'm currently using metacontroller on a GKE cluster, which has a per-project rate limit of 10 QPS on the Kubernetes API server.
  • Metacontroller on my cluster is managing a lot of resources (~1000 parent resources).
  • Although ultimately GKE's rate limit is the primary cause of the issues I'm seeing, I suspect this is still something that metacontroller might want to handle, since the issues can probably still be triggered on a cluster with a higher rate limit by proportionally increasing the number of resources.

This leads to what looks like resources failing to update whenever metacontroller is syncing that resource's cache.

Here's an example of how this happens:

  • A parent resource has a running pod child resource.
  • Metacontroller starts syncing its caches (either because it was redeployed or because the caches expired). This results in the parent controller's work queue immediately becoming very large (all 1000+ resources are enqueued).
  • GKE throttles the requests from metacontroller, which means that the parent controller takes a long time to work through its queue.
  • When the pod child resource finishes running, this gets detected by metacontroller and the parent resource gets enqueued.
  • However, it takes a long time before metacontroller gets around to actually syncing the resource (I've had cases where it took 20+ minutes). This makes it look like the parent resources or the metacontroller are stuck in a bad state.

Given this issue, I am wondering:

  • Why does metacontroller clear its cache periodically? Is this safe to disable? I'm currently mitigating this issue by setting an unrealistically large cache flush interval (~100 years), but ideally I'd like to simply disable it as long as that doesn't break things (see the sketch after this list).
  • If periodically clearing the cache is necessary or useful, is it possible to instead do it on a per-item basis (i.e. a value that's 30 minutes old gets cleared and refreshed, rather than all cache values being cleared at every 30-minute interval)?
  • Similarly, why does flushing the cache also result in immediately rebuilding the entire cache and syncing everything? Why not simply clear the cache and let it get rebuilt as things continue to get processed?
  • Is it possible to prioritize syncs triggered by a change to a resource's child over syncs triggered by the cache refresh? This change might be as simple as changing the event handler functions to insert syncs at the front of the queue rather than adding them at the back.
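For context on the first question, a small sketch of the underlying client-go behaviour: shared informer factories treat a resync period of zero as "no periodic resync", so watch events still drive syncs but the wholesale re-enqueue of every object goes away. Whether metacontroller's own cache-flush setting can simply be mapped onto this is exactly the open question above:

package sketch

import (
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
)

// newFactoryWithoutResync builds a shared dynamic informer factory with the
// periodic resync disabled (a defaultResync of 0 means "never resync").
func newFactoryWithoutResync(client dynamic.Interface) dynamicinformer.DynamicSharedInformerFactory {
	return dynamicinformer.NewDynamicSharedInformerFactory(client, 0)
}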

After running test.sh, the configured replica count is not reached, and deleting the catset fails

My environment:
K8S: v1.17.14
Metacontroller: v1.1.1

Problem 1: the catset cannot reach the desired number of replicas

Pod logs of metacontroller-0:

[root@node1 ~]# kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
nginx-backend-0   1/1     Running   0          11m
[root@node1 ~]# kubectl -n metacontroller logs -f --tail 10 metacontroller-0
      }
    }
  },
  "finalizing": false
}
I1229 08:14:54.972159       1 webhook.go:54] DEBUG: webhook timeout: 10s
I1229 08:14:54.982961       1 webhook.go:66] DEBUG: webhook url: http://catset-controller.metacontroller/sync response body: {"status":{"replicas":1,"readyReplicas":1},"children":[{"metadata":{"labels":{"app":"nginx","component":"backend"},"name":"nginx-backend-1"},"spec":{"containers":[{"image":"nginx:stable-alpine","name":"nginx","ports":[{"containerPort":80,"name":"web"}],"volumeMounts":[{"mountPath":"/usr/share/nginx/html","name":"www"}]}],"terminationGracePeriodSeconds":1,"hostname":"nginx-backend-1","subdomain":"nginx-backend","volumes":[{"name":"www","persistentVolumeClaim":{"claimName":"www-nginx-backend-1"}}]},"apiVersion":"v1","kind":"Pod"},{"metadata":{"labels":{"app":"nginx","component":"backend"},"name":"nginx-backend-0"},"spec":{"containers":[{"image":"nginx:stable-alpine","name":"nginx","ports":[{"containerPort":80,"name":"web"}],"volumeMounts":[{"mountPath":"/usr/share/nginx/html","name":"www"}]}],"terminationGracePeriodSeconds":1,"hostname":"nginx-backend-0","subdomain":"nginx-backend","volumes":[{"name":"www","persistentVolumeClaim":{"claimName":"www-nginx-backend-0"}}]},"apiVersion":"v1","kind":"Pod"},{"metadata":{"labels":{"app":"nginx","component":"backend"},"name":"www-nginx-backend-0"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"apiVersion":"v1","kind":"PersistentVolumeClaim"},{"metadata":{"labels":{"app":"nginx","component":"backend"},"name":"www-nginx-backend-1"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"apiVersion":"v1","kind":"PersistentVolumeClaim"},{"metadata":{"labels":{"app":"nginx","component":"backend"},"name":"www-nginx-backend-2"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"apiVersion":"v1","kind":"PersistentVolumeClaim"}]}
I1229 08:14:54.984309       1 webhook.go:66] DEBUG: webhook url: http://catset-controller.metacontroller/sync response body: {"status":{"replicas":1,"readyReplicas":1},"children":[{"metadata":{"labels":{"app":"nginx","component":"backend"},"name":"nginx-backend-1"},"spec":{"containers":[{"image":"nginx:stable-alpine","name":"nginx","ports":[{"containerPort":80,"name":"web"}],"volumeMounts":[{"mountPath":"/usr/share/nginx/html","name":"www"}]}],"terminationGracePeriodSeconds":1,"hostname":"nginx-backend-1","subdomain":"nginx-backend","volumes":[{"name":"www","persistentVolumeClaim":{"claimName":"www-nginx-backend-1"}}]},"apiVersion":"v1","kind":"Pod"},{"metadata":{"labels":{"app":"nginx","component":"backend"},"name":"nginx-backend-0"},"spec":{"containers":[{"image":"nginx:stable-alpine","name":"nginx","ports":[{"containerPort":80,"name":"web"}],"volumeMounts":[{"mountPath":"/usr/share/nginx/html","name":"www"}]}],"terminationGracePeriodSeconds":1,"hostname":"nginx-backend-0","subdomain":"nginx-backend","volumes":[{"name":"www","persistentVolumeClaim":{"claimName":"www-nginx-backend-0"}}]},"apiVersion":"v1","kind":"Pod"},{"metadata":{"labels":{"app":"nginx","component":"backend"},"name":"www-nginx-backend-0"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"apiVersion":"v1","kind":"PersistentVolumeClaim"},{"metadata":{"labels":{"app":"nginx","component":"backend"},"name":"www-nginx-backend-1"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"apiVersion":"v1","kind":"PersistentVolumeClaim"},{"metadata":{"labels":{"app":"nginx","component":"backend"},"name":"www-nginx-backend-2"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"apiVersion":"v1","kind":"PersistentVolumeClaim"}]}
I1229 08:14:54.985102       1 controller_revision.go:277] CatSet default/nginx-backend: updating ControllerRevision catsets.ctl.enisoc.com-4006d259a964ee57dd922da9a45e2c6ed993b02d
E1229 08:14:54.990604       1 controller.go:223] failed to sync CatSet "default/nginx-backend": CatSet default/nginx-backend: can't reconcile ControllerRevisions: can't update ControllerRevision catsets.ctl.enisoc.com-4006d259a964ee57dd922da9a45e2c6ed993b02d for CatSet default/nginx-backend: controllerrevisions.metacontroller.k8s.io "catsets.ctl.enisoc.com-4006d259a964ee57dd922da9a45e2c6ed993b02d" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update

Problem 2: the finalize hook blocks and the catset deletion fails

++ kubectl get catsets nginx-backend -o 'jsonpath={.status.readyReplicas}'
+ [[ 0 -eq 3 ]]
+ sleep 1
^C++ cleanup
++ set +e
++ echo 'Clean up...'
Clean up...
++ kubectl patch catsets nginx-backend --type=merge -p '{"metadata":{"finalizers":[]}}'
catset.ctl.enisoc.com/nginx-backend patched
++ kubectl delete -f my-catset.yaml
service "nginx-backend" deleted
catset.ctl.enisoc.com "nginx-backend" deleted

After removing the finalize hook, the catset is deleted successfully:

hooks:
  sync:
    webhook:
      url: http://catset-controller.metacontroller/sync
  # finalize:
  #   webhook:
  #     url: http://catset-controller.metacontroller/sync

kubectl apply -f catset-controller.yaml

^C++ cleanup
++ set +e
++ echo 'Clean up...'
Clean up...
++ kubectl patch catsets nginx-backend --type=merge -p '{"metadata":{"finalizers":[]}}'
catset.ctl.enisoc.com/nginx-backend patched
++ kubectl delete -f my-catset.yaml
service "nginx-backend" deleted
catset.ctl.enisoc.com "nginx-backend" deleted
++ kubectl delete po,pvc -l app=nginx,component=backend
pod "nginx-backend-0" deleted
persistentvolumeclaim "www-nginx-backend-0" deleted
persistentvolumeclaim "www-nginx-backend-1" deleted
persistentvolumeclaim "www-nginx-backend-2" deleted
++ kubectl delete -f catset-controller.yaml
customresourcedefinition.apiextensions.k8s.io "catsets.ctl.enisoc.com" deleted
compositecontroller.metacontroller.k8s.io "catset-controller" deleted
deployment.apps "catset-controller" deleted
service "catset-controller" deleted
++ kubectl delete configmap catset-controller -n metacontroller
configmap "catset-controller" deleted

Make it possible to delete parent resources

I'm using metacontroller to implement a resource that works similarly to batch jobs. This works fine at first, but after a while the system starts to accumulate a lot of those "finished" jobs. I'd like to be able to implement something like ttlSecondsAfterFinished [1] for my resource using metacontroller.

As far as I can tell, this is not currently possible. However, IIUC it should be relatively easy to add a delete flag to the data returned by the controller which would be used for "clean up" purposes.

I'm happy to volunteer my time to implement this, or an alternative solution if you prefer.

[1] See: https://kubernetes.io/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically
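A hypothetical sketch of what the extended sync response could look like on the Go side; the Delete field does not exist today and is only meant to illustrate the proposal:

package sketch

// SyncResponse mirrors the general shape of a CompositeController sync-hook
// reply ("status" plus "children"). The Delete field is hypothetical: if set,
// metacontroller would delete the parent object itself, enabling
// ttlSecondsAfterFinished-style cleanup.
type SyncResponse struct {
	Status   map[string]interface{}   `json:"status,omitempty"`
	Children []map[string]interface{} `json:"children"`
	Delete   bool                     `json:"delete,omitempty"` // hypothetical, not part of the current API
}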

Use Case: controller for pods launched by Airflow to enable Vertical Pod Autoscaling

Hello,

I asked about this on the Kubernetes Slack, and folks suggested I do a more detailed writeup and post an issue here.

I have created a repo with a pretty detailed description of the problem and how I think Metacontroller might help: https://github.com/RayBB/airflow-vpa

In short: if metacontroller could create a controller that acts as a thin passthrough to a label selector, it would likely work for my use case. It may not be the optimal solution, but it would be much easier than getting other projects to change the way they work.

Does this sound like something that would be a good fit for metacontroller? I read through the docs and it seems reasonable but I am not sure how to get started.

Enhancement: Better recognition of differences between old/new object state

Description

While looking around the internet I came across this discussion - kubernetes-sigs/kubebuilder#592 - which revealed new options for comparing object differences that are more effective than reflect.DeepEqual, since they compare only the relevant fields (but we still need to check that metacontroller isn't updating any of them). This would reduce the number of calls to the API server, which is promising for bigger clusters (with a huge number of objects managed by metacontroller).

Options

  1. As mentioned in kubernetes/apimachinery#75, use semantic equality - https://godoc.org/k8s.io/apimachinery/pkg/api/equality . There is also a mention of the DeepDerivative function - kubernetes-sigs/kubebuilder#592 (comment)
  2. https://github.com/banzaicloud/k8s-objectmatcher - an external library for comparing objects, made exactly for the purpose described above
  3. Use https://github.com/mitchellh/hashstructure to calculate a hash of the desired object state, then add it as an annotation on the object. On reconcile, check whether the new hash is the same as the old one; if it is, ignore the update.

Roadmap

Before we even start, I would suggest creating a metacontroller equality package, which would initially just call reflect.DeepEqual, and then gradually switch to option 1 behind an additional flag, along with debug code (e.g. if debug is enabled, call both functions and log whenever they disagree).

We also need to check the places where metacontroller modifies metadata and add an additional check there.
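A sketch of what the proposed equality package could start out as, using the apimachinery semantic-equality helper from option 1; the flag name and the debug branch are illustrative:

package equality

import (
	"reflect"

	apiequality "k8s.io/apimachinery/pkg/api/equality"
	"k8s.io/klog/v2"
)

// ObjectsEqual starts as a thin wrapper over reflect.DeepEqual and, when the
// experimental flag is on, also runs the semantic comparison and logs any
// disagreement so the two approaches can be compared on real clusters.
func ObjectsEqual(a, b interface{}, useSemantic bool) bool {
	deep := reflect.DeepEqual(a, b)
	if useSemantic {
		semantic := apiequality.Semantic.DeepEqual(a, b)
		if deep != semantic {
			klog.V(1).Infof("equality mismatch: DeepEqual=%v Semantic=%v", deep, semantic)
		}
		return semantic
	}
	return deep
}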

Update deprecated API apiextensions.k8s.io/v1beta1

Update the deprecated API group apiextensions.k8s.io/v1beta1 in the code.

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md#deprecations-and-removals
The apiextensions.k8s.io/v1beta1 version of CustomResourceDefinition is deprecated and will no longer be served in v1.19. Use apiextensions.k8s.io/v1 instead. (#79604, @liggitt)

The CustomResourceDefinition API type is promoted to apiextensions.k8s.io/v1 with the following changes:

  • Use of the new default feature in validation schemas is limited to v1
  • spec.scope is no longer defaulted to Namespaced and must be explicitly specified
  • spec.version is removed; use spec.versions instead
  • spec.validation is removed; use spec.versions[*].schema instead
  • spec.subresources is removed; use spec.versions[*].subresources instead
  • spec.additionalPrinterColumns is removed; use spec.versions[*].additionalPrinterColumns instead
  • spec.conversion.webhookClientConfig is moved to spec.conversion.webhook.clientConfig
  • spec.conversion.conversionReviewVersions is moved to spec.conversion.webhook.conversionReviewVersions
  • spec.versions[*].schema.openAPIV3Schema is now required when creating v1 CustomResourceDefinitions
  • spec.preserveUnknownFields: true is disallowed when creating v1 CustomResourceDefinitions; it must be specified within schema definitions as x-kubernetes-preserve-unknown-fields: true
  • In additionalPrinterColumns items, the JSONPath field was renamed to jsonPath (fixes kubernetes/kubernetes#66531)
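A sketch of the client-side change, switching a CRD lister from the v1beta1 to the v1 API group; it assumes a reasonably recent apiextensions client-go where List takes a context:

package sketch

import (
	"context"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// listCRDs lists CustomResourceDefinitions through apiextensions.k8s.io/v1
// instead of the deprecated v1beta1 version.
func listCRDs(client apiextensionsclient.Interface) error {
	// Before: client.ApiextensionsV1beta1().CustomResourceDefinitions().List(...)
	crds, err := client.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, crd := range crds.Items {
		_ = crd.Spec.Versions // v1 requires per-version schemas instead of spec.validation
	}
	return nil
}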
