Velero's Introduction

Overview

Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a public cloud platform or on-premises.

Velero lets you:

  • Take backups of your cluster and restore in case of loss.
  • Migrate cluster resources to other clusters.
  • Replicate your production cluster to development and testing clusters.

Velero consists of:

  • A server that runs on your cluster
  • A command-line client that runs locally

Documentation

The documentation provides a getting started guide and information about building from source, architecture, extending Velero and more.

Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero.

Troubleshooting

If you encounter issues, review the troubleshooting docs, file an issue, or talk to us on the #velero channel on the Kubernetes Slack server.

Contributing

If you are ready to jump in and test, add code, or help with documentation, follow the instructions in our Start contributing documentation for guidance on how to set up Velero for development.

Changelog

See the list of releases to find out about feature changes.

Velero compatibility matrix

The following is a list of the supported Kubernetes versions for each Velero version.

Velero version   Expected Kubernetes version compatibility   Tested on Kubernetes versions
1.14             1.18-latest                                 1.27.9, 1.28.9, and 1.29.4
1.13             1.18-latest                                 1.26.5, 1.27.3, 1.27.8, and 1.28.3
1.12             1.18-latest                                 1.25.7, 1.26.5, 1.26.7, and 1.27.3
1.11             1.18-latest                                 1.23.10, 1.24.9, 1.25.5, and 1.26.1
1.10             1.18-latest                                 1.22.5, 1.23.8, 1.24.6, and 1.25.1

Velero supports IPv4, IPv6, and dual-stack environments. This support was tested against Velero v1.8.

The Velero maintainers are continuously working to expand testing coverage, but are not able to test every combination of Velero and supported Kubernetes versions for each Velero release. The table above is meant to track the current testing coverage and the expected supported Kubernetes versions for each Velero version.

If you are interested in using a different version of Kubernetes with a given Velero version, we recommend that you perform testing before installing or upgrading your environment. For full information about the capabilities within a release, also see the Velero release notes or Kubernetes release notes. See the Velero support page for information about supported versions of Velero.

For each release, the Velero maintainers run tests to verify the upgrade path from the n-2 minor releases. For example, before the release of v1.10.x, the tests verify that backups created by v1.9.x and v1.8.x can be restored using the build to be tagged as v1.10.x.

Velero's Issues

Set User-Agent

Right now it's ark/v1.7.1 (linux/amd64) kubernetes/$Format, where v1.7.1 comes from kube/client-go. We should probably do something like:

ark/v0.3.2 (linux/amd64) 35b865d
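A minimal sketch of how the proposed string could be set, assuming the version and git SHA are injected at build time and using client-go's rest.Config.UserAgent field:

package client

import (
  "fmt"
  "runtime"

  "k8s.io/client-go/rest"
  "k8s.io/client-go/tools/clientcmd"
)

// newConfig builds a rest.Config with an explicit User-Agent instead of the
// client-go default. version and gitSHA are assumed to come from -ldflags.
func newConfig(kubeconfig, version, gitSHA string) (*rest.Config, error) {
  cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
  if err != nil {
    return nil, err
  }
  // Produces e.g. "ark/v0.3.2 (linux/amd64) 35b865d"
  cfg.UserAgent = fmt.Sprintf("ark/%s (%s/%s) %s", version, runtime.GOOS, runtime.GOARCH, gitSHA)
  return cfg, nil
}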

BackupService.GetAllBackups should be forgiving when there's partial data

While working on #40, I accidentally ended up with a bucket with these contents:

mybackup/
  mybackup.log.gz

The code in GetAllBackups kept returning an error that "the specified key does not exist". It took me a while to figure out what was going on: when it iterates through the backup directories, if it can't retrieve mybackup.tar.gz, it immediately returns an error. I think we should instead only return an error if ListCommonPrefixes() fails. Any errors inside the for loop should be logged as errors, but we should keep trying to read whatever remains.

cc @skriss
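A rough sketch of the proposed behavior (the interface and method names below are stand-ins, not the actual Ark types): fail only when listing prefixes fails, and log-and-skip individual backups with missing or unreadable data.

package cloudprovider

import (
  "fmt"
  "log"
  "strings"
)

// ObjectStore is a stand-in for the backup service's storage layer.
type ObjectStore interface {
  ListCommonPrefixes(bucket, delimiter string) ([]string, error)
  GetBackup(bucket, name string) (string, error)
}

// getAllBackups returns an error only if listing prefixes fails; errors for
// individual backups are logged and the loop keeps going.
func getAllBackups(store ObjectStore, bucket string) ([]string, error) {
  prefixes, err := store.ListCommonPrefixes(bucket, "/")
  if err != nil {
    return nil, fmt.Errorf("error listing backup prefixes: %v", err)
  }

  var backups []string
  for _, prefix := range prefixes {
    name := strings.TrimSuffix(prefix, "/")
    backup, err := store.GetBackup(bucket, name)
    if err != nil {
      log.Printf("error getting backup %q, skipping: %v", name, err)
      continue
    }
    backups = append(backups, backup)
  }
  return backups, nil
}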

Restore progress

Similar to #20, do something similar to periodically update the progress of a restore.

[Design Question] Restructure of command hierarchy

Closely related to #60, I was curious what people's thoughts were on changing the order in which the CLI nests actions vs. resources. In particular, would people be open to looking at the UX of a closer parallel to kubectl itself?

Ex.
Current: ark backup create
Proposed: ark create backup

As we continue to build more things "off of" the existing resources, such as the downloading of backup logs/data, it occurs to me that there is a strong correlation with the regular CRUD operations that exist for other resources in the ecosystem.

If people are interested, I could take a stab at reworking the hierarchy of the CLI.
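A minimal sketch of what the proposed nesting could look like with cobra (command names here are illustrative only): the verb becomes the parent command and the resource becomes the subcommand, mirroring kubectl.

package main

import "github.com/spf13/cobra"

// newCreateCommand wires "create" as the parent and "backup" as a child,
// yielding "ark create backup NAME" instead of "ark backup create NAME".
func newCreateCommand() *cobra.Command {
  create := &cobra.Command{Use: "create"}

  backup := &cobra.Command{
    Use: "backup NAME",
    RunE: func(cmd *cobra.Command, args []string) error {
      // ... create the backup here ...
      return nil
    },
  }

  create.AddCommand(backup)
  return create
}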

PVCs are restored as lost on AWS

Hello,

I tried an example of cluster migration with the stable/prometheus helm chart with persistent volumes enabled.

I ran:
ark backup create cluster-backup --selector backup=ark --snapshot-volumes

The backup worked fine and was created but when I run:
ark restore create cluster-backup --restore-volumes

It restores everything; however, the PVCs come back as lost.

kubectl describe on the PVC says:
"Warning ClaimLost Bound claim has lost its PersistentVolume. Data on the volume is lost!"

restore says complete, but actually had error

It appears that if a "sync" period has not happened yet, the restore actually fails, but the ark command says it is successful:

NAME                         BACKUP        STATUS      WARNINGS   ERRORS    CREATED                         SELECTOR
everything4-20170814154904   everything4   Completed   0          1         2017-08-14 15:49:04 +0000 UTC   <none>

However the logs show the following and nothing was restored:

I0814 15:49:04.807551       1 restore_controller.go:183] processRestore for key "heptio-ark/everything4-20170814154904"
I0814 15:49:04.807723       1 restore_controller.go:190] Getting restore heptio-ark/everything4-20170814154904
I0814 15:49:04.807759       1 restore_controller.go:211] Cloning restore heptio-ark/everything4-20170814154904
I0814 15:49:04.812273       1 restore_controller.go:242] running restore for heptio-ark/everything4-20170814154904
E0814 15:49:04.812364       1 restore_controller.go:284] error getting backup: backup.ark.heptio.com "everything4" not found
I0814 15:49:04.812450       1 restore_controller.go:246] restore heptio-ark/everything4-20170814154904 completed
I0814 15:49:04.812476       1 restore_controller.go:249] updating restore heptio-ark/everything4-20170814154904 final status

The backup everything4 is in the bucket, but a sync hasn't happened yet, so if you run ark backup get you only see the following:

[ec2-user@ip-172-31-25-73 ~]$ ark backup get
NAME          STATUS      CREATED                         EXPIRES   SELECTOR
everything3   Completed   2017-08-14 15:39:35 +0000 UTC   23h       <none>

IMO it seems like two things should happen:

  • if an error like the one shown above happens, the restore should not be marked as Completed (a rough sketch follows this list)
  • when a request to create a restore happens, it should force a sync
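A rough sketch of the first point, using simplified, hypothetical types rather than the real Ark restore controller: if fetching the backup fails, record the error and mark the restore as failed instead of Completed.

package controller

// Simplified stand-ins for the real restore status types.
type RestorePhase string

const (
  RestorePhaseCompleted        RestorePhase = "Completed"
  RestorePhaseFailedValidation RestorePhase = "FailedValidation"
)

type RestoreStatus struct {
  Phase  RestorePhase
  Errors []string
}

// finalizeRestore marks the restore as failed when the backup lookup failed,
// instead of unconditionally marking it Completed.
func finalizeRestore(status *RestoreStatus, backupErr error) {
  if backupErr != nil {
    status.Phase = RestorePhaseFailedValidation
    status.Errors = append(status.Errors, backupErr.Error())
    return
  }
  status.Phase = RestorePhaseCompleted
}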

Enable pre/post-volume snapshot hooks for executing workload-specific actions

Some workloads (e.g. databases) may require pre- and/or post-volume snapshot actions to be taken (e.g. flushing caches, fsfreeze) to ensure that the snapshot/backup taken is valid. We should create user-pluggable hooks to allow this kind of workload-specific logic to be executed during backups & restores.

Simplify deployment yaml

I think we can simplify the deployment yaml to be cloud-provider agnostic. Right now the "common" one is used for AWS and GCP and it contains this snippet:

env:
  - name: AWS_SHARED_CREDENTIALS_FILE
    value: /credentials/cloud

Our GCP instructions say you need to rename AWS_SHARED_CREDENTIALS_FILE to GOOGLE_APPLICATION_CREDENTIALS, so it's not really a "common" file since you have to edit it for GCP.

For Azure, we have a different file where everything is identical except instead of using a volume/volumeMount to specify the cloud provider credentials, we expose all the data in our secret as environment variables:

envFrom:
  - secretRef:
      name: cloud-credentials

I propose we consider this:

  1. Remove the Azure deployment yaml
  2. Remove the env section from the common deployment
  3. Modify our cloud-credentials secret:
    1. Add a new required key, provider (valid values: aws, azure, gcp)
    2. The remainder of the file is provider-specific
  4. Modify our server code to look at the provider value and proceed appropriately (sketched below)
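A hedged sketch of what point 4 might look like (the mount path, key name, and valid values are assumptions): the server reads the provider key from the mounted cloud-credentials secret and branches on it.

package server

import (
  "fmt"
  "os"
  "strings"
)

// detectProvider reads the hypothetical provider key from the mounted
// cloud-credentials secret and validates it.
func detectProvider() (string, error) {
  data, err := os.ReadFile("/credentials/provider")
  if err != nil {
    return "", fmt.Errorf("reading provider key from cloud-credentials secret: %v", err)
  }
  switch provider := strings.TrimSpace(string(data)); provider {
  case "aws", "azure", "gcp":
    return provider, nil
  default:
    return "", fmt.Errorf("unsupported provider %q", provider)
  }
}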

WDYT @skriss @abiogenesis-now?

Improved backup resource selection

If you do ark backup create foo --include-namespaces andy, that will back up everything in the 'andy' namespace AND all cluster-scoped resources (the following list is from my local v1.7.3, supporting create + list):

  • apiservices
  • certificatesigningrequests
  • clusterrolebindings
  • clusterroles
  • customresourcedefinitions
  • namespaces
  • nodes
  • persistentvolumes
  • podsecuritypolicies
  • storageclasses
  • thirdpartyresources

If you want to back up a subset of resources, you have to specify some combination of --include-resources, --exclude-resources, and/or --selector.

If we decide that one of Ark's core use cases is to allow one or more namespaces to be backed up, we should find a way to limit the resources backed up to just those used by the namespace(s) in question.

Add `velero install` command

It would be great to have E2E scripts for each cloud provider that go through a full install of Ark including pre-reqs and server installation. We can have the user fill in the necessary options at the beginning. The docs are great for understanding what's going on step-by-step but it takes a lot of copy-pasting to get a working install going.

AWS compatible support no longer working

First off, thanks for the interesting project, appreciate the effort. I was walking through the tutorial for using Ark w/ Minio and a PV backup.

I believe that commit #35 has broken support for AWS-compatible services such as Minio, as it is now attempting to actually validate the EC2 availability zones and is using the minio region, which does not exist.

Looking into the code, the endpoint resolver is set up to deal with this for S3; however, other services are explicitly not supported.

Steps to reproduce:

  • Follow the examples with the Minio provider.

Outcome:

  • Ark server errors with:
caused by: Post https://ec2.minio.amazonaws.com/: dial tcp: lookup ec2.minio.amazonaws.com on 10.0.0.10:53: server misbehaving
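For reference, a hedged sketch of how an S3-compatible endpoint such as Minio can be targeted with aws-sdk-go, using an explicit endpoint and path-style addressing instead of resolving the region through EC2 (the endpoint URL is an example value):

package cloudprovider

import (
  "github.com/aws/aws-sdk-go/aws"
  "github.com/aws/aws-sdk-go/aws/session"
  "github.com/aws/aws-sdk-go/service/s3"
)

// newS3CompatibleClient builds an S3 client against an explicit endpoint,
// e.g. "http://minio.heptio-ark.svc:9000", with path-style addressing.
func newS3CompatibleClient(region, endpoint string) (*s3.S3, error) {
  sess, err := session.NewSession(&aws.Config{
    Region:           aws.String(region),
    Endpoint:         aws.String(endpoint),
    S3ForcePathStyle: aws.Bool(true),
    DisableSSL:       aws.Bool(true),
  })
  if err != nil {
    return nil, err
  }
  return s3.New(sess), nil
}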

Binary clients

I'm a first timer playing around with ark. My k8s master is not publicly available; I access it via a bastion over an SSH tunnel, so using the docker alias method didn't work since https://kubernetes:9443 (which is the tunnel to my k8s API) isn't accessible from the container.

I made a change to the build to allow my Mac to get a binary client, and everything works fine.

Would you be open to generating clients? Or having the build make additional architectures? I'm happy to send a PR.

Document required IAM permissions

I understand that ARK is a moving target at this point, so it may not be advisable yet, but it would be helpful in Cloud Provider Specifics to catalog the required IAM policies necessary for the project to operate effectively.

As it stands now, the permissions are overly permissive. Paring them down to just what's needed would reduce the friction of deploying Ark in a "real" environment.

Scheduled job runs at incorrect time

Hey Heptio Team,

My scheduled job configuration looks like this:

NAME                STATUS    CREATED                         SCHEDULE    BACKUP TTL   LAST BACKUP   SELECTOR
kubernetes-backup   Enabled   2017-08-09 14:58:47 +0000 UTC   0 9 * * *   24h0m0s      58m ago       <none>

So it should run once per day, at 9 AM, right? https://crontab.guru/#0_9___*

However, I get backups every hour, whenever the minute is equal to 9:

NAME                               STATUS      CREATED                         EXPIRES   SELECTOR
kubernetes-backup-20170810130905   Completed   2017-08-10 13:09:05 +0000 UTC   23h       <none>
kubernetes-backup-20170810120905   Completed   2017-08-10 12:09:05 +0000 UTC   22h       <none>
kubernetes-backup-20170810110905   Completed   2017-08-10 11:09:05 +0000 UTC   21h       <none>
kubernetes-backup-20170810100905   Completed   2017-08-10 10:09:05 +0000 UTC   20h       <none>
kubernetes-backup-20170810090905   Completed   2017-08-10 09:09:05 +0000 UTC   19h       <none>
kubernetes-backup-20170810080905   Completed   2017-08-10 08:09:05 +0000 UTC   18h       <none>
kubernetes-backup-20170810070905   Completed   2017-08-10 07:09:05 +0000 UTC   17h       <none>
kubernetes-backup-20170810060905   Completed   2017-08-10 06:09:05 +0000 UTC   16h       <none>
kubernetes-backup-20170810050905   Completed   2017-08-10 05:09:05 +0000 UTC   15h       <none>
kubernetes-backup-20170810040905   Completed   2017-08-10 04:09:05 +0000 UTC   14h       <none>
kubernetes-backup-20170810030905   Completed   2017-08-10 03:09:05 +0000 UTC   13h       <none>
kubernetes-backup-20170810020905   Completed   2017-08-10 02:09:05 +0000 UTC   12h       <none>
kubernetes-backup-20170810010905   Completed   2017-08-10 01:09:05 +0000 UTC   11h       <none>
kubernetes-backup-20170810000905   Completed   2017-08-10 00:09:05 +0000 UTC   10h       <none>
kubernetes-backup-20170809230905   Completed   2017-08-09 23:09:05 +0000 UTC   9h        <none>
kubernetes-backup-20170809220905   Completed   2017-08-09 22:09:05 +0000 UTC   8h        <none>
kubernetes-backup-20170809210905   Completed   2017-08-09 21:09:05 +0000 UTC   7h        <none>
kubernetes-backup-20170809200905   Completed   2017-08-09 20:09:05 +0000 UTC   6h        <none>
kubernetes-backup-20170809190905   Completed   2017-08-09 19:09:05 +0000 UTC   5h        <none>
kubernetes-backup-20170809180905   Completed   2017-08-09 18:09:05 +0000 UTC   4h        <none>
kubernetes-backup-20170809170905   Completed   2017-08-09 17:09:05 +0000 UTC   3h        <none>
kubernetes-backup-20170809160905   Completed   2017-08-09 16:09:05 +0000 UTC   2h        <none>
kubernetes-backup-20170809150905   Completed   2017-08-09 15:09:05 +0000 UTC   1h        <none>
kubernetes-backup-20170809145847   Completed   2017-08-09 14:58:47 +0000 UTC   56m       <none>

Is the issue my configuration, or a bug? :)
Thanks!
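One plausible explanation (an assumption, not confirmed in this thread): if the schedule is parsed with a seconds-aware cron parser, "0 9 * * *" is read as "second 0 of minute 9 of every hour" rather than "09:00 daily". A small comparison using the robfig/cron package:

package main

import (
  "fmt"
  "time"

  "github.com/robfig/cron"
)

func main() {
  spec := "0 9 * * *"
  now := time.Date(2017, 8, 10, 12, 0, 0, 0, time.UTC)

  // Seconds-aware parser: fires at second 0 of minute 9 of every hour.
  if sched, err := cron.Parse(spec); err == nil {
    fmt.Println("seconds-aware next run:", sched.Next(now)) // 2017-08-10 12:09:00
  }

  // Standard five-field parser: fires at 09:00 every day.
  if sched, err := cron.ParseStandard(spec); err == nil {
    fmt.Println("standard next run:", sched.Next(now)) // 2017-08-11 09:00:00
  }
}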

Consider broadening ark service account permissions to cluster admin

The service account we use is currently allowed to list/watch/create all resources in all namespaces. While this is sufficient for all backup operations, it is only partially so for restore operations.

While most restore operations will succeed, attempting to restore roles and/or clusterroles that have greater privileges than the ark service account fails.

We might want to broaden the privileges to cluster-admin. WDYT @mattmoyer @jbeda @skriss?

Improve handling of preexisting resources during restore

When restoring, if the resource already exists, we record that as a warning and move on. I can think of a few things we could do:

  1. Upon seeing a preexisting resource, get it, diff against the backed up copy, and be silent if they're identical
  2. If they're different
    1. Log a warning
    2. Overwrite

We could provide options in restore.spec for how to handle this.

Are there other options we could pursue?

cc @skriss @jbeda
xref #23
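A hedged sketch of options 1 and 2 above, with a hypothetical restore.spec policy field (none of these names exist today): stay silent when identical, and warn or overwrite otherwise.

package restore

import "reflect"

// ExistingResourcePolicy is a hypothetical restore.spec field.
type ExistingResourcePolicy string

const (
  PolicyWarn      ExistingResourcePolicy = "warn"
  PolicyOverwrite ExistingResourcePolicy = "overwrite"
)

// handleExisting stays silent when the live object matches the backed-up copy,
// and otherwise warns or overwrites according to the policy.
func handleExisting(live, backedUp interface{}, policy ExistingResourcePolicy, warnf func(string, ...interface{}), overwrite func() error) error {
  if reflect.DeepEqual(live, backedUp) {
    return nil // identical: nothing to report
  }
  switch policy {
  case PolicyOverwrite:
    return overwrite()
  default:
    warnf("resource already exists in the cluster and differs from the backup")
    return nil
  }
}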

Config change detection gets a false positive when the watch expires

For various reasons, when you Get() an object, its kind and apiVersion are blank. When you receive the same object via a Watch(), those fields are present. Any time the watch expires (timeout, too old resource version), the config informer relists/rewatches and the update handler in watchConfig receives the current value of the config. Because of the mismatch described above, the code thinks there is a config change and shuts down.
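A minimal sketch of one way to avoid the false positive, assuming a simplified Config type: blank out TypeMeta (kind/apiVersion) on both objects before comparing, so Get() and Watch() copies of the same config compare equal.

package controller

import (
  "reflect"

  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Config is a simplified stand-in for the Ark config object.
type Config struct {
  metav1.TypeMeta
  Spec map[string]string
}

// configChanged ignores kind/apiVersion, which are blank on Get() but
// populated on Watch(), so a relist/rewatch doesn't look like a change.
func configChanged(old, cur *Config) bool {
  oldCopy, curCopy := *old, *cur
  oldCopy.TypeMeta = metav1.TypeMeta{}
  curCopy.TypeMeta = metav1.TypeMeta{}
  return !reflect.DeepEqual(oldCopy, curCopy)
}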

Exclude nodes from restores by default

It doesn't really make sense to restore a node, but there may be some value in including the node in the backup (especially once we get #16). We should make it so nodes are excluded from restores by default, AND we should make it "hard" for the user to say they want to include nodes as part of a restore.

xref #23

Add pprof support

Add pprof support. Expose it only to localhost, so you have to port-forward to get access to it.
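A minimal sketch of how this is commonly wired up in Go, binding to localhost only so access requires a port-forward (e.g. kubectl port-forward <pod> 6060:6060):

package main

import (
  "log"
  "net/http"
  _ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
  // Listen on localhost only; not reachable from outside the pod without
  // an explicit port-forward.
  log.Println(http.ListenAndServe("localhost:6060", nil))
}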

Backup progress

Provide a way for users to see the progress of an in-flight backup. Some thoughts:

  • if backing up all namespaces, first get a list of all namespaces so we know how many there are to process
  • try to find out how many different types of GroupResources are being backed up, so we can record progress per namespace
  • periodically record status somewhere in backup.status (a sketch of possible fields follows this list)
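A hedged sketch of the kind of fields that could be recorded periodically in backup.status (all names are hypothetical):

package backup

// Progress captures a coarse-grained view of an in-flight backup; it would be
// patched into backup.status on an interval.
type Progress struct {
  TotalNamespaces     int
  NamespacesCompleted int
  TotalGroupResources int
  ItemsBackedUp       int
}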

add --exclude-namespaces to restore

It's nice to always back up a superset of namespaces but then restore everything except a particular one or two. For example:

ark restore create everything --exclude-namespaces=kube-system --restore-volumes

It would be nice to have --exclude-namespaces as a restore option.

`ark backup download` has issues when you run `ark` as a container

ark backup download is “not working” because when you run ark in a container, it tries to write to $(pwd)/<file>, and 1) that’s inside the container’s filesystem, and 2) $(pwd) is /, so it tries to write to /<file>. Additionally if you specify --output FILE, that path is inside the container, so it either needs to be a bind mount from the host, or you'll have to docker cp to get the file out.

It works if you’re running the binary natively.

If we set WORKDIR in the Dockerfile to something like /host_pwd, then bind mount -v $(pwd):/host_pwd, it works, although 1) you're bind mounting whatever your current working dir is into the ark container, which might not be desirable 100% of the time, and 2) when it does download, it says Backup foo has been successfully downloaded to /host_pwd/foo-data.tar.gz instead of the real name of the directory on the host.

restoring from one GKE cluster to another leaves target cluster unusable

The goal was to perform a backup on one GKE cluster and then restore into a fresh, brand-new GKE cluster of the same version, number of nodes, etc. However, afterwards the following happens to the target cluster:

Unexpected Behavior

  • Trying to view the dashboard UI, proxy, or look at logs for a pod will yield the error:
$ kubectl --namespace=heptio-ark logs ark-1716490312-bkzhm 
Error from server: Get https://gke-cluster-2-default-pool-c30ff04e-055z:10250/containerLogs/heptio-ark/ark-1716490312-bkzhm/ark: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-3159bbe8fc682764335a"?
  • Listing the nodes in the cluster will list nodes that aren't present in the cluster and were the names of the nodes in the original cluster:
$ kubectl get nodes
NAME                                       STATUS    AGE       VERSION
gke-cluster-1-default-pool-c7c701f6-0pjb   Unknown   4m        
gke-cluster-1-default-pool-c7c701f6-l5ml   Unknown   4m        
gke-cluster-1-default-pool-c7c701f6-wp5n   Unknown   4m        
gke-cluster-2-default-pool-c30ff04e-055z   Ready     19m       v1.7.2
gke-cluster-2-default-pool-c30ff04e-czlx   Ready     19m       v1.7.2
gke-cluster-2-default-pool-c30ff04e-jz2h   Ready     19m       v1.7.2
  • After restoring, the cluster will become unresponsive for about a minute, with kubectl commands timing out and the cluster UI admin screen showing it as unavailable. It eventually recovers.

  • Services will not get LoadBalancer IPs associated with them

  • The ark restore log has numerous errors. See this gist https://gist.github.com/djschny/f0879b2173478ce03238b0a68e4345c4

Steps to reproduce

  • Create two 1.7.2 three-node clusters in GKE (cluster-1 and cluster-2)
  • Create some namespaces and deployments in cluster-1
  • Install ark on cluster-1
  • perform a backup: ark backup create everything --exclude-namespaces=kube-system,heptio-ark --snapshot-volumes, making sure to exclude system/sensitive namespaces
  • Install ark on cluster-2
  • perform a restore: ark restore create everything --restore-volumes

Restore does not work

Hey, the project is really awesome and you guys are trying to solve the right problem.

I'm using ark for backups and restores in GCP. I'm following the quickstart tutorial and everything goes smoothly until ark restore create nginx-backup.

My ark backup get results:

NAME           STATUS    CREATED                         EXPIRES   SELECTOR
nginx-backup   New       2017-09-13 16:51:44 +0000 UTC   23h       app=nginx

This is what I'm getting for ark restore create nginx-backup:

NAME                          BACKUP         STATUS    WARNINGS   ERRORS    CREATED                         SELECTOR
nginx-backup-20170913165738   nginx-backup   New       0          0         2017-09-13 16:57:39 +0000 UTC   <none>

So the status is New, not Completed, and there are no errors or warnings. Okay, then I looked at the restore yaml output (ark restore get nginx-backup-20170913165738 -o yaml):

kind: Restore
metadata:
  creationTimestamp: 2017-09-13T16:57:39Z
  name: nginx-backup-20170913165738
  namespace: heptio-ark
  resourceVersion: "15966846"
  selfLink: /apis/ark.heptio.com/v1/namespaces/heptio-ark/restores/nginx-backup-20170913165738
  uid: a69b3569-98a4-11e7-815e-42010a8000ac
spec:
  backupName: nginx-backup
  labelSelector: null
  namespaceMapping: {}
  namespaces: null
  restorePVs: false
status:
  errors:
    ark: null
    cluster: null
    namespaces: null
  phase: ""
  validationErrors: null
  warnings:
    ark: null
    cluster: null
    namespaces: null

And still I see no useful information on how to fix it. It would be highly appreciated if you could show me where to look further.

Thanks in advance.

--snapshot-volumes is not working on GCP

Hey Heptio Team,

If I run this command:

ark backup create everything --exclude-namespaces=kube-system,heptio-ark --snapshot-volumes

I get data on Google Cloud Storage about my cluster, but none of the persistent volumes are snapshotted.

ark-backup.json looks like this:

{
  "kind": "Backup",
  "apiVersion": "ark.heptio.com/v1",
  "metadata": {
    "name": "everything",
    "namespace": "heptio-ark",
    "selfLink": "/apis/ark.heptio.com/v1/namespaces/heptio-ark/backups/everything",
    "uid": "4cf6f5d8-7d08-11e7-acb8-42010a80004a",
    "resourceVersion": "39434763",
    "creationTimestamp": "2017-08-09T13:40:26Z"
  },
  "spec": {
    "includedNamespaces": [
      "*"
    ],
    "excludedNamespaces": [
      "kube-system",
      "heptio-ark"
    ],
    "includedResources": [
      "*"
    ],
    "excludedResources": null,
    "labelSelector": null,
    **"snapshotVolumes": true,**
    "ttl": "24h0m0s"
  },
  "status": {
    "version": 1,
    "expiration": "2017-08-10T13:40:26Z",
    **"phase": "Completed",
    "volumeBackups": null,
    "validationErrors": null**
  }
}

Anything I am missing?

Consideration for refactoring restore command

When creating a restore, I find it a bit confusing that the backup's name is in the form of an argument, as we aren't actually creating a backup; it is just a piece of a given restore.

One option would be:

  • Have the backup be an explicitly named, required flag.
  • Optionally support an argument which would then be the restore's name itself, not that of the backup

Current: ark restore create <backup name>
Proposed: ark restore create --backup <backup name>

A broader question would be: has there been consideration to flipping the verb and resource to more closely match how kubectl works? ark create restore ...

Support S3 server side encryption with KMS

Hi,

we'd like to store encrypted backups in S3 using S3 SSE with KMS. Are you planning to support this?

The S3 Go pkg should already support this, so it should just be a matter of adding a config option and using it in object_storage_adapter.go in the AWS cloudprovider. If this is correct, I could give it a go and submit a patch if you like.
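If it helps, a hedged sketch of what the upload path might look like with an optional KMS key config option (the kmsKeyID plumbing is an assumption; the field names come from aws-sdk-go's PutObjectInput):

package cloudprovider

import (
  "io"

  "github.com/aws/aws-sdk-go/aws"
  "github.com/aws/aws-sdk-go/service/s3"
)

// putObject uploads an object and, when a KMS key ID is configured, requests
// S3 server-side encryption with that key.
func putObject(client *s3.S3, bucket, key, kmsKeyID string, body io.ReadSeeker) error {
  input := &s3.PutObjectInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(key),
    Body:   body,
  }
  if kmsKeyID != "" {
    input.ServerSideEncryption = aws.String("aws:kms")
    input.SSEKMSKeyId = aws.String(kmsKeyID)
  }
  _, err := client.PutObject(input)
  return err
}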

design for multi-tenancy support

Allow users other than cluster-admin to use Ark. This has security implications:

  • need to ensure a user can only back up/restore PVs/snapshots they have access to
  • need to ensure a user can only back up kube resources they have access to
  • need to find a way to have the restore "run as" the user, so there's no privilege escalation

RFC: Backup templates

If the person taking a backup is not the same person who defines how an application is deployed to Kubernetes, they might not know what resources should be backed up for said application. A way to get around this could be to allow the application author to define what to back up, without actually initiating a backup or schedule. This is a "backup template".

A backup template will likely look similar or possibly identical to a backup. The sole distinction is that templates do not cause any backups to occur. If you're creating files you can kubectl create/apply, you could include a BackupTemplate in one of them.

Here is a tentative API definition:

type BackupTemplate struct {
   Spec BackupSpec
}

It's short and to the point. We will need to determine how to adjust ScheduleSpec - do we do something like this:

type ScheduleSpec struct {
  Template *BackupSpec
  TemplateRef *ObjectReference
}

where only one of Template or TemplateRef may be specified? Template would function exactly as it does today, as an inline definition. TemplateRef would be a reference to the namespace and name of a BackupTemplate to use instead of having an inline definition.

Alternatively, we could remove support for the inline definition, and instead require the creation of a BackupTemplate object first, at which point you could create a Schedule that references it.

Food for thought... cc @jbeda @skriss @jrnt30
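One small addition that might be needed if both fields are kept: a hedged sketch of the mutual-exclusion validation for the ScheduleSpec shape shown above (the function name is hypothetical).

package ark

import "errors"

// validateScheduleSpec enforces that exactly one of Template or TemplateRef is
// set on the ScheduleSpec sketched above.
func validateScheduleSpec(spec ScheduleSpec) error {
  if (spec.Template == nil) == (spec.TemplateRef == nil) {
    return errors.New("exactly one of template or templateRef must be specified")
  }
  return nil
}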

Backup coverage reporting

It would be useful to be able to generate backup coverage reports - i.e. an easy way to see which objects in a cluster/namespace are/aren't being backed up by existing schedules. This could be done by listing all resources, then matching them to schedule backup specs based on namespace includes/excludes, resource includes/excludes, and label selector.
