
kubevirt-velero-plugin's Introduction

Kubevirt Velero Plugin

This repository contains a Velero plugin. With this plugin, Velero can correctly back up and restore VirtualMachines, DataVolumes and other resources managed by KubeVirt and CDI.

For more information on Velero, see https://velero.io/.

Plugin Actions Included

The plugin registers backup and restore actions that operate on the following resources: DataVolume, PersistentVolumeClaim, Pod, VirtualMachine, VirtualMachineInstance.

DVBackupItemAction

An action that backs up the PersistentVolumeClaim and DataVolume

Finds the PVC for a DataVolume and adds the "cdi.kubevirt.io/storage.prePopulated" or "cdi.kubevirt.io/storage.populatedFor" annotation to it.
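For illustration, a minimal sketch of the idea in Go, assuming the PVC is handled as an unstructured object; the helper name is hypothetical and this is not the plugin's actual source:

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// markPVCPopulatedFor is a hypothetical helper: it annotates the PVC backing a
// DataVolume so that CDI treats the claim as already populated after restore
// and does not start a new import or clone.
func markPVCPopulatedFor(pvc *unstructured.Unstructured, dvName string) {
	annotations := pvc.GetAnnotations()
	if annotations == nil {
		annotations = map[string]string{}
	}
	annotations["cdi.kubevirt.io/storage.populatedFor"] = dvName
	pvc.SetAnnotations(annotations)
}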

VMBackupItemAction

An action that backs up the VirtualMachine

It checks whether a VM can be safely backed up and whether the backup contains all objects required for a successful restore. The action also makes sure that every object in the VM and VMI graph is added to the backup, for example instance types, the different types of volumes, access credentials, etc. It also returns the underlying DataVolumes (if the VM has DataVolumeTemplates) and the VirtualMachineInstance as extra items to back up.

Note: cluster-scoped objects, network objects, and network configurations are not backed up; they must already be available when restoring the VM.
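As a rough sketch of the extra-items mechanism (assuming Velero's Go plugin API and the KubeVirt API module kubevirt.io/api/core/v1; the helper is hypothetical, not the plugin's actual code):

import (
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
	"k8s.io/apimachinery/pkg/runtime/schema"
	kvcore "kubevirt.io/api/core/v1"
)

// vmExtraItems is a hypothetical helper: it lists objects the VM depends on so
// Velero includes them in the backup, here the DataVolumes created from
// spec.dataVolumeTemplates and the running VMI (which shares the VM's name).
func vmExtraItems(vm *kvcore.VirtualMachine) []velero.ResourceIdentifier {
	extra := []velero.ResourceIdentifier{}
	for _, tmpl := range vm.Spec.DataVolumeTemplates {
		extra = append(extra, velero.ResourceIdentifier{
			GroupResource: schema.GroupResource{Group: "cdi.kubevirt.io", Resource: "datavolumes"},
			Namespace:     vm.Namespace,
			Name:          tmpl.Name,
		})
	}
	extra = append(extra, velero.ResourceIdentifier{
		GroupResource: schema.GroupResource{Group: "kubevirt.io", Resource: "virtualmachineinstances"},
		Namespace:     vm.Namespace,
		Name:          vm.Name,
	})
	return extra
}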

VMIBackupItemAction

An action that backs up the VirtualMachineInstance

It checks whether a VMI can be safely backed up and whether the backup contains all objects required for a successful restore. The action also returns the underlying VM volumes (DataVolumes and PersistentVolumeClaims) and the launcher pod as extra items to back up.

VMRestoreItemAction

An action that restores the VirtualMachine

Adds the VM's DataVolumes to the list of restored items.
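A minimal sketch of the pattern (assuming Velero's RestoreItemAction Go API; the standalone helper is illustrative, not the plugin's actual code):

import (
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	kvcore "kubevirt.io/api/core/v1"
)

// restoreVMWithDataVolumes is a hypothetical sketch: the restore output lists
// the DataVolumes from spec.dataVolumeTemplates as additional items, so Velero
// restores them together with the VM.
func restoreVMWithDataVolumes(input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
	vm := new(kvcore.VirtualMachine)
	if err := runtime.DefaultUnstructuredConverter.FromUnstructured(input.Item.UnstructuredContent(), vm); err != nil {
		return nil, err
	}
	out := velero.NewRestoreItemActionExecuteOutput(input.Item)
	for _, tmpl := range vm.Spec.DataVolumeTemplates {
		out.AdditionalItems = append(out.AdditionalItems, velero.ResourceIdentifier{
			GroupResource: schema.GroupResource{Group: "cdi.kubevirt.io", Resource: "datavolumes"},
			Namespace:     vm.Namespace,
			Name:          tmpl.Name,
		})
	}
	return out, nil
}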

VMIRestoreItemAction

An action that restores the VirtualMachineInstance

Skips the VMI if it is owned by a VM. The plugin also clears restricted labels so the VMI is not rejected by KubeVirt; the restricted labels contain runtime information about the underlying KVM object.
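A minimal sketch of the label cleanup, assuming the VMI is handled as an unstructured object; the label keys shown are examples, not the exhaustive set the plugin clears:

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// clearRestrictedLabels is a hypothetical helper: it drops KubeVirt runtime
// labels from a restored VMI so the KubeVirt admission webhook accepts it.
func clearRestrictedLabels(vmi *unstructured.Unstructured) {
	labels := vmi.GetLabels()
	for _, key := range []string{
		"kubevirt.io/created-by", // UID of the VM that owned the original VMI
		"kubevirt.io/nodeName",   // node the original VMI was scheduled on
	} {
		delete(labels, key)
	}
	vmi.SetLabels(labels)
}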

PodRestoreItemAction

An action that handles the virt-launcher Pod. It makes sure the virt-launcher pod is always skipped during restore.
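A minimal sketch of that behavior (assuming Velero's RestoreItemAction Go API and the kubevirt.io=virt-launcher pod label; illustrative, not the plugin's actual code):

import (
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// skipVirtLauncherPod is a hypothetical sketch: virt-launcher pods are never
// restored, because KubeVirt recreates the launcher pod when the restored VMI
// starts; restoring the old pod would only cause conflicts.
func skipVirtLauncherPod(input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
	pod := &unstructured.Unstructured{Object: input.Item.UnstructuredContent()}
	if pod.GetLabels()["kubevirt.io"] == "virt-launcher" {
		return velero.NewRestoreItemActionExecuteOutput(input.Item).WithoutRestore(), nil
	}
	return velero.NewRestoreItemActionExecuteOutput(input.Item), nil
}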

Compatibility

Plugin versions and the respective Velero/KubeVirt/CDI versions that are tested to be compatible:

Plugin Version   Velero Version   KubeVirt Version   CDI Version
v0.2.0           v1.6.x, v1.7.x   v0.48.x            >= v1.37.0
v0.6.x           v1.12.x          >= v1.0.0          >= v1.57.0

Install

To install the plugin, check the current Velero documentation: https://velero.io/docs/v1.7/overview-plugins/. Below is an example for kubevirt-velero-plugin version v0.2.0 on Velero 1.7.0:

velero plugin add quay.io/kubevirt/kubevirt-velero-plugin:v0.2.0

Backup/Restore Virtual Machines Using the Plugin

Once the plugin is deployed, it is immediately effective; nothing special is required to make it work.

1. Create a Virtual Machine

kubectl create namespace demo
kubectl create -f example/datavolume.yaml -n demo
kubectl create -f example/vm.yaml -n demo

Wait for the VM to be running and to report the AgentConnected condition. Then log in and add some data: virtctl console example-vm -n demo

2. Backup

./velero backup create demobackup1 --include-namespaces demo --wait

3. Destroy something

kubectl delete vm example-vm -n demo
kubectl delete dv example-vm -n demo

Try to log in, or to find the VM, DV, or PVC (they should be gone): virtctl console example-vm -n demo

4. Restore

./velero restore create --from-backup demobackup1 --wait

The velero-example repository contains some basic examples of backup/restore using Velero.

Building the plugins

To build the plugin, run

$ make build-all

To build the image, run

$ make build-image

This builds an image tagged as registry:5000/kubevirt-velero-plugin:0.1. If you want to specify a different name or version/tag, run:

$ DOCKER_PREFIX=your-repo/your-name DOCKER_TAG=your-version-tag make build-image

Deploying the plugin to local cluster

The development version of the plugin is intended to work in a local cluster built with KubeVirt's or CDI's make cluster-up. To deploy the plugin:

  1. make cluster-push-image to build the image and push it to the local cluster
  2. make local-deploy-velero to deploy Velero to the local cluster
  3. make add-plugin to add the plugin to Velero

kubevirt-velero-plugin's People

Contributors

akalenyu, brybacki, dalia-frank, dependabot[bot], lxs137, maya-r, mhenriks, shellyka13, tomob


kubevirt-velero-plugin's Issues

IP is empty in restored VM because of MAC address conflict

What happened:
IP is empty in restored VM because of MAC address conflict

What you expected to happen:
IP is set correctly

How to reproduce it (as minimally and precisely as possible):

  1. backup a VM with one pvc (populated by a dataVolume, with centos7 in it)
  2. restore it to another namespace (pvc data is restored by restic in velero)
  3. new vmi is started in the same node

Additional context:

ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 6  bytes 416 (416.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 416 (416.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ip link show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 1e:90:ea:e6:c9:16 brd ff:ff:ff:ff:ff:ff

mac is 1e:90:ea:e6:c9:16

interfaces in vmi status

interfaces:
    - infoSource: domain, guest-agent
      interfaceName: eth0
      mac: 1e:90:ea:e6:c9:16
      name: default
      queueCount: 1

mac is 1e:90:ea:e6:c9:16

but in /etc/sysconfig/network-scripts/ifcfg-eth0

# Created by cloud-init on instance boot automatically, do not edit.
#
BOOTPROTO=dhcp
DEVICE=eth0
HWADDR=f6:91:68:c6:76:ab
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no

mac is f6:91:68:c6:76:ab, which is the same as in the original VM

Environment:

  • KubeVirt version (use virtctl version):
Client Version: version.Info{GitVersion:"v1.1.1", GitCommit:"689c0e66cc6893f311dd648ff32e247203b6c96a", GitTreeState:"clean", BuildDate:"2023-12-25T10:58:20Z", GoVersion:"go1.19.9", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{GitVersion:"v1.1.1", GitCommit:"689c0e66cc6893f311dd648ff32e247203b6c96a", GitTreeState:"clean", BuildDate:"2023-12-25T11:56:11Z", GoVersion:"go1.19.9", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:41:01Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"29", GitVersion:"v1.29.1", GitCommit:"bc401b91f2782410b3fb3f9acf43a995c4de90d2", GitTreeState:"clean", BuildDate:"2024-01-17T15:41:12Z", GoVersion:"go1.21.6", Compiler:"gc", Platform:"linux/amd64"}
  • VM or VMI specifications: N/A
  • Cloud provider or hardware configuration: N/A
  • OS (e.g. from /etc/os-release): N/A
  • Kernel (e.g. uname -a): N/A
  • Install tools: N/A
  • Others: N/A

qemu-guest-agent is offline after VM restoration using Kubevirt Velero Plugin and hence unable to log into the restored VM

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind enhancement

What happened:
qemu-guest-agent is offline after VM restoration using Kubevirt Velero Plugin and hence unable to log into the restored VM

Also able to notice the below logs in the virt-launcher pod logs:

{"component":"virt-launcher","level":"error","msg":"Guest agent is not responding: QEMU guest agent is not connected","pos":"qemuDomainAgentAvailable:8375","subcomponent":"libvirt","thread":"29","timestamp":"2022-11-01T17:47:52.284000Z"}
{"component":"virt-launcher","level":"error","msg":"Guest agent is not responding: QEMU guest agent is not connected","pos":"qemuDomainAgentAvailable:8375","subcomponent":"libvirt","thread":"30","timestamp":"2022-11-01T17:48:07.284000Z"}
{"component":"virt-launcher","level":"error","msg":"Guest agent is not responding: QEMU guest agent is not connected","pos":"qemuDomainAgentAvailable:8375","subcomponent":"libvirt","thread":"28","timestamp":"2022-11-01T17:48:22.284000Z"}

What you expected to happen:
Should be able to log into the vm after restoration using the command
virtctl console <<vm_name>> -n <>

How to reproduce it (as minimally and precisely as possible):

  1. Tried to backup a VM using velero kubevirt plugin
  2. Backup was successful
  3. Tried to perform a restore operation
  4. Restoration was successful for all Kubernetes objects, including the VM and VMI.
  5. But I was unable to log into the VM, and the logs reported a guest agent failure.

Anything else we need to know?:

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.2-13+d2965f0db10712", GitCommit:"d2965f0db1071203c6f5bc662c2827c71fc8b20d", GitTreeState:"clean", BuildDate:"2021-06-26T01:02:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.14-eks-fb459a0", GitCommit:"b07006b2e59857b13fe5057a956e86225f0e82b7", GitTreeState:"clean", BuildDate:"2022-10-24T20:32:54Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration: AWS

  • Install tools:

  • Others:

[Flaky test] KubeVirt Velero Plugin.Resource excludes Exclude label [smoke] Standalone VMI VMI included, Pod excluded: should succeed if VM is paused

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind enhancement
/go/src/kubevirt-velero-plugin/tests/resource_filtering_test.go:3315
Timed out after 30.000s.
Expected
<*errors.StatusError | 0xc000a05900>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {
SelfLink: "",
ResourceVersion: "",
Continue: "",
RemainingItemCount: nil,
},
Status: "Failure",
Message: "Operation cannot be fulfilled on pods "virt-launcher-test-vmi-r4cqb": the object has been modified; please apply your changes to the latest version and try again",
Reason: "Conflict",
Details: {
Name: "virt-launcher-test-vmi-r4cqb",
Group: "",
Kind: "pods",
UID: "",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 409,
},
}
to be nil
/go/src/kubevirt-velero-plugin/tests/resource_filtering_test.go:2825

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:

vm restore failing when InstanceType and Preference is set on a VM

What happened:
Restore of VMs failing with missing Controllerrevision message

What you expected to happen:
Restore of the VM working fine

How to reproduce it (as minimally and precisely as possible):
Create a VM using a manifest like the following:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ubuntubackup
  namespace: default
spec:
  running: true
  dataVolumeTemplates:
  - apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      creationTimestamp: null
      name: ubuntu-backup-vm-disk
    spec:
      pvc:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 16Gi
        storageClassName: nfs
      sourceRef:
        kind: DataSource
        name: ubuntu-datasource
  instancetype:
    kind: VirtualMachineInstancetype
    name: small
  preference:
    kind: VirtualMachinePreference
    name: ubuntu
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/domain: ubuntubackup
    spec:
      architecture: amd64
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: os
          - cdrom:
              bus: sata
              readonly: true
            name: cloudinitdisk
          interfaces:
          - masquerade: {}
            name: default
          networkInterfaceMultiqueue: true
        machine:
          type: q35
        resources: {}
      networks:
      - name: default
        pod: {}
      volumes:
      - dataVolume:
          name: ubuntu-backup-vm-disk
        name: os
      - name: cloudinitdisk

Additional context:
Add any other context about the problem here.

Environment:

  • KubeVirt version (use virtctl version): Client Version: version.Info{GitVersion:"v1.1.1", GitCommit:"689c0e66cc6893f311dd648ff32e247203b6c96a", GitTreeState:"clean", BuildDate:"2023-12-25T10:58:20Z", GoVersion:"go1.19.9", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{GitVersion:"v1.1.1", GitCommit:"689c0e66cc6893f311dd648ff32e247203b6c96a", GitTreeState:"clean", BuildDate:"2023-12-25T11:56:11Z", GoVersion:"go1.19.9", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes version (use kubectl version):
    09:27 $ kubectl version
    Client Version: v1.29.0
    Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
    Server Version: v1.28.4

  • VM or VMI specifications: see above

  • Cloud provider or hardware configuration: ClusterAPI on Vsphere

  • OS (e.g. from /etc/os-release): Ubuntu22

  • Kernel (e.g. uname -a): N/A

  • Install tools: N/A

  • Others: N/A

Solution:

There is a case-sensitive parameter that is evaluated wrongly; see the snippet below, where I have corrected the case of the attributes.
Happy to create a PR if you want.

func (p *VMBackupItemAction) addVMObjectGraph(vm *kvcore.VirtualMachine, extra []velero.ResourceIdentifier) []velero.ResourceIdentifier {
	if vm.Spec.Instancetype != nil {
		switch vm.Spec.Instancetype.Kind {
		//TODO handle VirtualMachineClusterInstancetype
		//case "virtualmachineinstancetype":
		case "VirtualMachineInstancetype": // <==========
			p.log.Infof("Adding instance type %s to the backup", vm.Spec.Instancetype.Name)
			extra = append(extra, velero.ResourceIdentifier{
				GroupResource: schema.GroupResource{Group: "instancetype.kubevirt.io", Resource: "virtualmachineinstancetype"},
				Namespace:     vm.Namespace,
				Name:          vm.Spec.Instancetype.Name,
			})
			extra = append(extra, velero.ResourceIdentifier{
				GroupResource: schema.GroupResource{Group: "apps", Resource: "controllerrevisions"},
				Namespace:     vm.Namespace,
				Name:          vm.Spec.Instancetype.RevisionName,
			})
		}
	}

	if vm.Spec.Preference != nil {
		//TODO handle VirtualMachineClusterPreference
		switch vm.Spec.Preference.Kind {
		//case "virtualmachinepreference":
		case "VirtualMachinePreference": // <==========
			p.log.Infof("Adding preference %s to the backup", vm.Spec.Preference.Name)
			extra = append(extra, velero.ResourceIdentifier{
				GroupResource: schema.GroupResource{Group: "instancetype.kubevirt.io", Resource: "virtualmachinepreference"},
				Namespace:     vm.Namespace,
				Name:          vm.Spec.Preference.Name,
			})
			extra = append(extra, velero.ResourceIdentifier{
				GroupResource: schema.GroupResource{Group: "apps", Resource: "controllerrevisions"},
				Namespace:     vm.Namespace,
				Name:          vm.Spec.Preference.RevisionName,
			})
		}
	}

	return extra
}

Handle DataVolumes that are not finished

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind enhancement

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:

Use new CDI and Kubevirt API instead of importing the actual code

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind enhancement

What happened:
Bump CDI and use CDI API.

That should clean up the dependencies and give access to Status.ClaimName (to use in places where the DV name is used when actually the claim name is needed).

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:

[Flaky] Pods excluded, VM paused: VM+DV+PVC should be restored [It]

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind enhancement

What happened:
Test "Pods excluded, VM paused: VM+DV+PVC should be restored [It]" is flaky, needs to be stable

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:

How to logout/exit from VM

I have launched VM using : https://kubevirt.io/labs/kubernetes/lab1.html

I am able to log into the VM, but how do I log out/exit?

./virtctl console testvm -n vm-restore
Successfully connected to testvm console. The escape sequence is ^]

login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
testvm login: cirros
Password:
$ ls
$ cd /
$ ls
bin home lib64 mnt root tmp
boot init linuxrc old-root run usr
dev initrd.img lost+found opt sbin var
etc lib media proc sys vmlinuz
$ exit

login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
testvm login: exit
Password:
^C
login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
testvm login:

Verify the plugin works with this velero 1.7.0

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind enhancement

What happened:
Velero released https://github.com/vmware-tanzu/velero/releases/tag/v1.7.0, and switched to distroless images.

Need to verify whether the plugin works with this Velero. Sample issue for another plugin: vmware-tanzu/velero#4192

What you expected to happen:
plugin works as designed

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:

Velero 1.9 CSI/volumesnapshot problems

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind enhancement

What happened:
When trying to update Velero (#84), a problem was discovered.

vmware-tanzu/velero#5400: once the upstream Velero issue is fixed and a new version is released, it should be used and the 3 updated tests verified.

One of the errors:

time="2022-10-13T10:12:06Z" level=error msg="Not find %!s(*string=0xc00180eda0) from the vscMap" backup=velero/test-backup-1665655891460820519 logSource="pkg/controller/backup_controller.go:944"

A different one:

time="2022-10-13T09:53:53Z" level=info msg="Backed up a total of 2 items" backup=velero/test-backup-1665654791385176114 logSource="pkg/backup/backup.go:405" progress=
2022/10/13 09:53:53  info Waiting for CSI driver to reconcile volumesnapshot kvp-e2e-tests-c7c5q/velero-test-dv-775lr. Retrying in 5s
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x19c6f6d]

goroutine 3607 [running]:
github.com/vmware-tanzu/velero/pkg/controller.(*backupController).deleteVolumeSnapshot.func1(0xc000465680)
	/go/src/github.com/vmware-tanzu/velero/pkg/controller/backup_controller.go:940 +0xad
created by github.com/vmware-tanzu/velero/pkg/controller.(*backupController).deleteVolumeSnapshot
	/go/src/github.com/vmware-tanzu/velero/pkg/controller/backup_controller.go:936 +0xf7

The

What you expected to happen:

Backup Succeeded.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:

Questions to enable the kubevirt-velero-plugin

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

What happened:
I have some questions related to bringing up Velero.

We started Velero with the AWS plugin in the beginning.
Then we executed the command to add the plugin to Velero:
velero plugin add quay.io/kubevirt/kubevirt-velero-plugin:v0.6.1
We observed that the Velero deployment restarted with an added init container for KubeVirt.
Now, when I want the backup to be triggered with the KubeVirt plugin, I need to add the BackupStorageLocation:

apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  labels:
    component: velero
  name: kubevirt
  namespace: velero
spec:
  config:
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
  default: true
  objectStorage:
    bucket: velero
  provider: kubevirt-velero-plugin

We added the above-mentioned backup storage location.
But I am facing the below issue on the Velero pod:

time="2023-12-18T19:33:21Z" level=error msg="Error getting backup store for this location" backupLocation=velero/kubevirt controller=backup-sync error="unable to locate ObjectStore plugin named velero.io/kubevirt-velero-plugin" logSource="pkg/controller/backup_sync_controller.go:100"
time="2023-12-18T19:33:21Z" level=info msg="Validating BackupStorageLocation" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:152"
time="2023-12-18T19:33:21Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:137"
time="2023-12-18T19:33:21Z" level=warning msg="Unavailable BackupStorageLocations detected: available/unavailable/unknown: 1/1/0, )" controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:197"
time="2023-12-18T19:34:01Z" level=error msg="Error getting a backup store" backup-storage-location=velero/kubevirt controller=backup-storage-location error="unable to locate ObjectStore plugin named velero.io/kubevirt-velero-plugin" logSource="pkg/controller/backup_storage_location_controller.go:148"
time="2023-12-18T19:34:01Z" level=info msg="BackupStorageLocation is invalid, marking as unavailable" backup-storage-location=velero/kubevirt controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go

I can see that the below plugins were added to the plugin list:

..... 
velero.io/aws                                                 ObjectStore
kubevirt-velero-plugin/restore-pod-action                     RestoreItemAction
kubevirt-velero-plugin/restore-pod-action                     RestoreItemAction
kubevirt-velero-plugin/restore-pvc-action                     RestoreItemAction
kubevirt-velero-plugin/restore-pvc-action                     RestoreItemAction
kubevirt-velero-plugin/restore-vm-action                      RestoreItemAction
kubevirt-velero-plugin/restore-vm-action                      RestoreItemAction
kubevirt-velero-plugin/restore-vmi-action                     RestoreItemAction
kubevirt-velero-plugin/restore-vmi-action                     RestoreItemAction
velero.io/add-pv-from-pvc                                     RestoreItemAction
velero.io/add-pv-from-pvc                                     RestoreItemAction
velero.io/add-pvc-from-pod                                    RestoreItemAction
velero.io/add-pvc-from-pod                                    RestoreItemAction
velero.io/admission-webhook-configuration                     RestoreItemAction
....

What is the exact procedure to add the KubeVirt plugin and take a backup of a VM along with the AWS plugin? What mistake am I making here?

What you expected to happen:
The backup should have happened correctly.
How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:
No
Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version): 1.26
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:

VirtualMachineClone during Restore either fails to restore or triggers cloning process again

What happened:

What you expected to happen:
VirtualMachineClone has several bad behaviors on restore.

First Behavior

  1. Clone a VM using VirtualMachineClone (Openshift Virtualization 4.15 from the UI plugin, 4.14 does not use VirtualMachineClone objects from the UI)
  2. Do a backup using Velero and kubevirt plugin
  3. Delete the original namespace and all VMs
  4. Restore to the original namespace
  5. Velero reports the restore as failed and is unable to complete it, because the VirtualMachineClone cannot be created: the source VM does not exist yet.
Time Type Object
11.549958142Z VirtualMachineClone centos-stream8-fuchsia-nightingale-41-clone-6y9d9d-ru3fsk-cr
11.577348972Z VirtualMachine centos-stream8-fuchsia-nightingale-41-clone-6y9d9d
12.963322544Z VirtualMachine centos-stream8-fuchsia-nightingale-41
15.363754974Z VirtualMachine rhel8-gold-emu-69
15.966218379Z VirtualMachine rhel9-bronze-leopard-14

This is caused by Velero's default restore order being alphabetical once you get past the built-in ordered resource set. The VirtualMachineClone therefore always restores before the VirtualMachine objects. At minimum, some documentation is needed to mention that the restore order has to be changed for VirtualMachineClones to be restored successfully.

This was worked around by setting the Velero restore order, but that is required for dealing with this specific object; otherwise Velero will always report the restore as failed. The VMs did at least get restored, because a single object error does not automatically stop the restore.

Similar issue with VirtualMachineInstanceMigration objects: they always restore before VirtualMachine objects.

Second behavior
0. Set the Velero restore order to "virtualmachines,virtualmachineclones"

  1. Clone a VM using VirtualMachineClone (Openshift Virtualization 4.15 from the UI plugin, 4.14 does not use VirtualMachineClone objects from the UI) object
  2. Delete the clone.
  3. Do a backup using Velero and kubevirt plugin
  4. Delete the original namespace and all VMs
  5. Restore to the original namespace
  6. The clone comes back

This is because the status showing that the original clone was successful gets ignored. Creating the VirtualMachineClone object, regardless of the status field contents, triggers the clone operation again.

Third Behavior
0. Set the Velero restore order to "virtualmachines,virtualmachineclones"

  1. Clone a VM using VirtualMachineClone (Openshift Virtualization 4.15 from the UI plugin, 4.14 does not use VirtualMachineClone objects from the UI) object
  2. Do a backup using Velero and kubevirt plugin
  3. Delete the original namespace and all VMs
  4. Restore to the original namespace
  5. The clone triggers even though the original status was successful, causing a VirtualMachineSnapshot of the original VM. The clone never finishes, leaving a mysterious VirtualMachineSnapshot of unclear origin.

Again, this behavior is caused by ignoring the original VirtualMachineClone status on creation. This re-triggers a clone process that had already completed successfully.

Status stays stuck in progress forever.

status:
  conditions:
    - lastProbeTime: null
      lastTransitionTime: '2024-04-18T17:12:20Z'
      reason: Still processing
      status: 'True'
      type: Progressing
    - lastProbeTime: null
      lastTransitionTime: '2024-04-18T17:12:20Z'
      reason: Still processing
      status: 'False'
      type: Ready
  phase: RestoreInProgress
  snapshotName: tmp-snapshot-c5fb2af7-bc3d-4d18-a7b0-57fff6af9178

How to reproduce it (as minimally and precisely as possible):
See above. Steps for each behavior included.

Additional context:
Add any other context about the problem here.

Environment:

  • KubeVirt version (use virtctl version): Openshift Virtualization 4.15 / KubeVirt 1.1
  • Kubernetes version (use kubectl version):
  • oc version
    Client Version: 4.14.9
    Kustomize Version: v5.0.1
    Server Version: 4.15.8
    Kubernetes Version: v1.28.7+c1f5b34
    [root@LAPTOP-8LVSJRV2 fedora39]#
  • VM or VMI specifications: Doesn't matter
  • Cloud provider or hardware configuration: On-prem hardware, doesn't matter
  • OS (e.g. from /etc/os-release): CoreOS - doesn't matter
  • Kernel (e.g. uname -a): Red Hat 9 kernel, doesn't matter
  • Install tools: virtctl, oc, Openshift UI Console

Support virtual machine snapshot during backup to perform quiesced VM backup

Is this a BUG REPORT or FEATURE REQUEST?:

kind enhancement

What happened:

Looking at the documentation and code, it doesn't look like the kubevirt velero plugin performs a KubeVirt VirtualMachineSnapshot before backup.

What you expected to happen:

Perform a virtual machine snapshot, and then use the snapshot as the source of the backup.

How to reproduce it (as minimally and precisely as possible):

Backup

Anything else we need to know?:

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:

USE CDI bundled with kubevirtci

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind enhancement

What happened:

CDI resources are stored in kubevirt-velero-plugin repo and managed by build/startup scripts.

What you expected to happen:

It would be good to use the CDI bundled with kubevirtci (and update to a newer kubevirtci) and remove the CDI resources and scripts from the dependencies.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:

Cannot backup Paused VM, virt freezer fails:

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind enhancement

What happened:

Found in VM+VMI included, Pod excluded: should succeed if VM is paused

Cannot backup Paused VM, virt freezer fails:

time="2022-06-03T13:51:37Z" level=info msg="Backing up item" backup=velero/test-backup-1654264228561761698 logSource="pkg/backup/item_backupper.go:121" name=virt-launcher-test-vm-6dr85 namespace=kvp-e2e-tests-bpf2z resource=pods
time="2022-06-03T13:51:37Z" level=info msg="running exec hook" backup=velero/test-backup-1654264228561761698 hookCommand="[/usr/bin/virt-freezer --freeze --name test-vm --namespace kvp-e2e-tests-bpf2z]" hookContainer=compute hookName="<from-annotation>" hookOnError=Fail hookPhase=pre hookSource=annotation hookTimeout="{30s}" hookType=exec logSource="pkg/podexec/pod_command_executor.go:124" name=virt-launcher-test-vm-6dr85 namespace=kvp-e2e-tests-bpf2z resource=pods
time="2022-06-03T13:51:37Z" level=info msg="stdout: " backup=velero/test-backup-1654264228561761698 hookCommand="[/usr/bin/virt-freezer --freeze --name test-vm --namespace kvp-e2e-tests-bpf2z]" hookContainer=compute hookName="<from-annotation>" hookOnError=Fail hookPhase=pre hookSource=annotation hookTimeout="{30s}" hookType=exec logSource="pkg/podexec/pod_command_executor.go:171" name=virt-launcher-test-vm-6dr85 namespace=kvp-e2e-tests-bpf2z resource=pods
time="2022-06-03T13:51:37Z" level=info msg="stderr: {\"component\":\"freezer\",\"level\":\"info\",\"msg\":\"Starting...\",\"pos\":\"main.go:46\",\"timestamp\":\"2022-06-03T13:51:37.954237Z\"}\n{\"component\":\"freezer\",\"level\":\"error\",\"msg\":\"Freezeing VMI failed\",\"pos\":\"main.go:81\",\"reason\":\"server error. command Freeze failed: \\\"LibvirtError(Code=55, Domain=10, Message='Requested operation is not valid: domain is not running')\\\"\",\"timestamp\":\"2022-06-03T13:51:37.965276Z\"}\n" backup=velero/test-backup-1654264228561761698 hookCommand="[/usr/bin/virt-freezer --freeze --name test-vm --namespace kvp-e2e-tests-bpf2z]" hookContainer=compute hookName="<from-annotation>" hookOnError=Fail hookPhase=pre hookSource=annotation hookTimeout="{30s}" hookType=exec logSource="pkg/podexec/pod_command_executor.go:172" name=virt-launcher-test-vm-6dr85 namespace=kvp-e2e-tests-bpf2z resource=pods
time="2022-06-03T13:51:37Z" level=error msg="Error executing hook" backup=velero/test-backup-1654264228561761698 error="command terminated with exit code 1" hookPhase=pre hookSource=annotation hookType=exec logSource="internal/hook/item_hook_handler.go:206" name=virt-launcher-test-vm-6dr85 namespace=kvp-e2e-tests-bpf2z resource=pods
time="2022-06-03T13:51:37Z" level=error msg="Error backing up item" backup=velero/test-backup-1654264228561761698 error="command terminated with exit code 1" logSource="pkg/backup/backup.go:441" name=virt-launcher-test-vm-6dr85

What you expected to happen:

Backup should succeed

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:

k8s objects and logs reported after failed test

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind enhancement

What happened:
k8s objects and logs are not available as artifacts after failed tests.

What you expected to happen:

k8s objects and logs available

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

Backup fail if specify includeResources

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

versions:

  • kubevirt : v0.50.0
  • kubevirt-velero-plugin: v0.4.0

Create a backup with specific VM-related resources (we don't want to back up resources managed by other operators):

spec:
  includedNamespaces:
  - vm-n1
  includedResources:
  - virtualmachines.kubevirt.io
  - virtualmachineinstances.kubevirt.io
  - pods
  - datavolumes.cdi.kubevirt.io
  - persistentvolumeclaims
  - persistentvolumes

but the backup failed with the error "VMI owned by a VM and the VM is not included in the backup":

time="2023-02-03T08:49:06Z" level=info msg="1 errors encountered backup up item" backup=acs-dr/vm-backup-zd2gx logSource="pkg/backup/backup.go:413" name=n1-kk
time="2023-02-03T08:49:06Z" level=error msg="Error backing up item" backup=acs-dr/vm-backup-zd2gx error="error executing custom action (groupResource=virtualmachineinstances.kubevirt.io, namespace=vm-n1, name=n1-kk): rpc error: code = Unknown desc = VMI owned by a VM and the VM is not included in the backup" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/backup/item_backupper.go:316" error.function="github.com/vmware-tanzu/velero/pkg/backup.(*itemBackupper).executeActions" logSource="pkg/backup/backup.go:417" name=n1-kk
time="2023-02-03T08:49:06Z" level=info msg="1 errors encountered backup up item" backup=acs-dr/vm-backup-zd2gx logSource="pkg/backup/backup.go:413" name=n1-kk
time="2023-02-03T08:49:06Z" level=error msg="Error backing up item" backup=acs-dr/vm-backup-zd2gx error="error executing custom action (groupResource=virtualmachines.kubevirt.io, namespace=vm-n1, name=n1-kk): rpc error: code = Unknown desc = VM cannot be safely backed up" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/backup/item_backupper.go:316" error.function="github.com/vmware-tanzu/velero/pkg/backup.(*itemBackupper).executeActions" logSource="pkg/backup/backup.go:417" name=n1-kk

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:

MAC address conflict when restoring a virtual machine to alternate namespace

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind enhancement
/remove-lifecycle rotten

What happened:

When a virtual machine is restored to an alternate namespace, it is restored with the same MAC address as the original virtual machine. This results in a MAC address conflict if the original virtual machine is still running in the original namespace.

What you expected to happen:

Provide a way to blank the MAC address on restore.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

The issue was resolved by updating the plugin to clear the MAC addresses in the restore item action; a minimal sketch of the idea follows.
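Sketch in Go, assuming KubeVirt's VirtualMachine API types (kubevirt.io/api/core/v1); the helper name is hypothetical and this is not the actual patch:

import (
	kvcore "kubevirt.io/api/core/v1"
)

// clearMACAddresses is a hypothetical helper: blanking the interface MAC
// addresses lets KubeVirt generate fresh ones for the restored VM, so it
// cannot conflict with a still-running original in another namespace.
func clearMACAddresses(vm *kvcore.VirtualMachine) {
	if vm.Spec.Template == nil {
		return
	}
	for i := range vm.Spec.Template.Spec.Domain.Devices.Interfaces {
		vm.Spec.Template.Spec.Domain.Devices.Interfaces[i].MacAddress = ""
	}
}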

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:

Use declarative way of installing and configuring velero

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind enhancement

What happened:

Currently Velero is installed from the command line, passing the whole config as parameters (velero install --provider aws ...), and backup and restore are also driven by running the velero CLI. Some tests and commands might use YAML files to create resources. Another problem is that this approach requires a velero binary.

It would be better to install Velero from quay/dockerhub or any registry, and then not use the command line but simply create Backup/Restore resources.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:

Describe plugins

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind enhancement

What happened:
The README and documentation do not contain any mention of the actual plugins and actions implemented by kubevirt-velero-plugin.

What you expected to happen:
Information about all the backup and restore actions available.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:

CVE-2020-9283

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9283
From snyk.io scan of

Introduced through: k8s.io/[email protected], k8s.io/[email protected] and others
Fixed in: golang.org/x/[email protected]
Exploit maturity: PROOF OF CONCEPT
Detailed paths

Introduced through: kubevirt.io/kubevirtci/cluster-up/cluster/kind-k8s-sriov-1.17.0/[email protected] › k8s.io/[email protected] › golang.org/x/[email protected]
Introduced through: kubevirt.io/kubevirtci/cluster-up/cluster/kind-k8s-sriov-1.17.0/[email protected] › k8s.io/[email protected] › github.com/gophercloud/[email protected] › golang.org/x/[email protected]
Introduced through: kubevirt.io/kubevirtci/cluster-up/cluster/kind-k8s-sriov-1.17.0/[email protected] › k8s.io/[email protected] › golang.org/x/[email protected] › golang.org/x/[email protected]
Introduced through: kubevirt.io/kubevirtci/cluster-up/cluster/kind-k8s-sriov-1.17.0/[email protected] › k8s.io/[email protected] › golang.org/x/[email protected] › golang.org/x/[email protected]
Introduced through: kubevirt.io/kubevirtci/cluster-up/cluster/kind-k8s-sriov-1.17.0/[email protected] › k8s.io/[email protected] › github.com/Azure/go-autorest/autorest/[email protected] › golang.org/x/[email protected]
Introduced through: kubevirt.io/kubevirtci/cluster-up/cluster/kind-k8s-sriov-1.17.0/[email protected] › k8s.io/[email protected] › github.com/Azure/go-autorest/[email protected] › golang.org/x/[email protected]
Introduced through: kubevirt.io/kubevirtci/cluster-up/cluster/kind-k8s-sriov-1.17.0/[email protected] › k8s.io/[email protected] › k8s.io/[email protected] › golang.org/x/[email protected] › golang.org/x/[email protected]
Introduced through: kubevirt.io/kubevirtci/cluster-up/cluster/kind-k8s-sriov-1.17.0/[email protected] › k8s.io/[email protected] › k8s.io/[email protected] › golang.org/x/[email protected] › golang.org/x/[email protected]
Introduced through: kubevirt.io/kubevirtci/cluster-up/cluster/kind-k8s-sriov-1.17.0/[email protected] › k8s.io/[email protected] › github.com/Azure/go-autorest/[email protected] › github.com/Azure/go-autorest/autorest/[email protected] › golang.org/x/[email protected]
Introduced through: kubevirt.io/kubevirtci/cluster-up/cluster/kind-k8s-sriov-1.17.0/[email protected] › k8s.io/[email protected] › golang.org/x/[email protected] › golang.org/x/[email protected] › golang.org/x/[email protected]
Introduced through: kubevirt.io/kubevirtci/cluster-up/cluster/kind-k8s-sriov-1.17.0/[email protected] › k8s.io/[email protected] › k8s.io/[email protected] › k8s.io/[email protected] › golang.org/x/[email protected] › golang.org/x/[email protected]
Introduced through: kubevirt.io/kubevirtci/cluster-up/cluster/kind-k8s-sriov-1.17.0/[email protected] › k8s.io/[email protected] › cloud.google.com/[email protected][email protected] › golang.org/x/[email protected] › golang.org/x/[email protected]
Introduced through: kubevirt.io/kubevirtci/cluster-up/cluster/kind-k8s-sriov-1.17.0/[email protected] › k8s.io/[email protected] › cloud.google.com/[email protected] › golang.org/x/[email protected] › golang.org/x/[email protected] › golang.org/x/[email protected]
Introduced through: kubevirt.io/kubevirtci/cluster-up/cluster/kind-k8s-sriov-1.17.0/[email protected] › k8s.io/[email protected] › cloud.google.com/[email protected] › google.golang.org/[email protected][email protected] › golang.org/x/[email protected] › golang.org/x/[email protected]
Introduced through: kubevirt.io/kubevirtci/cluster-up/cluster/kind-k8s-sriov-1.17.0/[email protected] › k8s.io/[email protected] › cloud.google.com/[email protected] › google.golang.org/[email protected] › golang.org/x/[email protected] › golang.org/x/[email protected] › golang.org/x/[email protected]

Overview
golang.org/x/crypto/ssh is an SSH client and server.

Affected versions of this package are vulnerable to Improper Signature Verification. An attacker can craft an ssh-ed25519 or sk-ssh-ed25519@openssh.com public key, such that the library will panic when trying to verify a signature with it. Clients can deliver such a public key and signature to any golang.org/x/crypto/ssh server with a PublicKeyCallback, and servers can deliver them to any golang.org/x/crypto/ssh client.

The PV, PVC and DV of the virtual machine are backed up in the velero restic mode, and problems occur during the restore

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind bug

What happened:

  • When I restore the virtual machine with velero, I find that the virtual machine cannot be started. The PVC status is as follows:

Normal   Provisioning          86m (x44 over 3h39m)   rook-ceph.rbd.csi.ceph.com_csi-rbdplugin-provisioner-786866b564-r5jm8_fd7a9b7e-c7cd-449f-b1a6-1436c8f19d1f   External provisioner is provisioning volume for claim "vm/test-pvc-dvbrglpn"
Warning  ProvisioningFailed    63m (x14 over 82m)     rook-ceph.rbd.csi.ceph.com_csi-rbdplugin-provisioner-786866b564-r5jm8_c05622ec-0de5-4c69-88c9-8c55cd0d3ffe   failed to provision volume with StorageClass "rook-ceph-block": error getting handle for DataSource Type VolumeSnapshot by Name cdi-tmp-513bb2a9-34b0-4f5e-8d89-761998a25d1a: error getting snapshot cdi-tmp-513bb2a9-34b0-4f5e-8d89-761998a25d1a from api server: volumesnapshots.snapshot.storage.k8s.io "cdi-tmp-513bb2a9-34b0-4f5e-8d89-761998a25d1a" not found
Normal   Provisioning          3m24s (x30 over 82m)   rook-ceph.rbd.csi.ceph.com_csi-rbdplugin-provisioner-786866b564-r5jm8_c05622ec-0de5-4c69-88c9-8c55cd0d3ffe   External provisioner is provisioning volume for claim "vm/test-pvc-dvbrglpn"
Normal   ExternalProvisioning  64s (x2982 over 12h)   persistentvolume-controller   waiting for a volume to be created, either by external provisioner "rook-ceph.rbd.csi.ceph.com" or manually created by system administrator

  • The status of the virtual machine is as follows:
    test-dvbrglpn 2d22h WaitingForVolumeBinding False

What you expected to happen:
The PVC is restored successfully, and the virtual machine starts normally.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:

Make the plugin releasable

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind enhancement

What happened:
There is no release process and no artifacts are available.

What you expected to happen:

First release is done, and plugin artifacts can be downloaded and installed.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:
