

lib-volume-populator

Shared library for use by volume populators.

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

Contributors

akalenyu, bswartz, cwangvt, dannawang0221, dependabot[bot], humblec, jsafrane, k8s-ci-robot, liranr23, mkimuram, msau42, nikhita, pohly, sneha-at, sunnylovestiramisu, testwill, ttakahashi21, xing-yang


lib-volume-populator's Issues

Copy Spec.Selector from pvc to pvcPrime

Spec: corev1.PersistentVolumeClaimSpec{

When there are multiple PVs with identical configurations apart from their selectors, pvcPrime cannot determine which PV to bind to, and a new PV with no selector is dynamically provisioned instead.

Name:          prime-260f3286-39e1-4290-9a94-6ffe7d425bee
Namespace:     s3populator
StorageClass:  s3sc
Status:        Bound
Volume:        pvc-3f9fa6ce-9538-4f77-8d6c-f98e6d88873c
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      10Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       populate-260f3286-39e1-4290-9a94-6ffe7d425bee
Events:        <none>

See the describe output above: pvcPrime is currently created without any labels or selector.
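A minimal sketch of the requested fix, using stand-in types rather than the real corev1.PersistentVolumeClaimSpec: pvcPrime should inherit the user PVC's selector so it can bind only the intended pre-provisioned PV.

```go
package main

import "fmt"

// claimSpec is a stand-in for the few corev1.PersistentVolumeClaimSpec
// fields relevant here; Selector stands in for *metav1.LabelSelector.
type claimSpec struct {
	StorageClassName string
	Selector         map[string]string
}

// buildPrimeSpec sketches how pvcPrime's spec could be derived from the
// user's PVC: copying Selector over (today it is dropped) lets pvcPrime
// bind only a PV carrying the matching labels.
func buildPrimeSpec(user claimSpec) claimSpec {
	return claimSpec{
		StorageClassName: user.StorageClassName,
		Selector:         user.Selector, // proposed fix: carry the selector over
	}
}

func main() {
	user := claimSpec{
		StorageClassName: "s3sc",
		Selector:         map[string]string{"disk-id": "vol-001"}, // hypothetical label
	}
	fmt.Println(buildPrimeSpec(user).Selector)
}
```

With the selector carried over, the PV controller's normal label-matching logic would pick the intended PV instead of triggering fresh provisioning.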

How to populate multiple PVs in one pass?

We want to write a volume populator for virt-v2v, a tool which imports virtual machines from VMware. The disks of these virtual machines would be mapped to block-based PVs.

The hello world example is pretty good and shows that we could fairly easily write a populator for single-disk VMs. However, there doesn't seem to be a way to populate multiple PVs at the same time, so multi-disk VMs couldn't be imported. Multi-disk VMs are not especially common, but they do exist, and it would be a shame to limit this to single-disk VMs.

Is there a way to import to multiple PVs using a single volume populator that we may have missed?

hello world installation documentation doesn't work

The documentation in https://github.com/kubernetes-csi/lib-volume-populator/tree/master/example/hello-populator seems very incomplete. I applied the crd.yaml file and that was fine:

$ kubectl apply -f crd.yaml 
customresourcedefinition.apiextensions.k8s.io/hellos.hello.example.com created

but applying the deploy.yaml file failed:

$ kubectl apply -f deploy.yaml 
namespace/hello unchanged
serviceaccount/hello-account unchanged
clusterrole.rbac.authorization.k8s.io/hello-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/hello-binding unchanged
deployment.apps/hello-populator unchanged
error: resource mapping not found for name: "hello-populator" namespace: "" from "deploy.yaml": no matches for kind "VolumePopulator" in version "populator.storage.k8s.io/v1beta1"
ensure CRDs are installed first

Could you include more complete documentation covering every step needed to actually build and deploy the example?
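A likely fix, based on the error message and on the CRD published in the volume-data-source-validator repo (the raw URL below is an assumption derived from that repo's path): install the VolumePopulator CRD before applying deploy.yaml.

```shell
# The VolumePopulator CRD lives in the volume-data-source-validator repo,
# not in this one; install it first:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/volume-data-source-validator/master/client/config/crd/populator.storage.k8s.io_volumepopulators.yaml

# With the CRD in place, the VolumePopulator object in deploy.yaml
# should no longer fail with "no matches for kind":
kubectl apply -f deploy.yaml
```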

Cross-namespace feature is not optional

Cross-namespace populator support is still alpha, but the ReferenceGrant informers are not optional. This causes the controller to wait forever for the informer caches to sync, and produces log spam in the populator like:

W0301 00:35:30.329470       1 reflector.go:539] sigs.k8s.io/gateway-api/pkg/client/informers/externalversions/factory.go:132: failed to list *v1beta1.ReferenceGrant: referencegrants.gateway.networking.k8s.io is forbidden: User "system:serviceaccount:test:test-ksa" cannot list resource "referencegrants" in API group "gateway.networking.k8s.io" at the cluster scope
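One way to address this, sketched with illustrative stand-in code rather than the library's actual API: gate the ReferenceGrant informer behind an opt-in switch, so the controller only waits on that cache (and needs gateway-api RBAC) when cross-namespace support is enabled.

```go
package main

import "fmt"

// requiredCaches is a stand-in for the controller's cache-sync list: the
// ReferenceGrant cache is only added when cross-namespace support is
// explicitly enabled, so clusters without gateway-api RBAC never block on it.
func requiredCaches(crossNamespaceEnabled bool) []string {
	caches := []string{"pvc", "pv", "storageclass", "pod"}
	if crossNamespaceEnabled {
		caches = append(caches, "referencegrant")
	}
	return caches
}

func main() {
	fmt.Println(requiredCaches(false))
	fmt.Println(requiredCaches(true))
}
```

With the default off, the forbidden-list warnings above would disappear because the reflector for ReferenceGrant is never started.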

Missing operator for "volumepopulators.populator.storage.k8s.io"

Hi, dev team,

I'm trying to run the hello example on Kubernetes 1.27, but the CRD for volumepopulators.populator.storage.k8s.io seems to be missing at https://github.com/kubernetes-csi/lib-volume-populator/blob/master/example/hello-populator/deploy.yaml#L86

kind: VolumePopulator
apiVersion: populator.storage.k8s.io/v1beta1

The corresponding definition and operator for volumepopulators.populator.storage.k8s.io are nowhere to be found in this repo.

After some googling, I found the CRD in another repo: https://github.com/kubernetes-csi/volume-data-source-validator/blob/master/client/config/crd/populator.storage.k8s.io_volumepopulators.yaml. But there is still no operator for this CRD.

What is the correct way to set up the volumepopulators.populator.storage.k8s.io CRD?
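A likely answer, assuming the CRD published in the volume-data-source-validator repo is the intended one: there is no separate operator to install for this CRD. VolumePopulator objects are plain registration records (consumed by tooling such as the data source validator), so installing the CRD from that repo and then applying a registration object like the one in the hello example's deploy.yaml should suffice:

```yaml
# VolumePopulator registration (same shape as in the hello example's
# deploy.yaml); requires the CRD from volume-data-source-validator first.
apiVersion: populator.storage.k8s.io/v1beta1
kind: VolumePopulator
metadata:
  name: hello-populator
sourceKind:
  group: hello.example.com
  kind: Hello
```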

Thanks and regards

Populating a PVC with ReadWriteMany results in a stuck PVC

We noticed an issue when using our volume populator to populate a PVC with the ReadWriteMany access mode: the populator pod completes successfully and there is a Bound PV, but the PVC then reverts to Pending, causing a new instance of the populator pod to be created (which gets stuck, because the prime PVC is gone at that point).

@liranr23 noticed that the PV has a different access mode, ReadWriteOnce, which is also what the prime PVC is created with:

kind: PersistentVolume
apiVersion: v1
metadata:
  annotations:
    forklift.konveyor.io/populated-from: default/nfs-vol-pop
    pv.kubernetes.io/provisioned-by: nfs.csi.k8s.io
    volume.kubernetes.io/provisioner-deletion-secret-name: mount-options
    volume.kubernetes.io/provisioner-deletion-secret-namespace: default
  finalizers:
    - kubernetes.io/pv-protection

spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: nfs-vol-pop
    uid: e1356dd1-bbc8-4535-bb02-2454844f55a5
    apiVersion: v1
    resourceVersion: '119290924'
  persistentVolumeReclaimPolicy: Delete
  storageClassName: nfs-csi
  mountOptions:
    - nfsvers=4.1
  volumeMode: Filesystem
status:
  phase: Bound

And this is the PVC stuck in Pending

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-vol-pop
  namespace: default
  uid: e1356dd1-bbc8-4535-bb02-2454844f55a5
  annotations: ...
  finalizers:
    - kubernetes.io/pvc-protection
    - forklift.konveyor.io/populate-target-protection
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs-csi
  volumeMode: Filesystem
  dataSource:
    apiGroup: forklift.konveyor.io
    kind: OvirtImageIOPopulator
    name: nfs-vol-pop
  dataSourceRef:
    apiGroup: forklift.konveyor.io
    kind: OvirtImageIOPopulator
    name: nfs-vol-pop
status:
  phase: Pending

So looking at the code, we see this access mode is hardcoded here

AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},

Is this intentional?

Changing the line to:

AccessModes: pvc.Spec.AccessModes,

Seems to fix the issue

In-tree volumes are populated

Kubernetes complains that only PVCs referring to a CSI StorageClass can be populated, but the hello-world populator populates volumes for in-tree volume plugins anyway.

  1. Create StorageClass referring to an in-tree volume plugin:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: thin
    parameters:
      diskformat: thin
    provisioner: kubernetes.io/vsphere-volume

    (vSphere is not yet migrated to CSI in this cluster!)

  2. Create a PVC that refers to it + uses a data source:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mypvc-new
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Mi
      dataSourceRef:
        apiGroup: hello.example.com
        kind: Hello
        name: example-hello
      volumeMode: Filesystem
  3. Kubernetes PV controller warns that data sources are not supported, but the volume is Bound + populated:

kubectl describe pvc
...
Status:        Bound
...
  Warning  ProvisioningFailed  4m16s  persistentvolume-controller  plugin "kubernetes.io/vsphere-volume" is not a CSI plugin. Only CSI plugin can provision a claim with a datasource

So either CSI should be required and the PVC should not become Bound, or the event is wrong.
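If the first option is chosen, one possible guard is sketched below with a hypothetical helper (not current library code): in-tree plugin names live under the kubernetes.io/ prefix, so the populator could skip claims whose StorageClass uses such a provisioner. This simple check deliberately ignores CSI migration, which would additionally need a CSIDriver or migration-feature lookup (the issue's cluster had vSphere not yet migrated).

```go
package main

import (
	"fmt"
	"strings"
)

// isInTreeProvisioner reports whether a StorageClass provisioner name refers
// to an in-tree volume plugin: in-tree plugins are named under the
// kubernetes.io/ prefix, while CSI drivers use their own domain names.
func isInTreeProvisioner(provisioner string) bool {
	return strings.HasPrefix(provisioner, "kubernetes.io/")
}

func main() {
	fmt.Println(isInTreeProvisioner("kubernetes.io/vsphere-volume")) // in-tree plugin
	fmt.Println(isInTreeProvisioner("ebs.csi.aws.com"))              // CSI driver
}
```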
