

Scaleway Block Volume CSI driver

The Scaleway Block Volume Container Storage Interface (CSI) driver is an implementation of the CSI interface to provide a way to manage Scaleway Block Volumes through a container orchestration system, like Kubernetes.

WARNING: ⚠️ This project is under active development and should be considered alpha.

CSI Specification Compatibility Matrix

Scaleway CSI Driver \ CSI Version | v1.2.0 | v1.6.0
master branch                     | yes    | yes
v0.1.x                            | yes    | no
v0.2.x                            | yes    | yes

Features

Here is a list of the features implemented by the Scaleway CSI driver.

Block device resizing

The Scaleway CSI driver implements the resize feature (see the example for Kubernetes). Resizing is done online, without detaching the block device. However, volumes can only be grown; decreasing a volume's size is not supported.
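In Kubernetes, expansion requires the StorageClass to allow it; the volume is then grown by raising the PVC's storage request. A minimal sketch, assuming a StorageClass similar to the default scw-bssd one (the names here are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: scw-bssd
provisioner: csi.scaleway.com
reclaimPolicy: Delete
# required so that bound PVCs can be expanded
allowVolumeExpansion: true

With such a class, increasing spec.resources.requests.storage on a bound PVC (for example from 5Gi to 10Gi) triggers an online expansion of the underlying block volume.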

Raw Block Volume

Raw Block Volumes allow a block volume to be exposed directly to the container as a block device, instead of a mounted filesystem. To enable it, volumeMode must be set to Block. For instance, here is a PVC in raw block volume mode:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-raw-pvc
spec:
  volumeMode: Block
  [...]
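A pod then consumes this claim through volumeDevices instead of volumeMounts. A minimal sketch (the pod name, image and device path are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: my-raw-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeDevices:
        # the raw block device appears at this path inside the container
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-raw-pvc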

At-Rest Encryption

Support for volume encryption with cryptsetup/LUKS. See the examples for more details.
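As a rough sketch of how this is typically wired up (the parameter and secret names below are assumptions; refer to the examples for the exact supported configuration), the passphrase lives in a Kubernetes Secret that the node plugin reads through the standard CSI node-stage secret parameters:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: scw-bssd-enc
provisioner: csi.scaleway.com
reclaimPolicy: Delete
parameters:
  # ask the driver to encrypt the volume with LUKS
  encrypted: "true"
  # Secret holding the passphrase (name/namespace are placeholders)
  csi.storage.k8s.io/node-stage-secret-name: enc-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default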

Volume Snapshots

Volume Snapshots allow the user to create a snapshot of a specific block volume.
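In Kubernetes this uses the standard snapshot API: a VolumeSnapshot referencing a VolumeSnapshotClass backed by this driver. A minimal sketch (the class and claim names are placeholders):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: scw-snapshot
  source:
    # snapshot the volume bound to this claim
    persistentVolumeClaimName: my-pvc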

Volume Statistics

The Scaleway CSI driver implements the NodeGetVolumeStats CSI method. It is used to gather statistics about the mounted block volumes. In Kubernetes, the kubelet exposes these metrics.
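For example, the standard kubelet volume metrics can be read from its Prometheus endpoint (a sketch; <node-name> is a placeholder):

kubectl get --raw /api/v1/nodes/<node-name>/proxy/metrics | grep kubelet_volume_stats

This returns per-PVC series such as kubelet_volume_stats_used_bytes and kubelet_volume_stats_capacity_bytes.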

Kubernetes

This section is Kubernetes specific. Note that the Scaleway CSI driver may also work on Kubernetes versions older than those listed. The CSI driver allows Persistent Volumes to be used in Kubernetes.

Kubernetes Version Compatibility Matrix

Scaleway CSI Driver \ Kubernetes Version | Min K8s Version | Max K8s Version
master branch                            | v1.20           | -
v0.1.x                                   | v1.17           | -
v0.2.x                                   | v1.20           | -

Examples

Some examples are available here.

Installation

These steps cover how to install the Scaleway CSI driver in your Kubernetes cluster using Helm.

Warning: Please note that the manifest files provided in deploy/kubernetes are deprecated and no longer maintained.

Requirements

  • A Kubernetes cluster running on Scaleway instances (v1.20+)
  • Scaleway Project or Organization ID, Access and Secret key
  • Helm v3

Deployment

  1. Add the Scaleway Helm repository.

    helm repo add scaleway https://helm.scw.cloud/
    helm repo update
  2. Deploy the latest release of the scaleway-csi Helm chart.

    helm upgrade --install scaleway-csi --namespace kube-system scaleway/scaleway-csi \
        --set controller.scaleway.env.SCW_DEFAULT_ZONE=fr-par-1 \
        --set controller.scaleway.env.SCW_DEFAULT_PROJECT_ID=11111111-1111-1111-1111-111111111111 \
        --set controller.scaleway.env.SCW_ACCESS_KEY=ABCDEFGHIJKLMNOPQRST \
        --set controller.scaleway.env.SCW_SECRET_KEY=11111111-1111-1111-1111-111111111111

    Review the configuration values for the Helm chart.

  3. You can now verify that the driver is running:

    $ kubectl get pods -n kube-system
    [...]
    scaleway-csi-controller-76897b577d-b4dgw   8/8     Running   0          3m
    scaleway-csi-node-hvkfw                    3/3     Running   0          3m
    scaleway-csi-node-jmrz2                    3/3     Running   0          3m
    [...]

    You should see the scaleway-csi-controller and the scaleway-csi-node pods.
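To check that provisioning works end to end, you can create a small test claim (this sketch assumes the default scw-bssd StorageClass installed by the chart; adjust the class name if yours differs):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: scw-bssd
  resources:
    requests:
      storage: 1Gi

Depending on the StorageClass volumeBindingMode, the claim may stay Pending until a pod consumes it; once bound, a matching Scaleway Block Volume should appear in your project.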

Development

Build

You can build the Scaleway CSI driver executable using the following commands:

make build

You can build a local docker image named scaleway-csi for your current architecture using the following command:

make docker-build

Test

In order to run the tests:

make test

Contribute

If you are looking for a way to contribute, please read the contributing guide.

Code of conduct

Participation in the Kubernetes community is governed by the CNCF Code of Conduct.

Reach us

We love feedback. Feel free to reach us on the Scaleway Slack community; we are waiting for you in #k8s.

You can also join the official Kubernetes Slack in the #scaleway-k8s channel.

You can also raise an issue if you think you've found a bug.

scaleway-csi's People

Contributors

adphi, dependabot[bot], jtherin, mkroman, nox-404, sh4d1, tgenaitay, tomy2e


scaleway-csi's Issues

Mounting issues with encrypted volumes

Describe the bug
Volumes get into an unmountable state after trying to restart a pod using an encrypted PV.

To Reproduce
  1. Set up Scaleway CSI and create an encrypted storageClass as outlined in the docs.
  2. Deploy a StatefulSet such as a 3-replica MongoDB.
  3. Wait for the workload to come up; PVs are provisioned and everything is fine.
  4. Kill one pod and wait for it to be recreated by Kubernetes.
  5. Just after the scheduler schedules the pod on a node, it errors because it cannot mount the previously created and existing PV.
  6. See the errors from the kube logs below.

Expected behavior
PV should be attached to the new node where the new pod is scheduled and the pod should start

Details (please complete the following information):

  • Scaleway CSI version: 0.1.7
  • Platform: Rancher RKE2 v2.5.9
  • Orchestrator and its version: Kubernetes v1.20.11+rke2r2

Additional context

Errors shown

Warning FailedMount MountVolume.MountDevice failed for volume "pvc-3030ae10-3579-494a-a215-0017aea58332" : rpc error: code = Internal desc = error encrypting/opening volume with ID aeffa5d1-d5c3-406c-a728-d5d2c856aed9: luksStatus returned ok, but device scw-luks-aeffa5d1-d5c3-406c-a728-d5d2c856aed9 is not active

and

MountVolume.WaitForAttach failed for volume "pvc-83cf34a9-d36d-46e5-bbf2-199c426f518c" : volume fr-par-2/cbe3eca8-f623-4bbe-bc76-450eceb391b2 has GET error for volume attachment csi-879b1d2e5fa7ca784f356b823505c5506b57891aa56966b59c8ebfdae3497320: volumeattachments.storage.k8s.io "csi-879b1d2e5fa7ca784f356b823505c5506b57891aa56966b59c8ebfdae3497320" is forbidden: User "system:node:node-5" cannot get resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope: no relationship found between node 'node-5' and this object

Again, that only seems to happen for encrypted PVs.

Encryption support

Support for encryption

It would be great to support volume encryption to provide at-rest security for a volume's contents.

Describe the solution you'd like

  1. Create a secret with an encryption key inside
  2. Refer to this secret ([namespace]/[secret]) and key in that secret in the storageclass, pvc or pv

Then:

  • upon provisioning -> encrypt the volume (LUKS?)
  • upon mounting -> decrypt the volume (LUKS?)

Local SSD support

Can volumes with type l_ssd be supported? I tried adding a storage class for them:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: scw-lssd
provisioner: csi.scaleway.com
reclaimPolicy: Delete
parameters:
  type: l_ssd

But PVCs are just stuck in Pending and there is no relevant log from the CSI pod as far as I can tell.

Background for this is that I need object storage via Minio or Ceph and they strongly discourage network-attached storage. (I would prefer using Scaleway's object storage but the API key gives access to everything and that doesn't work for me)

Nomad: Volume exists but it fails to attach it to a node when the job starts

Describe the bug
I'm attempting to bring up a single instance Redis job with a 1GB volume for persistent storage. The volume is created successfully but when the Redis job starts the node plugin does not attach the volume to the node so the job fails.

To Reproduce
I use Terraform to manage my infrastructure.

Volume:

resource "nomad_external_volume" "redis_volume" {
  type         = "csi"
  plugin_id    = "csi.scaleway.com"
  volume_id    = "redis_volume"
  name         = "redis_volume"
  capacity_min = "1GB"
  capacity_max = "1GB"

  capability {
    access_mode     = "single-node-writer"
    attachment_mode = "file-system"
  }

  mount_options {
    fs_type = "ext4"
  }
}

Job volume:

    volume "database" {
      type = "csi"
      read_only = false
      source = "${nomad_external_volume.redis_volume.id}"
      attachment_mode = "file-system"
      access_mode = "single-node-writer"
      per_alloc = false
    }

Job mount:

      volume_mount {
        volume = "database"
        destination = "/data"
        read_only = false
      }

Expected behavior
The Scaleway volume should be attached to the node where the job is launched when the job is launched.

Details (please complete the following information):

  • Scaleway CSI version: master tag on DockerHub
  • Platform: Ubuntu 22.04.2 LTS
  • Orchestrator and its version: Nomad v1.5.6

Additional context
Logs from the controller plugin...

I0602 17:34:23.987342       1 controller.go:315] ControllerPublishVolume called with {VolumeId:nl-ams-1/04b0f5e0-7144-44a0-88b9-ebf8ef2165a6 NodeId:nl-ams-1/672c2745-e8a8-44f2-89b1-00ac5b7e488c VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:[] VolumeContext:map[encrypted:false] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0602 17:34:26.409020       1 controller.go:405] ControllerUnpublishVolume called with {VolumeId:nl-ams-1/04b0f5e0-7144-44a0-88b9-ebf8ef2165a6 NodeId:nl-ams-1/672c2745-e8a8-44f2-89b1-00ac5b7e488c Secrets:[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0602 17:34:26.567516       1 controller.go:405] ControllerUnpublishVolume called with {VolumeId:nl-ams-1/04b0f5e0-7144-44a0-88b9-ebf8ef2165a6 NodeId:nl-ams-1/672c2745-e8a8-44f2-89b1-00ac5b7e488c Secrets:[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
E0602 17:34:27.750971       1 driver.go:117] error for /csi.v1.Controller/ControllerUnpublishVolume: rpc error: code = Internal desc = scaleway-sdk-go: volume should be attached to a server

Logs from the node plugin...

I0602 17:34:25.599970       1 node.go:60] NodeStageVolume called with {VolumeId:nl-ams-1/04b0f5e0-7144-44a0-88b9-ebf8ef2165a6 PublishContext:map[csi.scaleway.com/volume-id:04b0f5e0-7144-44a0-88b9-ebf8ef2165a6 csi.scaleway.com/volume-name:redis_volume csi.scaleway.com/volume-zone:nl-ams-1] StagingTargetPath:/local/csi/staging/redis_volume/rw-file-system-single-node-writer VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Secrets:[] VolumeContext:map[encrypted:false] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
E0602 17:34:25.612710       1 driver.go:117] error for /csi.v1.Node/NodeStageVolume: rpc error: code = NotFound desc = volume 04b0f5e0-7144-44a0-88b9-ebf8ef2165a6 is not mounted on node yet
I0602 17:34:26.335911       1 node.go:397] NodeUnpublishVolume called with {VolumeId:nl-ams-1/04b0f5e0-7144-44a0-88b9-ebf8ef2165a6 TargetPath:/local/csi/per-alloc/d35eaa56-e74c-5262-5767-8f23401a3dd0/redis_volume/rw-file-system-single-node-writer XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0602 17:34:26.336290       1 mount_helper_common.go:34] Warning: Unmount skipped because path does not exist: /local/csi/per-alloc/d35eaa56-e74c-5262-5767-8f23401a3dd0/redis_volume/rw-file-system-single-node-writer
I0602 17:34:26.339489       1 node.go:172] NodeUnstageVolume called with {VolumeId:nl-ams-1/04b0f5e0-7144-44a0-88b9-ebf8ef2165a6 StagingTargetPath:/local/csi/staging/redis_volume/rw-file-system-single-node-writer XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
E0602 17:34:26.339735       1 driver.go:117] error for /csi.v1.Node/NodeUnstageVolume: rpc error: code = NotFound desc = volume with ID 04b0f5e0-7144-44a0-88b9-ebf8ef2165a6 not found

The volume ID shown there is not the same as the ID of the volume that has been created in my Scaleway account but I've so far not been able to work out why.

RWX support ?

Trying to provision this PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: scw-bssd
  resources:
    requests:
      storage: 1Gi

Will return:

Warning  ProvisioningFailed    2s (x5 over 15s)   csi.scaleway.com_control-plane-6fc9f6bc79-6mrnj_7a3ef33f-c1b9-4e77-8d85-7fae980b5eca  failed to provision volume with StorageClass "scw-bssd": rpc error: code = InvalidArgument desc = volumeCapabilities not supported: access mode not supported

The lack of RWX support makes these volumes unusable for workloads that need shared access across a cluster.

Missing roles for CSI resizer?

Describe the bug

Bug to confirm, as I'm using my own Helm chart built from this repository's manifests.

When launching the new CSI resizer container, I got these errors in the logs (the second one appears after fixing the first one):

E0611 15:00:16.112725       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:scaleway-csi:scaleway-csi-controller" cannot list resource "pods" in API group "" at the cluster scope
E0611 15:01:03.299833       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Pod: unknown (get pods)

I had to add this in cluster roles:

  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["list", "watch"]

Details (please complete the following information):

  • Scaleway CSI version: 0.1.3
  • Platform:
  • Orchestrator and its version: K3S 1.17

Volume failed to mark as attached : Forbidden when patching volumeattachement/status

Describe the bug

We are seeing errors like this one after unsuccessful attach attempts.

We are trying to use the CSI driver to provision volumes in our kubernetes Cluster (Rancher RKE2).

In scaleway console everything looks good. Volumes are attached to the right instance (Kubernetes node running the deployment using the volume)

In Kubernetes, volumes appear as Bound but workload is stuck in creating state and shows error:

AttachVolume.Attach failed for volume "pvc-c5602274-0c49-4167-b626-fd2d78ad8434" : attachdetachment timeout for volume fr-par-2/dc28587c-fe28-403f-854e-2d3f05ce76be

Digging into the csi-attacher pod we see these errors:

I0920 11:30:20.498831 1 csi_handler.go:226] Error processing "csi-37c384c5d58e1937c2c7c055450fa082c5ec96a1295143dd0d61c5fbba077765": failed to mark as attached: volumeattachments.storage.k8s.io "csi-37c384c5d58e1937c2c7c055450fa082c5ec96a1295143dd0d61c5fbba077765" is forbidden: User "system:serviceaccount:kube-system:scaleway-csi-controller" cannot patch resource "volumeattachments/status" in API group "storage.k8s.io" at the cluster scope
I0920 11:30:20.500312 1 csi_handler.go:226] Error processing "csi-4adf5e26220b50a8219466691a8479f195cb34041094c4ad1dde34847fbbb2d8": failed to mark as attached: volumeattachments.storage.k8s.io "csi-4adf5e26220b50a8219466691a8479f195cb34041094c4ad1dde34847fbbb2d8" is forbidden: User "system:serviceaccount:kube-system:scaleway-csi-controller" cannot patch resource "volumeattachments/status" in API group "storage.k8s.io" at the cluster scope

We have verified we did apply the cluster role and role bindings for the service account, yet we get this error.
Our cluster runs with a pod security policy but there doesn't seem to be any issue in that area...

To Reproduce
Steps to reproduce the behavior:
  1. Install an RKE2 cluster (RKE Govt)
  2. Add the Scaleway CSI driver
  3. Apply the example PVC and pod YAML from this repo

Expected behavior
A clear and concise description of what you expected to happen.

Details (please complete the following information):

  • Scaleway CSI version: v0.1.5 and v0.1.7
  • Platform: Kubernetes v1.20.10+rke2r1 and Rancher 2.5.9
  • Orchestrator and its version: Kubernetes v1.20.10+rke2r1 and Rancher 2.5.9

Additional context
Add any other context about the problem here.

We have successfully installed and use the Longhorn CSI in our cluster. We wanted to add the scaleway CSI to provision PVCs from "out of the cluster" in encrypted mode.

Add support for merge_group in the github action

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like
A clear and concise description of what you want to happen.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Memory leak in livenessprobe v2.1.0

Describe the bug
Livenessprobe is slowly leaking memory on my Rancher cluster that uses scaleway-csi. Memory on nodes is slowly being eaten up until the node runs out of memory.
The issue was reported and acknowledged in the livenessprobe repository for v2.1.0 (that you're using here and here) and fixed in livenessprobe v2.2.0.

To Reproduce
Steps to reproduce the behavior:
Apply the 0.1.5 yml and wait a few days. In about a week, livenessprobe will slowly eat all the available memory on a 4 GB node.

Expected behavior
Livenessprobe to not leak memory, obviously. :P

Details (please complete the following information):

  • Scaleway CSI version: 0.1.5
  • Platform:
  • Orchestrator and its version: Rancher v2.5.5 + k8s v1.19.6

Additional context
Add any other context about the problem here.

Upload Volume Snapshot to S3 and provision from S3

Is your feature request related to a problem? Please describe.
We use Velero with the CSI plugin to back up our Kubernetes cluster and cannot set up a multi-zone disaster recovery plan because the volume snapshots are tied to the zone.

Describe the solution you'd like
Since Scaleway allows copying a snapshot to S3, we would like to automate this in the volume snapshot class.
This would then allow us to provision a volume by copying from S3 (or through a VolumeSnapshot / VolumeSnapshotContent) and thus have a multi-zone disaster recovery solution.

Describe alternatives you've considered
We could use Velero's Restic integration, but we know from experience that this does not work well for large volumes (>500GiB).

Additional context
Add any other context or screenshots about the feature request here.

Resize fails

Configuration

  • K3s v1.20.7+k3s1
  • Scaleway CSI Driver v0.1.5 or v0.1.8

Test

Increase disk size on PVCs managed by a StatefulSet following instructions found at Stack Overflow:

  • kubectl edit pvc <name> for each PVC in the StatefulSet, to increase its capacity.
  • kubectl delete sts --cascade=orphan <name> to delete the StatefulSet and leave its pods.
  • kubectl apply -f <name> to recreate the StatefulSet.
  • kubectl rollout restart sts <name> to restart the pods, one at a time. During restart, the pod's PVC will be resized.

Result

The PVC is never resized, even when restarting the StatefulSet pods. The resize stays pending with the event ExternalExpanding: "Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC."

Prestop hook in csi-node-driver-registrar is not working

Describe the bug

prestop hook in csi-node-driver-registrar is not working.

Socket /var/lib/kubelet/plugins_registry/csi.scaleway.com-reg.sock is still present on nodes after removing scaleway-csi.

I can see in the logs (kubelet? not sure, because I'm using k3s, which has all k8s components embedded into one binary) this error about a missing /bin/sh in the container:

Apr 15 13:43:13 anytrack-1 k3s[1031]: E0415 13:43:13.364679    1031 remote_runtime.go:351] ExecSync 23edbf368bba114a96cab615c46f76f17adaf27e36675fdd618f1225ab2d313a '/bin/sh -c rm -rf /registration/csi.scaleway.com-reg.sock /csi/csi.sock' from runtime service failed: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "c220a5d77843857837bd33597203f016721adf53c511948ba76c954be70f11d6": OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file or directory": unknown

As csi-node-driver-registrar uses a distroless image, the missing /bin/sh is expected.

To Reproduce

Uninstall scaleway-csi

Expected behavior

Socket /var/lib/kubelet/plugins_registry/csi.scaleway.com-reg.sock should be deleted.

Details (please complete the following information):

  • Scaleway CSI version: 0.1.1
  • Platform: Ubuntu 18.04
  • Orchestrator and its version: K3S 1.17.3

Additional context

The example for csi-node-driver-registrar deployment on https://github.com/kubernetes-csi/node-driver-registrar is also broken. They also have a pending PR that should remove the socket on SIGTERM kubernetes-csi/node-driver-registrar#61

I think the hook should be removed, even if it means leaving some garbage files on the nodes.

Typos in error messages

Describe the bug
There are some typos in the following error messages:

  • required size is less than the minimun size
  • limit size is less than the minimun size
  • volume capabilites is nil

Failing to resolve VolumeSnapshotClass CRD

Describe the bug
When applying the deployment manifest, it fails with:

Failed to find exact match for snapshot.storage.k8s.io/v1beta1.VolumeSnapshotClass by [kind, name, singularName, shortNames]

To Reproduce

  1. Deploy the manifest

Expected behavior
The manifest should apply successfully.

Details (please complete the following information):

  • Scaleway CSI version: both 0.1.0 and 0.1.1
  • Platform: Ubuntu 18.04 LTS
  • Orchestrator and its version: Kubernetes v1.18

Additional context
@Sh4d1 managed to resolve the issue by making me apply parts of the manifest first (CustomResourceDefinition and VolumeSnapshotClass), then applying the manifest again, possibly hinting at some kind of a race condition.

Error in snapshot-controller container after making a snapshot

Describe the bug

Lots of this error in the snapshot-controller container after making a snapshot:

E0405 19:44:54.127089       1 snapshot_controller.go:325] error updating volume snapshot content status for snapshot snapcontent-afa5714e-2bf1-4456-b94c-658023f3d62c: snapshot controller failed to update snapcontent-afa5714e-2bf1-4456-b94c-658023f3d62c on API server: volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-afa5714e-2bf1-4456-b94c-658023f3d62c" is forbidden: User "system:serviceaccount:kube-system:scaleway-csi-controller" cannot update resource "volumesnapshotcontents/status" in API group "snapshot.storage.k8s.io" at the cluster scope.

To Reproduce
Steps to reproduce the behavior:

Expected behavior
A clear and concise description of what you expected to happen.

Details (please complete the following information):

  • Scaleway CSI version: v0.1.0
  • Platform: Ubuntu 18.04
  • Orchestrator and its version: K3S v1.17.3+k3s1

Additional context

Tried this fix with success on scaleway-csi-snapshotter clusterrole:

--- scaleway-csi-v0.1.0.yaml.orig	2020-04-05 17:33:33.557070333 +0000
+++ scaleway-csi-v0.1.0.yaml	2020-04-05 20:28:45.601751944 +0000
@@ -881,6 +881,9 @@
   - apiGroups: ["snapshot.storage.k8s.io"]
     resources: ["volumesnapshots/status"]
     verbs: ["update"]
+  - apiGroups: ["snapshot.storage.k8s.io"]
+    resources: ["volumesnapshotcontents/status"]
+    verbs: ["update"]
   - apiGroups: ["apiextensions.k8s.io"]
     resources: ["customresourcedefinitions"]
     verbs: ["create", "list", "watch", "delete", "get", "update"]

Wrong capacity displayed

Describe the bug

Not really a bug, but strange behaviour. Perhaps related to CSI itself and not the Scaleway driver.

As Scaleway seems to charge block storage in SI units, I created 2 PVCs in SI units ("3G" instead of "3Gi", "1G" instead of "1Gi"), but when I display them with kubectl I get the wrong capacity:

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                                                                                          STORAGECLASS   REASON   AGE
pvc-********-****-****-****-957fed0bd0f8   954Mi      RWO            Delete           Bound    elasticsearch/elasticsearch-master-elasticsearch-master-0                                                      scw-bssd                26h
pvc-********-****-****-****-9ad3e1384953   3Gi        RWO            Delete           Bound    mongodb/mongodb                                                                                                scw-bssd                2d2h

It seems like the capacity is rounded.

I've verified the disk sizes on the machines and everything is OK with the provisioned volumes (I got 3000000000 bytes and 1000000000 bytes).
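For reference, the displayed values match a plain SI-to-binary unit conversion (a sketch of the arithmetic only; where exactly kubectl rounds is not asserted here):

1 G = 1 000 000 000 B; 1 000 000 000 / 1 048 576 ≈ 953.67 MiB → shown as 954Mi
3 G = 3 000 000 000 B; 3 000 000 000 / 1 073 741 824 ≈ 2.79 GiB

So the volumes themselves are the right size; only the unit of display changes, plus rounding in the output.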

I've tried another storageclass (not CSI) and the displayed capacity is OK:

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                                                                                          STORAGECLASS   REASON   AGE
pvc-7b90cbdd-6a33-49c9-9719-f054cbcc8dcd   3G         RWO            Retain           Bound    default/test2                                                                                     local-path              16m
pvc-24fb7df9-75ef-46f4-ae94-c59f734f8e75   1G         RWO            Retain           Bound    default/test                                                                                      local-path              16m

Details (please complete the following information):

  • Scaleway CSI version: 0.1.1
  • Platform: Ubuntu 18.04
  • Orchestrator and its version: K3S corresponding to K8S 1.17.3

VolumeSnapshotClass missing and not documented

Describe the bug
kubectl apply -f scaleway-csi/deploy/kubernetes/scaleway-csi-v0.2.0.yaml

results in
error: resource mapping not found for name: "scw-snapshot" namespace: "" from "scaleway-csi/deploy/kubernetes/scaleway-csi-v0.2.0.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first

To Reproduce
create k3s cluster
install scaleway secret
kubectl apply -f scaleway-csi/deploy/kubernetes/scaleway-csi-v0.2.0.yaml

Expected behavior
No error.

README.md should be updated with the required steps, if additional steps are needed.

Details (please complete the following information):

  • Scaleway CSI version: v0.2.0
  • Platform: macbook pro / arm64
  • Orchestrator and its version:

Additional context
