
csi-driver-smb's Introduction

SMB CSI Driver for Kubernetes


About

This driver allows Kubernetes to access an SMB server on both Linux and Windows nodes; the CSI plugin name is smb.csi.k8s.io. The driver requires an existing, already configured SMB server. It supports dynamic provisioning of Persistent Volumes via Persistent Volume Claims by creating a new subdirectory under the SMB server.
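
For example, dynamic provisioning is consumed through a PVC like the following; this is a minimal sketch, assuming a StorageClass named smb that uses this driver (an example class appears in the issues further below):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-smb
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: smb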

Project status: GA

Container Images & Kubernetes Compatibility:

Driver Version   Supported k8s version   Supported Windows csi-proxy version
master branch    1.21+                   v0.2.2+
v1.14.0          1.21+                   v0.2.2+
v1.13.0          1.21+                   v0.2.2+
v1.12.0          1.21+                   v0.2.2+

Driver parameters

Please refer to smb.csi.k8s.io driver parameters

Install driver on a Kubernetes cluster

Examples

Troubleshooting

Kubernetes Development

Please refer to development guide

View CI Results

Links

csi-driver-smb's People

Contributors

abhijeetgauravm, andyzhangx, animeshk08, ashishranjan738, aymenfurter, bcho, boddumanohar, davhdavh, dependabot[bot], fossabot, gossion, ialidzhikov, invidian, jayunit100, k8s-ci-robot, kkmsft, lizebang, lizhuqi, luborpetr, masquerade0097, mattcary, mayankshah1607, sakuralbj, slaesh, steveizzle, tonycox, umagnus, vitaliy-leschenko, yerenkow, zeromagic


csi-driver-smb's Issues

enable code coverage test

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

#- go test -covermode=count -coverprofile=profile.cov ./pkg/...
#- $GOPATH/bin/goveralls -coverprofile=profile.cov -service=travis-ci

This repo needs to be added on the code coverage website first; otherwise the following error occurs:

The command "go test -covermode=count -coverprofile=profile.cov ./pkg/..." exited with 0.
5.42s$ $GOPATH/bin/goveralls -coverprofile=profile.cov -service=travis-ci
Bad response status from coveralls: 422
{"message":"Couldn't find a repository matching this job.","error":true}
The command "$GOPATH/bin/goveralls -coverprofile=profile.cov -service=travis-ci" exited with 1.

Describe alternatives you've considered

Additional context

file access issue with smb-provisioner on Windows

What happened:
When mounting multiple pods with smb-provisioner on Windows, an error like the following occurs:

The process cannot access the file because it is being used by another process.

Could be related to the dperson/samba settings; related issue: dperson/samba#330

https://github.com/kubernetes-csi/csi-driver-smb/blob/master/deploy/example/smb-provisioner/smb-server.yaml

Events:
  Type     Reason     Age               From                 Message
  ----     ------     ----              ----                 -------
  Normal   Scheduled  13s               default-scheduler    Successfully assigned default/busybox-smb-6dc9949fdf-tfnzp to 3460k8s000
  Normal   Pulled     8s (x2 over 10s)  kubelet, 3460k8s000  Container image "e2eteam/busybox:1.29" already present on machine
  Normal   Created    8s (x2 over 10s)  kubelet, 3460k8s000  Created container busybox
  Warning  Failed     8s (x2 over 10s)  kubelet, 3460k8s000  Error: failed to start container "busybox": Error response from daemon: hcsshim::CreateComputeSystem busybox: The process cannot access the file because it is being used by another process.
(extra info: {"SystemType":"Container","Name":"busybox","Owner":"docker","VolumePath":"\\\\?\\Volume{de73c5cb-eb16-4bbd-8690-d296f9fb5f93}","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\docker\\windowsfilter\\busybox","Layers":[{"ID":"c004932e-c6ff-5599-9dd8-ef0a516f1ff3","Path":"C:\\ProgramData\\docker\\windowsfilter\\89fc1203de17533dc6ff14692c29bd8263ce2e97a748a9436464e84b4c363f29"},{"ID":"48b7e8b9-ed95-5184-9482-efb548c09374","Path":"C:\\ProgramData\\docker\\windowsfilter\\472e5b320a3654e53f229fcf8750b069a4630463aa55ab7dfdaa49c0a38d8897"},{"ID":"e4df7ab2-d104-5533-87ce-4e3d0d28b7b3","Path":"C:\\ProgramData\\docker\\windowsfilter\\b1026ded499d8d5625545af4c3b4272ca0eef7cefb130a26f045c76a5221fc5b"},{"ID":"0600eb5f-c455-5b88-8982-a94f9f031909","Path":"C:\\ProgramData\\docker\\windowsfilter\\9d776dabf4b393c6e8de3e26cb9f6d874a982302faf6889612ad47d457041ef7"},{"ID":"00f720e2-032e-55d9-9773-7f17e646d730","Path":"C:\\ProgramData\\docker\\windowsfilter\\85f034500ec6f26ab68661c8f64de69fa09f2107a26370c611ef51fcf6526a01"},{"ID":"4b68d70f-1bf3-521e-a6af-79bc89c863b0","Path":"C:\\ProgramData\\docker\\windowsfilter\\345b971231babda13f4615cef234ced8dd3f1cd8496f170def51a02ac456b6cd"},{"ID":"2f65e0e9-6237-5709-9017-117ffa5b08c1","Path":"C:\\ProgramData\\docker\\windowsfilter\\0378b1a51532f2d431f943edc9378fa78d290a33eb4044b16040cd67b4e777c5"},{"ID":"d9747d99-2bbf-53d0-b77a-307f7c3454ec","Path":"C:\\ProgramData\\docker\\windowsfilter\\f9e683c3ae54f0d6422ad60facadd343090c6abd522927a69b45e712ecb974be"},{"ID":"700c56b2-85b0-5ce8-978d-f18c5d498085","Path":"C:\\ProgramData\\docker\\windowsfilter\\b69124eb460d41297faa690ed36fc1442bc0c8d99c80f0dde9739b0944fa2c10"},{"ID":"576dd396-6937-5f19-9328-337cf5024d27","Path":"C:\\ProgramData\\docker\\windowsfilter\\94da01276906be01c57e374fa8b357f126785d9f25c9ee2e5c55137f3d4743e3"},{"ID":"67d7feed-a5f1-5e6d-b712-fa1316239201","Path":"C:\\ProgramData\\docker\\windowsfilter\\e5d5b06820145e28a845d134f9b5cb6ab2a24ea3fdd719df1954d68b6a42bef5"},{"ID":"8014b4cb-ad5a-5ca8-9cc0-16a4c72300e5","Path":"C:\\ProgramData\\docker\\windowsfilter\\bf5a61c3b82581c95eafa4b4bd4993058671fd697964a32938fe505536786493"}],"HostName":"busybox-smb-6dc9949fdf-tfnzp","MappedDirectories":[{"HostPath":"c:\\var\\lib\\kubelet\\pods\\bd7b8709-f876-44e6-bb21-35694fe7f9d7\\volumes\\kubernetes.io~csi\\pvc-5b6019d0-74fc-4646-9943-138975006f8b\\mount","ContainerPath":"c:\\mnt\\smb","ReadOnly":false,"BandwidthMaximum":0,"IOPSMaximum":0,"CreateInUtilityVM":false},{"HostPath":"c:\\var\\lib\\kubelet\\pods\\bd7b8709-f876-44e6-bb21-35694fe7f9d7\\volumes\\kubernetes.io~secret\\default-token-6mxbl","ContainerPath":"c:\\var\\run\\secrets\\kubernetes.io\\serviceaccount","ReadOnly":true,"BandwidthMaximum":0,"IOPSMaximum":0,"CreateInUtilityVM":false}],"HvPartition":false,"NetworkSharedContainerName":"fa04cbb476914eca9864e8bcaa53d57d2f2c36d1788f57e7e0150a4e3cc1f857"})
  Warning  BackOff  7s  kubelet, 3460k8s000  Back-off restarting failed container


azureuser@3460k8s000 c:\var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-5b6019d0-74fc-4646-9943-138975006f8b>dir
 Volume in drive C is Windows
 Volume Serial Number is F01C-4478

 Directory of c:\var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-5b6019d0-74fc-4646-9943-138975006f8b

08/10/2020  02:42 PM    <DIR>          .
08/10/2020  02:42 PM    <DIR>          ..
08/10/2020  02:42 PM    <SYMLINKD>     globalmount [\\52.177.137.222\share]
08/10/2020  02:42 PM                90 vol_data.json
               1 File(s)             90 bytes
               3 Dir(s)  118,885,298,176 bytes free

azureuser@3460k8s000 c:\var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-5b6019d0-74fc-4646-9943-138975006f8b>cd blobal

azureuser@3460k8s000 c:\var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-5b6019d0-74fc-4646-9943-138975006f8b>cd globalmount
The process cannot access the file because it is being used by another process.

What you expected to happen:

How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Volume cannot be accessed when the csi-smb-node-xxxxx pod on the same node as the pod using the PV is restarted

What happened:
The SMB CSI volume becomes inaccessible when csi-smb-node is restarted.

What you expected to happen:
The SMB CSI volume should still be accessible after the csi-smb-node pods are restarted.

How to reproduce it:

  1. Create a pod (smb-app-00001) that uses an SMB CSI volume.
  2. Get the node name (node-0001) on which the pod (smb-app-00001) is running.
  3. Get the csi-smb-node pod name (e.g. csi-smb-node-xxxxx) on that node (node-0001) and restart that pod, or restart all pods of the csi-smb-node DaemonSet.
  4. Check the SMB CSI volume inside the pod (smb-app-00001) again; the volume is no longer accessible (see the command sketch below).
    Error message: "ls: /mnt/smb: Resource temporarily unavailable. command terminated with exit code 1"
    or "Host is down"

Anything else we need to know?:
This is a critical issue, please fix it as soon as possible!

Environment:

  • CSI Driver version: v0.3.0

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

  • OS (e.g. from /etc/os-release):
    NAME="Container Linux by CoreOS"
    ID=coreos
    VERSION=2135.6.0
    VERSION_ID=2135.6.0
    BUILD_ID=2019-07-30-0722
    PRETTY_NAME="Container Linux by CoreOS 2135.6.0 (Rhyolite)"
    ANSI_COLOR="38;5;75"
    HOME_URL="https://coreos.com/"
    BUG_REPORT_URL="https://issues.coreos.com"
    COREOS_BOARD="amd64-usr"

  • Kernel (e.g. uname -a):
    Linux xxxxx 4.19.56-coreos-r1 #1 SMP Tue Jul 30 06:40:10 -00 2019 x86_64 Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz GenuineIntel GNU/Linux

  • Install tools:

  • Others:

Node for Windows crashes, nil pointer dereference

What happened:

I have an otherwise healthy Windows node, but the csi-driver-smb node plugin keeps crashing because of something related to safe_mounter_windows.go. On Windows, the csi-smb-node-win pod doesn't stay up for more than a few seconds.

Git Commit: 9a9f0e8e5158389dec157e2195dce71b78247630
Go Version: go1.14.4
Platform: windows/amd64

Streaming logs below:
I1111 13:23:46.103185    2092 driver.go:93] Enabling controller service capability: CREATE_DELETE_VOLUME
I1111 13:23:46.103185    2092 driver.go:112] Enabling volume access mode: SINGLE_NODE_WRITER
I1111 13:23:46.103185    2092 driver.go:112] Enabling volume access mode: SINGLE_NODE_READER_ONLY
I1111 13:23:46.103185    2092 driver.go:112] Enabling volume access mode: MULTI_NODE_READER_ONLY
I1111 13:23:46.103185    2092 driver.go:112] Enabling volume access mode: MULTI_NODE_SINGLE_WRITER
I1111 13:23:46.103185    2092 driver.go:112] Enabling volume access mode: MULTI_NODE_MULTI_WRITER
I1111 13:23:46.103185    2092 driver.go:103] Enabling node service capability: STAGE_UNSTAGE_VOLUME
I1111 13:23:46.105164    2092 server.go:118] Listening for connections on address: &net.UnixAddr{Name:"C:\\\\csi\\\\csi.sock", Net:"unix"}
I1111 13:24:23.055069    2092 utils.go:107] GRPC call: /csi.v1.Identity/Probe
I1111 13:24:23.055069    2092 utils.go:108] GRPC request: {}
I1111 13:24:23.055069    2092 utils.go:113] GRPC response: {"ready":{"value":true}}
I1111 13:24:48.102229    2092 utils.go:107] GRPC call: /csi.v1.Node/NodeGetCapabilities
I1111 13:24:48.102229    2092 utils.go:108] GRPC request: {}
I1111 13:24:48.102229    2092 utils.go:113] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I1111 13:24:48.109258    2092 utils.go:107] GRPC call: /csi.v1.Node/NodeStageVolume
I1111 13:24:48.109258    2092 utils.go:108] GRPC request: {"secrets":"***stripped***","staging_target_path":"\\var\\lib\\kubelet\\plugins\\kubernetes.io\\csi\\pv\\pvc-406947d6-29a0-4388-808f-4d2bcedccba6\\globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["dir_mode=0777","file_mode=0777","uid=1001","gid=1001"]}},"access_mode":{"mode":1}},"volume_context":{"createSubDir":"false","source":"//smb-server.default.svc.cluster.local/share","storage.kubernetes.io/csiProvisionerIdentity":"1605115216184-8081-smb.csi.k8s.io"},"volume_id":"pvc-406947d6-29a0-4388-808f-4d2bcedccba6"}
I1111 13:24:48.110838    2092 nodeserver.go:195] targetPath(\var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-406947d6-29a0-4388-808f-4d2bcedccba6\globalmount) volumeID(pvc-406947d6-29a0-4388-808f-4d2bcedccba6) context(map[createSubDir:false source://smb-server.default.svc.cluster.local/share storage.kubernetes.io/csiProvisionerIdentity:1605115216184-8081-smb.csi.k8s.io]) mountflags([dir_mode=0777 file_mode=0777 uid=1001 gid=1001]) mountOptions([AZURE\USERNAME])
I1111 13:24:48.110838    2092 safe_mounter_windows.go:147] IsLikelyNotMountPoint: \var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-406947d6-29a0-4388-808f-4d2bcedccba6\globalmount
I1111 13:24:48.110838    2092 safe_mounter_windows.go:205] Exists path: \var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-406947d6-29a0-4388-808f-4d2bcedccba6\globalmount
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x10 pc=0x897f8c]

goroutine 42 [running]:
github.com/kubernetes-csi/csi-driver-smb/pkg/mounter.(*CSIProxyMounter).ExistsPath(0xc000020820, 0xc00003a0e0, 0x62, 0x0, 0xa5a6e7, 0x19)
        /root/go/src/github.com/kubernetes-csi/csi-driver-smb/pkg/mounter/safe_mounter_windows.go:210 +0x17c
github.com/kubernetes-csi/csi-driver-smb/pkg/mounter.(*CSIProxyMounter).IsLikelyNotMountPoint(0xc000020820, 0xc00003a0e0, 0x62, 0x0, 0x0, 0xa36540)
        /root/go/src/github.com/kubernetes-csi/csi-driver-smb/pkg/mounter/safe_mounter_windows.go:148 +0xdf
github.com/kubernetes-csi/csi-driver-smb/pkg/smb.(*Driver).ensureMountPoint(0xc000050080, 0xc00003a0e0, 0x62, 0x0, 0xa71b49, 0x47)
        /root/go/src/github.com/kubernetes-csi/csi-driver-smb/pkg/smb/nodeserver.go:274 +0x70
github.com/kubernetes-csi/csi-driver-smb/pkg/smb.(*Driver).NodeStageVolume(0xc000050080, 0xb25320, 0xc0001fc090, 0xc000047800, 0xc000050080, 0xed9b77, 0xc00004d420)
        /root/go/src/github.com/kubernetes-csi/csi-driver-smb/pkg/smb/nodeserver.go:198 +0x6d9
github.com/container-storage-interface/spec/lib/go/csi._Node_NodeStageVolume_Handler.func1(0xb25320, 0xc0001fc090, 0xa1cbc0, 0xc000047800, 0xa53ed3, 0x10, 0xc000245ae8, 0x1)
        /root/go/pkg/mod/github.com/container-storage-interface/[email protected]/lib/go/csi/csi.pb.go:5941 +0x8d
github.com/kubernetes-csi/csi-driver-smb/pkg/csi-common.logGRPC(0xb25320, 0xc0001fc090, 0xa1cbc0, 0xc000047800, 0xc000005d00, 0xc000005d20, 0xc000155b78, 0x49e08f, 0x9f6700, 0xc0001fc090)
        /root/go/src/github.com/kubernetes-csi/csi-driver-smb/pkg/csi-common/utils.go:109 +0x187
github.com/container-storage-interface/spec/lib/go/csi._Node_NodeStageVolume_Handler(0xa3b2c0, 0xc000050080, 0xb25320, 0xc0001fc090, 0xc0000477a0, 0xa7cca8, 0xb25320, 0xc0001fc090, 0xc0002ee000, 0x19d)
        /root/go/pkg/mod/github.com/container-storage-interface/[email protected]/lib/go/csi/csi.pb.go:5943 +0x152
google.golang.org/grpc.(*Server).processUnaryRPC(0xc00002f500, 0xb2b2e0, 0xc0000cd800, 0xc000066200, 0xc00004eb40, 0xf31da0, 0x0, 0x0, 0x0)
        /root/go/pkg/mod/google.golang.org/[email protected]/server.go:1024 +0x508
google.golang.org/grpc.(*Server).handleStream(0xc00002f500, 0xb2b2e0, 0xc0000cd800, 0xc000066200, 0x0)
        /root/go/pkg/mod/google.golang.org/[email protected]/server.go:1313 +0xd44
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc00032df90, 0xc00002f500, 0xb2b2e0, 0xc0000cd800, 0xc000066200)
        /root/go/pkg/mod/google.golang.org/[email protected]/server.go:722 +0xa8
created by google.golang.org/grpc.(*Server).serveStreams.func1
        /root/go/pkg/mod/google.golang.org/[email protected]/server.go:720 +0xa8

What you expected to happen:

The Windows node would not panic, and would print out an actionable error message if it needed to exit.

How to reproduce it:

Running on a Windows cluster, version 0.4.0, using the install script from https://github.com/kubernetes-csi/csi-driver-smb/blob/master/docs/install-csi-driver-v0.4.0.md

Anything else we need to know?:

Environment:

provide storage class support

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

We could provide storage class support even if it's static provisioning; this driver would then pass storage parameters to NodeStageVolume.

refer to https://kubernetes-csi.github.io/docs/secrets-and-credentials-storage-class.html

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  source: "//smb-server.default.svc.cluster.local/share"
  csi.storage.k8s.io/node-stage-secret-name: "smbcreds"
  csi.storage.k8s.io/node-stage-secret-namespace: "default"
reclaimPolicy: Retain # can only support retain
volumeBindingMode: Immediate

Describe alternatives you've considered

Additional context

Unable to mount multiple shares in one pod

What happened:
When two shares are declared in a deployment, the pod never becomes ready. Having one or the other volume works perfectly.
What you expected to happen:
To be able to mount multiple shares/volumes in a single pod.
How to reproduce it:
Declare two PVs/PVCs per the documentation.
Create a deployment mounting the PVCs at separate volume mount points:

  • /host/shareA mounted at /a
  • /host/shareB mounted at /b

Having either one declared alone works, but with both declared the container never mounts all the volumes. Having a single SMB mount and a different volume (Longhorn in my case) works. A sketch of the two PVs is below.
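
A minimal sketch of the two PVs (the share paths and the smbcreds secret are taken from the description and the other examples in this document); note that spec.csi.volumeHandle must be unique per PV, since CSI identifies volumes by that handle:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-share-a
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: share-a-volumeid   # must differ from the other PV's handle
    volumeAttributes:
      source: //host/shareA
    nodeStageSecretRef:
      name: smbcreds
      namespace: default
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-share-b
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: share-b-volumeid
    volumeAttributes:
      source: //host/shareB
    nodeStageSecretRef:
      name: smbcreds
      namespace: default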

Events:
  Type     Reason            Age    From                      Message
  ----     ------            ----   ----                      -------
  Warning  FailedScheduling  5m13s  default-scheduler         AssumePod failed: pod 95a37a18-771a-4479-859f-495f938ece39 is in the cache, so can't be assumed
  Normal   Scheduled         5m13s  default-scheduler         Successfully assigned beets/beets-5bbb955548-29kmd to rancher-worker2
  Warning  FailedMount       3m10s  kubelet, rancher-worker2  Unable to attach or mount volumes: unmounted volumes=[m], unattached volumes=[default-token-48qlb m a]: timed out waiting for the condition
  Warning  FailedMount       56s    kubelet, rancher-worker2  Unable to attach or mount volumes: unmounted volumes=[m], unattached volumes=[a default-token-48qlb m]: timed out waiting for the condition

Anything else we need to know?:
Looking at the logs for the CSI SMB node, it looks like it mounted just fine.

GRPC request: volume_id:"arbitrary-volumeid" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pv-music/globalmount" target_path:"/var/lib/kubelet/pods/3222296d-cab7-468f-b92c-7c1309d49772/volumes/kubernetes.io~csi/pv-music/mount" volume_capability:<mount:<mount_flags:"dir_mode=0777" mount_flags:"file_mode=0777" mount_flags:"vers=2.0" > access_mode:<mode:MULTI_NODE_MULTI_WRITER > > volume_context:<key:"csi.storage.k8s.io/ephemeral" value:"false" > volume_context:<key:"csi.storage.k8s.io/pod.name" value:"beets-656454cffb-9ftf5" > volume_context:<key:"csi.storage.k8s.io/pod.namespace" value:"beets" > volume_context:<key:"csi.storage.k8s.io/pod.uid" value:"3222296d-cab7-468f-b92c-7c1309d49772" > volume_context:<key:"csi.storage.k8s.io/serviceAccount.name" value:"default" > volume_context:<key:"source" value:"//192.168.100.180/Music" > NodePublishVolume called with request {arbitrary-volumeid map[] /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pv-music/globalmount /var/lib/kubelet/pods/3222296d-cab7-468f-b92c-7c1309d49772/volumes/kubernetes.io~csi/pv-music/mount mount:<mount_flags:"dir_mode=0777" mount_flags:"file_mode=0777" mount_flags:"vers=2.0" > access_mode:<mode:MULTI_NODE_MULTI_WRITER > false map[] map[csi.storage.k8s.io/ephemeral:false csi.storage.k8s.io/pod.name:beets-656454cffb-9ftf5 csi.storage.k8s.io/pod.namespace:beets csi.storage.k8s.io/pod.uid:3222296d-cab7-468f-b92c-7c1309d49772 csi.storage.k8s.io/serviceAccount.name:default source://192.168.100.180/Music] {} [] 0} NodePublishVolume: mounting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pv-music/globalmount at /var/lib/kubelet/pods/3222296d-cab7-468f-b92c-7c1309d49772/volumes/kubernetes.io~csi/pv-music/mount with mountOptions: [bind] NodePublishVolume: mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pv-music/globalmount at /var/lib/kubelet/pods/3222296d-cab7-468f-b92c-7c1309d49772/volumes/kubernetes.io~csi/pv-music/mount successfully

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
    Rancher v2.3.5
    Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3",
    Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.5"
  • OS (e.g. from /etc/os-release):
    Ubuntu 20.04 LTS
  • Kernel (e.g. uname -a):
    5.4.0-37
  • Install tools:
  • Others:

Volume cannot be mounted

What happened:
I've followed all of the instructions for both options 1 and 2 in the e2e usage guide and get an error saying that k8s failed to mount the volume because the cifs filesystem is not supported by the system. Googling the error, the only suggestions I can find say to install cifs-utils on my machine, which isn't possible since the pod never starts.

Here is the full error log from the pod (with the actual address replaced with server_ip/path)

Warning  FailedMount       54s (x3 over 5m29s)    kubelet, gke-vibe-nonprod-billy-blue-3a64113a-76wg  Unable to mount volumes for pod "statefulset-smb-0_default(af847eb9-d190-11ea-af8d-4201ac10000b)": timeout expired waiting for volumes to attach or mount for pod "default"/"statefulset-smb-0". list of unmounted volumes=[persistent-storage]. list of unattached volumes=[persistent-storage default-token-5lcvd]
  Warning  FailedMount       22s (x11 over 7m26s)   kubelet, gke-vibe-nonprod-billy-blue-3a64113a-76wg  MountVolume.MountDevice failed for volume "pvc-af8309cc-d190-11ea-af8d-4201ac10000b" : rpc error: code = Internal desc = volume(pvc-af8309cc-d190-11ea-af8d-4201ac10000b) mount "//server_ip/path" on "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-af8309cc-d190-11ea-af8d-4201ac10000b/globalmount" failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=0777,uid=1001,gid=1001,<masked> //server_ip/path /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-af8309cc-d190-11ea-af8d-4201ac10000b/globalmount
Output: mount error: cifs filesystem not supported by the system
mount error(19): No such device
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
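
mount error(19) (ENODEV) means the node's kernel has no cifs filesystem support, so this has to be addressed on the node itself, not in the pod. A minimal sketch of checking and, on a Debian/Ubuntu node image, installing the pieces; note that some managed node images may not ship the cifs kernel module at all, in which case a different node image is needed:

# run on the node, not in the pod:
grep cifs /proc/filesystems || sudo modprobe cifs            # is cifs support present or loadable?
sudo apt-get update && sudo apt-get install -y cifs-utils    # userspace mount helper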

Here are the full logs from the csi driver (with the actual address replaced with server_ip/path)

I0729 11:51:59.242088       1 utils.go:118] GRPC response: ready:<value:true > 
I0729 11:52:27.493775       1 utils.go:111] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0729 11:52:27.493804       1 utils.go:112] GRPC request: 
I0729 11:52:27.493815       1 nodeserver.go:255] NodeGetCapabilities called with request {{} [] 0}
I0729 11:52:27.493829       1 utils.go:118] GRPC response: capabilities:<rpc:<type:STAGE_UNSTAGE_VOLUME > > 
I0729 11:52:27.600036       1 utils.go:111] GRPC call: /csi.v1.Node/NodeStageVolume
I0729 11:52:27.600158       1 utils.go:112] GRPC request: volume_id:"pvc-af8309cc-d190-11ea-af8d-4201ac10000b" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-af8309cc-d190-11ea-af8d-4201ac10000b/globalmount" volume_capability:<mount:<fs_type:"ext4" mount_flags:"dir_mode=0777" mount_flags:"file_mode=0777" mount_flags:"uid=1001" mount_flags:"gid=1001" > access_mode:<mode:SINGLE_NODE_WRITER > > secrets:<key:"password" value:"****" > secrets:<key:"username" value:"****" > volume_context:<key:"createSubDir" value:"true" > volume_context:<key:"source" value:"//server_ip/path" > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1596017528654-8081-smb.csi.k8s.io" > 
I0729 11:52:27.600319       1 nodeserver.go:142] NodeStageVolume called with request {pvc-af8309cc-d190-11ea-af8d-4201ac10000b map[] /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-af8309cc-d190-11ea-af8d-4201ac10000b/globalmount mount:<fs_type:"ext4" mount_flags:"dir_mode=0777" mount_flags:"file_mode=0777" mount_flags:"uid=1001" mount_flags:"gid=1001" > access_mode:<mode:SINGLE_NODE_WRITER >  map[password:**** username:payg_downgrade] map[createSubDir:true source://server_ip/path storage.kubernetes.io/csiProvisionerIdentity:1596017528654-8081-smb.csi.k8s.io] {} [] 0}
I0729 11:52:27.600356       1 nodeserver.go:202] targetPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-af8309cc-d190-11ea-af8d-4201ac10000b/globalmount) volumeID(pvc-af8309cc-d190-11ea-af8d-4201ac10000b) context(map[createSubDir:true source://server_ip/path storage.kubernetes.io/csiProvisionerIdentity:1596017528654-8081-smb.csi.k8s.io]) mountflags([dir_mode=0777 file_mode=0777 uid=1001 gid=1001]) mountOptions([dir_mode=0777 file_mode=0777 uid=1001 gid=1001])
I0729 11:52:29.242468       1 utils.go:111] GRPC call: /csi.v1.Identity/Probe
I0729 11:52:29.242555       1 utils.go:112] GRPC request: 
I0729 11:52:29.242592       1 utils.go:118] GRPC response: ready:<value:true > 
I0729 11:52:32.600562       1 mount_linux.go:146] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,file_mode=0777,uid=1001,gid=1001,<masked> //server_ip/path /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-af8309cc-d190-11ea-af8d-4201ac10000b/globalmount)
E0729 11:52:32.607190       1 mount_linux.go:150] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=0777,uid=1001,gid=1001,<masked> //server_ip/path /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-af8309cc-d190-11ea-af8d-4201ac10000b/globalmount
Output: mount error: cifs filesystem not supported by the system
mount error(19): No such device
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

What you expected to happen:

I had expected that the volume would be mounted.

How to reproduce it:

Follow the instructions in the e2e guide.

Anything else we need to know?:

Environment:

  • CSI Driver version: Master
  • Kubernetes version (use kubectl version): v1.14.10-gke.42

second mount on the same SMB server address fails on Windows

What happened:
Create two PVs referencing the same SMB server address and mount both PVs on the same node; the second mount fails like the following:

Events:
  Type     Reason       Age                    From                 Message
  ----     ------       ----                   ----                 -------
  Normal   Scheduled    16m                    default-scheduler    Successfully assigned default/busybox-smb-6dc9949fdf-mh6t9 to 1520k8s001
  Warning  FailedMount  5m25s (x2 over 9m56s)  kubelet, 1520k8s001  Unable to attach or mount volumes: unmounted volumes=[smb], unattached volumes=[default-token-zmz89 smb]: timed out waiting for the condition
  Warning  FailedMount  54s (x5 over 14m)      kubelet, 1520k8s001  Unable to attach or mount volumes: unmounted volumes=[smb], unattached volumes=[smb default-token-zmz89]: timed out waiting for the condition
  Warning  FailedMount  24s (x15 over 16m)     kubelet, 1520k8s001  MountVolume.MountDevice failed for volume "pv-smb" : rpc error: code = Internal desc = volume(abc-volumeid) mount "//f60068b32f4414710baa1f7.file.core.windows.net/pvc-7e46303a-5987-48d6-9f9c-34af9609dbdf" on "\\var\\lib\\kubelet\\plugins\\kubernetes.io\\csi\\pv\\pv-smb\\globalmount" failed with smb mapping failed with error: rpc error: code = Unknown desc = NewSmbGlobalMapping failed. output: "New-SmbGlobalMapping : Multiple connections to a server or shared resource by the same user, using more than one user \r\nname, are not allowed. Disconnect all previous connections to the server or shared resource and try again. \r\nAt line:1 char:190\r\n+ ... ser, $PWord;New-SmbGlobalMapping -RemotePath $Env:smbremotepath -Cred ...\r\n+                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n    + CategoryInfo          : NotSpecified: (MSFT_SmbGlobalMapping:ROOT/Microsoft/...mbGlobalMapping) [New-SmbGlobalMa \r\n   pping], CimException\r\n    + FullyQualifiedErrorId : Windows System Error 1219,New-SmbGlobalMapping\r\n \r\n", err: exit status 1

Windows servers don't allow two New-SmbGlobalMapping mounts with the same remote path:
https://github.com/kubernetes-csi/csi-driver-smb/blob/master/pkg/smb/smb_common_windows.go#L34

What you expected to happen:
The second mount should succeed even when there is already a mount on the node.

A related workaround fix exists in k8s v1.14.8: https://github.com/kubernetes/kubernetes/blob/211047e9a1922595eaa3a1127ed365e9299a6c23/pkg/util/mount/mount_windows.go#L94-L105
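
The linked workaround amounts to checking for an existing mapping before creating a new one; a minimal PowerShell sketch of that idea ($remotePath and $cred are placeholders):

$mapping = Get-SmbGlobalMapping -RemotePath $remotePath -ErrorAction SilentlyContinue
if (-not $mapping) {
    New-SmbGlobalMapping -RemotePath $remotePath -Credential $cred
}
# otherwise reuse the existing global mapping instead of mounting again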

How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

enable sanity test

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

Refer to the following link to set up a containerized Samba server in the CI environment (a run sketch follows the link):
https://github.com/dperson/samba
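
A minimal sketch of running that image locally, mirroring the argument style used by the smb-server.yaml example elsewhere in this document (the credentials are placeholders):

docker run -d --name smb-server -p 445:445 dperson/samba \
  -u "testuser;testpass" \
  -s "share;/smbshare/;yes;no;no;all;none" \
  -p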

# make sanity-test
CGO_ENABLED=0 GOOS=linux go build -a -ldflags "-X github.com/csi-driver/csi-driver-smb/pkg/smb.driverVersion=v0.1.0 -X github.com/csi-driver/csi-driver-smb/pkg/smb.gitCommit=26289639ca62e6f637b49b48c7291ce50e61648d -X github.com/csi-driver/csi-driver-smb/pkg/smb.buildDate=2020-05-11T03:17:55Z -s -w -extldflags '-static'" -o _output/smbplugin ./pkg/smbplugin
go test -v -timeout=10m ./test/sanity
=== RUN   TestSanity
--- FAIL: TestSanity (0.00s)
    sanity_test.go:43:
                Error Trace:    sanity_test.go:43
                Error:          Received unexpected error:
                                If you are running tests locally, you will need to set the following env vars: $TENANT_ID, $SUBSCRIPTION_ID, $AAD_CLIENT_ID, $AAD_CLIENT_SECRET, $RESOURCE_GROUP, $LOCATION
                Test:           TestSanity
    sanity_test.go:44:
                Error Trace:    sanity_test.go:44
                Error:          Expected value not to be nil.
                Test:           TestSanity
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x884f48]

goroutine 7 [running]:
testing.tRunner.func1(0xc0000f8200)
        /usr/local/go/src/testing/testing.go:874 +0x3a3
panic(0x905920, 0xda8490)
        /usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/csi-driver/csi-driver-smb/test/sanity.TestSanity(0xc0000f8200)
        /root/go/src/github.com/csi-driver/csi-driver-smb/test/sanity/sanity_test.go:49 +0x198
testing.tRunner(0xc0000f8200, 0x9be640)
        /usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:960 +0x350
FAIL    github.com/csi-driver/csi-driver-smb/test/sanity        0.008s
FAIL

Describe alternatives you've considered

Additional context

publish Linux and Windows container images on quay.io

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

Currently the Dockerfiles are here:
https://github.com/kubernetes-csi/csi-driver-smb/blob/master/pkg/smbplugin/Dockerfile
https://github.com/kubernetes-csi/csi-driver-smb/blob/master/pkg/smbplugin/Windows.Dockerfile

could refer to:
https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/release-tools/travis.yml

do you know how to do that? thanks
@wozniakjan

cc @jingxu97 @msau42

Describe alternatives you've considered

Additional context

Force close on open smb-fileLock-handles and sessions

Is your feature request related to a problem? Please describe.
Long periods of locked file handles and folders over an SMB share make a normal relocation of a k8s pod painful.

Describe the solution you'd like in detail
Forcefully close the remaining SMB connections to unlock the file handles on the SMB server.

Additional context
I used the flex-smb-driver before, and the main issue was not that most Unix server apps (like MongoDB and SQL Server) did not support it well, or that some (like certain Java apps) were just slow. It was simply the long-lived open file-handle locks on part of the mounts. Even a locked *.pid file could prevent the restart of a pod for more than 30 minutes.

More than once, a manual force close was required with:
Get-SmbOpenFile && Close-SmbOpenFile or Get-SmbSession && Close-SmbSession
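
A minimal sketch of that manual cleanup, run on the SMB server (the share path filter is a placeholder):

Get-SmbOpenFile | Where-Object { $_.Path -like "*\share\*" } | Close-SmbOpenFile -Force
Get-SmbSession | Close-SmbSession -Force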

fix unit test failures on Windows

What happened:

The following unit tests could not pass on Windows; we should either fix those failures on Windows or disable those tests from running on Windows.

C:\Users\xiazhang\go\src\github.com\kubernetes-csi\csi-driver-smb\pkg\smb>go test
E0812 10:32:35.859486 1580532 nodeserver.go:313] MakeDir failed on target: ./smb.go (mkdir ./smb.go: The system cannot find the path specified.)
--- FAIL: TestNodeStageVolume (0.04s)
    nodeserver_test.go:122: test case: [Error] Not a Directory, Unexpected error: rpc error: code = Internal desc = Could not mount target "./smb.go": mkdir ./smb.go: The system cannot find the path specified.
    nodeserver_test.go:122: test case: [Error] Failed SMB mount mocked by MountSensitive, Unexpected error: prepare stage path failed for ./error_mount_sens_source with error: could not cast to csi proxy class
    nodeserver_test.go:122: test case: [Success] Valid request, Unexpected error: prepare stage path failed for ./source_test with error: could not cast to csi proxy class
E0812 10:32:35.915490 1580532 nodeserver.go:313] MakeDir failed on target: ./smb.go (mkdir ./smb.go: The system cannot find the path specified.)
--- FAIL: TestNodePublishVolume (0.02s)
    nodeserver_test.go:274: test case: [Error] Not a directory, Unexpected error: rpc error: code = Internal desc = Could not mount target "./smb.go": mkdir ./smb.go: The system cannot find the path specified.
    nodeserver_test.go:274: test case: [Error] Mount error mocked by Mount, Unexpected error: prepare publish failed for ./target_test with error: could not cast to csi proxy class
    nodeserver_test.go:274: test case: [Success] Valid request read only, Unexpected error: prepare publish failed for ./target_test with error: could not cast to csi proxy class
    nodeserver_test.go:274: test case: [Success] Valid request, Unexpected error: prepare publish failed for ./target_test with error: could not cast to csi proxy class
--- FAIL: TestNodeUnpublishVolume (0.00s)
    nodeserver_test.go:327: test case: [Error] Unmount error mocked by IsLikelyNotMountPoint, Unexpected error: rpc error: code = Internal desc = failed to unmount target "./error_is_likely_target": could not cast to csi proxy class
    nodeserver_test.go:327: test case: [Success] Valid request, Unexpected error: rpc error: code = Internal desc = failed to unmount target "./abc.go": could not cast to csi proxy class
--- FAIL: TestNodeUnstageVolume (0.00s)
    nodeserver_test.go:378: test case: [Error] CleanupMountPoint error mocked by IsLikelyNotMountPoint, Unexpected error: rpc error: code = Internal desc = failed to unmount staging target "./error_is_likely_target": could not cast to csi proxy class
    nodeserver_test.go:378: test case: [Success] Valid request, Unexpected error: rpc error: code = Internal desc = failed to unmount staging target "./abc.go": could not cast to csi proxy class
IsCorruptedDir(./error_is_likely_target) returned with error: <nil>W0812 10:32:35.983487 1580532 nodeserver.go:303] ReadDir ./false_is_likely_target failed with open ./false_is_likely_target: The system cannot find the file specified., unmount this directory
E0812 10:32:35.984504 1580532 nodeserver.go:313] MakeDir failed on target: ./smb.go (mkdir ./smb.go: The system cannot find the path specified.)
--- FAIL: TestIsCorruptedDir (0.00s)
    smb_test.go:54: failed to create curruptedPath: symlink C:\Users\xiazhang\AppData\Local\Temp\csi-mount-test216866723 C:\Users\xiazhang\AppData\Local\Temp\csi-mount-test216866723\curruptedPath: A required privilege is not held by the client.
I0812 10:32:36.012492 1580532 smb.go:58]
DRIVER INFORMATION:
-------------------
Build Date: N/A
Compiler: gc
Driver Name: smb.csi.k8s.io
Driver Version: N/A
Git Commit: N/A
Go Version: go1.13.14

What you expected to happen:

How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

add e2e test

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

Will work on this after the migration to
https://github.com/kubernetes-csi/csi-driver-smb

Refer to the following link to set up a containerized Samba server in the CI environment:
https://github.com/dperson/samba

Describe alternatives you've considered

  • step 1. Suppose you are already on a k8s cluster; run make e2e-test on that cluster. Could refer to https://github.com/kubernetes-sigs/azurefile-csi-driver/tree/master/test/e2e
    (the driver must be installed manually before running make e2e-test)
  • step 2. Use test-infra to set up a k8s cluster (probably an Azure k8s cluster), then build the CSI driver and install it via the Helm chart. Let's work on step 1 first.

Additional context

test: re-write unit test logic to assert on different error message depending on host machine

What happened:
A lot of the unit tests in pkg/smb are failing on Windows due to incompatible file paths.

What you expected to happen:
These unit tests must be compatible across Linux and Windows.

How to reproduce it:
Run go test ./pkg/smb/... on Windows

Anything else we need to know?:

To fix it, we may first want a way to ensure file path compatibility across Windows and Linux. Refer to #107.

The tests will still fail, as the desired failure messages differ on Windows and Linux. We may have to rewrite these tests to assert on a Windows-specific error message on Windows, and similarly for Linux; a sketch of the idea follows.
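
A minimal Go sketch, selecting the expected error text by runtime.GOOS; the message strings are taken from the failures quoted in the issues above, and the helper names are illustrative:

package smb

import (
	"runtime"
	"strings"
	"testing"
)

// expectedMkdirError returns the substring a failed MakeDir is expected to
// produce on the host OS, since Windows and Linux word the error differently.
func expectedMkdirError() string {
	if runtime.GOOS == "windows" {
		return "The system cannot find the path specified"
	}
	return "not a directory"
}

// assertMkdirError fails the test unless err carries the OS-appropriate message.
func assertMkdirError(t *testing.T, err error) {
	if err == nil || !strings.Contains(err.Error(), expectedMkdirError()) {
		t.Fatalf("unexpected error: %v", err)
	}
}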

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

[ARM64]Can't Mount SMB Volumes, Exec format error

I am finally able to run the entire project on a k3s Raspberry Pi cluster. But now that I have created a StorageClass and a PVC using it, I get the following error when a Pod tries to mount it:

MountVolume.MountDevice failed for volume "pvc-8b5c7a54-6dd8-4e84-9d2d-e604c1a8a6da" : rpc error: code = Internal desc = volume(pvc-8b5c7a54-6dd8-4e84-9d2d-e604c1a8a6da) mount "//kraken.burbs/volumes" on "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8b5c7a54-6dd8-4e84-9d2d-e604c1a8a6da/globalmount" failed with mount failed: fork/exec /bin/mount: exec format error Mounting command: mount Mounting arguments: -t cifs -o dir_mode=0777,file_mode=0777,uid=1000,gid=1000,<masked> //kraken.burbs/volumes /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8b5c7a54-6dd8-4e84-9d2d-e604c1a8a6da/globalmount Output:

The StorageClass, PVC, and PV were all created without any issue, and the PV and PVC both say Bound.

The error above seems to be coming from the smb-csi image in the smb container.

Am I missing something? Do I need to install anything on the hosts of the nodes?

add e2e test against csi-proxy master branch

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

The current Windows e2e test runs against a stable csi-proxy version; we should add an e2e test against the csi-proxy master branch. The difficulty is how to generate a new csi-proxy.tar.gz based on the csi-proxy master branch: https://github.com/kubernetes-csi/csi-proxy

"csiProxyURL": "https://kubernetesartifacts.azureedge.net/csi-proxy/v0.1.0/binaries/csi-proxy.tar.gz",

Moreover, when it's done, we could enable this e2e test on the https://github.com/kubernetes-csi/csi-proxy project too.

Describe alternatives you've considered

This is to prevent bugs like kubernetes-csi/csi-proxy#75.

Additional context

Attach pvc name to pv

Is your feature request related to a problem?/Why is this needed

When you have multiple PVs, it becomes hard to find your data since the PV names are just random UIDs.

root@oas2:~# kubectl -n default get pvc                                                                                                                                                  
NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE                                                  
persistent-storage-statefulset-smb-0   Bound    pvc-9e4a8166-47c6-4996-ba0b-b16508e381dd   10Gi       RWO            smb            26s                                                  

Describe the solution you'd like in detail

I'd like to have the PVC name (and namespace) attached to the PV name, similar to how Rancher's local-path-provisioner does it.

Describe alternatives you've considered

None

Additional context

Example of expected outcome:

root@oas2:~# kubectl -n default get pvc                                                                                                                                                  
NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE                                                  
persistent-storage-statefulset-smb-0   Bound    pvc-9e4a8166-47c6-4996-ba0b-b16508e381dd-default-persistent-storage-statefulset-smb-0  10Gi       RWO            smb            26s                                                  

Update Usage of CSI Provisioner in csi-smb-controller

What happened:
I tried to run the chart on arm64 nodes, where the CSI provisioner image must be v2.0.4 to support multiarch. The current version in the chart defaults to v1.4.0, which does not have a multiarch build and therefore will not run on arm64.

If I use the default repository, mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v2.0.4, then I can't even pull the image.

If I use k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4, then the image pulls and actually runs on arm64.
However, I do get this error:

unknown flag: --enable-leader-election
Usage of /csi-provisioner:
unknown flag: --enable-leader-election

Basically it shows a big list of flags that can be used, and --enable-leader-election is actually not among them, nor is it documented here: https://github.com/kubernetes-csi/external-provisioner

I tracked down the csi-smb-controller file; it seems it needs an update to its args (see the sketch after the values below).

What you expected to happen:
Correctly run the latest and greatest csiProvisioner with up to date flags. This way Arm64 is one step closer to using smb-csi.

How to reproduce it:
Here is my values.yaml to run on some arm nodes:

image:
  smb:
    tag: latest
  csiProvisioner:
    repository: k8s.gcr.io/sig-storage/csi-provisioner
    # tag: v1.6.1
    tag: v2.0.4
  livenessProbe:
    repository: k8s.gcr.io/sig-storage/livenessprobe
    tag: v2.1.0
  nodeDriverRegistrar:
    repository: k8s.gcr.io/sig-storage/csi-node-driver-registrar
    tag: v2.0.1
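
For context, a minimal sketch of the relevant controller container spec, assuming the v2.x flag rename from --enable-leader-election to --leader-election described in the external-provisioner docs:

- name: csi-provisioner
  image: k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4
  args:
    - "--v=2"
    - "--csi-address=$(ADDRESS)"
    - "--leader-election"   # replaces --enable-leader-election from v1.x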

attacher.MountDevice failed to create newCsiDriverClient: driver name smb.csi.k8s.io not found in the list of registered CSI drivers

Hello, I deployed v0.2.0 of this driver on a Kubernetes cluster provisioned by Rancher 2 on top of OpenStack.
As you can see below, the CSI pods are there:

kube-system         csi-smb-controller-864498f867-bldgf                             3/3     Running             17         2d1h
kube-system         csi-smb-controller-864498f867-bqc9v                             3/3     Running             17         2d1h
kube-system         csi-smb-node-9bwrt                                              3/3     Running             0          2d1h
kube-system         csi-smb-node-kb85d                                              3/3     Running             0          2d1h
kube-system         csi-smb-node-mrbfd                                              3/3     Running             0          2d1h
kube-system         csi-smb-node-nhf2d                                              3/3     Running             0          2d1h

But when I try to use this driver to mount a CIFS share from another cluster made in Azure, I get this error:

MountVolume.MountDevice failed for volume "pv-smb" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name smb.csi.k8s.io not found in the list of registered CSI drivers

The PV and PVC look fine:

NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS    REASON   AGE
pv-smb               100Gi      RWX            Retain           Bound    default/pvc-smb                             2d1h

NAMESPACE   NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     pvc-smb   Bound    pv-smb   100Gi      RWX                           2d1h

kubernetes version:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-16T14:19:25Z", GoVersion:"go1.13.13", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

kubectl logs csi-smb-node-9bwrt -c smb -n kube-system shows:

I0718 14:45:30.935428       1 main.go:84] set up prometheus server on [::]:39615
I0718 14:45:30.936100       1 smb.go:58] 
DRIVER INFORMATION:
-------------------
Build Date: "2020-07-12T03:32:01Z"
Compiler: gc
Driver Name: smb.csi.k8s.io
Driver Version: v0.3.0
Git Commit: 84ac39ff780b8db59b29f7f64d6dda2154356c13
Go Version: go1.14.4
Platform: linux/amd64

Streaming logs below:
I0718 14:45:30.936515       1 mount_linux.go:163] Detected OS without systemd
I0718 14:45:30.936887       1 driver.go:93] Enabling controller service capability: CREATE_DELETE_VOLUME
I0718 14:45:30.936959       1 driver.go:112] Enabling volume access mode: SINGLE_NODE_WRITER
I0718 14:45:30.937019       1 driver.go:112] Enabling volume access mode: SINGLE_NODE_READER_ONLY
I0718 14:45:30.937068       1 driver.go:112] Enabling volume access mode: MULTI_NODE_READER_ONLY
I0718 14:45:30.937124       1 driver.go:112] Enabling volume access mode: MULTI_NODE_SINGLE_WRITER
I0718 14:45:30.937177       1 driver.go:112] Enabling volume access mode: MULTI_NODE_MULTI_WRITER
I0718 14:45:30.937228       1 driver.go:103] Enabling node service capability: STAGE_UNSTAGE_VOLUME
I0718 14:45:30.937520       1 server.go:118] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
I0718 14:45:31.095826       1 utils.go:111] GRPC call: /csi.v1.Identity/GetPluginInfo
I0718 14:45:31.095844       1 utils.go:112] GRPC request: 
I0718 14:45:31.095856       1 identityserver.go:32] Using default GetPluginInfo
I0718 14:45:31.095870       1 utils.go:118] GRPC response: name:"smb.csi.k8s.io" vendor_version:"v0.3.0" 
I0718 14:45:31.905532       1 utils.go:111] GRPC call: /csi.v1.Identity/GetPluginInfo
I0718 14:45:31.905550       1 utils.go:112] GRPC request: 
I0718 14:45:31.905557       1 identityserver.go:32] Using default GetPluginInfo
I0718 14:45:31.905561       1 utils.go:118] GRPC response: name:"smb.csi.k8s.io" vendor_version:"v0.3.0" 
I0718 14:46:27.720230       1 utils.go:111] GRPC call: /csi.v1.Identity/Probe
I0718 14:46:27.720336       1 utils.go:112] GRPC request: 
I0718 14:46:27.720365       1 utils.go:118] GRPC response: ready:<value:true > 
I0718 14:46:57.716983       1 utils.go:111] GRPC call: /csi.v1.Identity/Probe
I0718 14:46:57.717034       1 utils.go:112] GRPC request: 
I0718 14:46:57.717047       1 utils.go:118] GRPC response: ready:<value:true > 
I0718 14:47:27.716511       1 utils.go:111] GRPC call: /csi.v1.Identity/Probe
I0718 14:47:27.716580       1 utils.go:112] GRPC request: 

kubectl get CSIDriver -A:

NAME             ATTACHREQUIRED   PODINFOONMOUNT   MODES        AGE
smb.csi.k8s.io   false            true             Persistent   2d2h
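
For reference, the per-node registration state can also be inspected through the CSINode objects; a minimal sketch (the node name is a placeholder):

kubectl get csinode <node-name> -o jsonpath='{.spec.drivers[*].name}'
# if smb.csi.k8s.io is missing here, check the node-driver-registrar container logs on that node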

fully support dynamic provisioning

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

PR #61 added storage class support, but it's just an empty implementation; we need to add full dynamic provisioning support:

  • CreateVolume
    • Should create a new directory under the SMB server
  • DeleteVolume
    • Should delete the current directory under the SMB server

To implement this feature, we need to figure out whether there is an SMB Go client (see the sketch below for an alternative that avoids one).
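
One possible approach needs no SMB Go client at all: mount the share on the controller, then use plain file operations. A minimal Go sketch under that assumption; the helper names are illustrative, not the driver's actual API:

package smb

import (
	"os"
	"path/filepath"
)

// createSubDir creates the per-volume subdirectory once the SMB share has
// been mounted at mountPath; 0777 mirrors the dir_mode used in the examples.
func createSubDir(mountPath, volumeName string) (string, error) {
	subDir := filepath.Join(mountPath, volumeName)
	if err := os.MkdirAll(subDir, 0777); err != nil {
		return "", err
	}
	return subDir, nil
}

// deleteSubDir removes the volume's subdirectory for DeleteVolume.
func deleteSubDir(mountPath, volumeName string) error {
	return os.RemoveAll(filepath.Join(mountPath, volumeName))
}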

Describe alternatives you've considered

Additional context

smbshare is empty after creation on K8s

Hi,

I have tried to load an SMB share from a local disk on Kubernetes. I followed the example and created the following script:

Install driver

kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/deploy/rbac-csi-smb-controller.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/deploy/csi-smb-driver.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/deploy/csi-smb-controller.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/deploy/csi-smb-node.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/deploy/csi-smb-node-windows.yaml

Create secret

kubectl create secret generic smbcreds --from-literal username="172.16.1.20\publishingstudio" --from-literal password="********" --type="microsoft.com/smb" -n beta

The user is on a specific domain called 172.16.1.20.

Create a Samba Server deployment on local disk

kubectl create -f ./smb-server.yaml -n beta

smb-server.yaml is as follows:

---
kind: Service
apiVersion: v1
metadata:
  name: smb-server
  labels:
    app: smb-server
  namespace: beta
spec:
  type: ClusterIP  # use "LoadBalancer" to get a public ip
  selector:
    app: smb-server
  ports:
    - port: 445
      name: smb-server
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: smb-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smb-server
  template:
    metadata:
      name: smb-server
      labels:
        app: smb-server
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: smb-server
          image: andyzhangx/samba:win-fix
          env:
            - name: PERMISSIONS
              value: "0777"
            - name: USERNAME
              valueFrom:
                secretKeyRef:
                  name: smbcreds
                  key: username
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: smbcreds
                  key: password
          args: ["-u", "$(USERNAME);$(PASSWORD)", "-s", "share;/smbshare/;yes;no;no;all;none", "-p"]
          volumeMounts:
            - mountPath: /smbshare
              name: data-volume
          ports:
            - containerPort: 445
      volumes:
        - name: data-volume
          hostPath:
            path: //172.16.1.30/publishingstudio$ # modify this to specify another path to store smb share data
            type: DirectoryOrCreate

The path I try to load is //172.16.1.30/publishingstudio$

The SMB pod is created without any problem, but the folder is empty whereas it should contain data.


How can I fix this?
Is there any log I can look at?

thanks for your help.

Suggest on shipping a custom PodSecurityPolicy for csi-smb-node

Is your feature request related to a problem?/Why is this needed
When using a default restrictive PodSecurityPolicy in the cluster, you need a custom PodSecurityPolicy in order to run the csi-smb-node DaemonSet.

Describe the solution you'd like in detail
An additional manifest containing a PodSecurityPolicy for csi-smb-node.

Describe alternatives you've considered
Creating an own PSP.

Additional context
Here is an example of the PSP I created for my case; I think this could be used as a default.
Even if you opt not to integrate it into the repo, other people can find this PSP in the GitHub issues ;)

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: csi-smb-node
spec:
  allowedHostPaths:
  - pathPrefix: /var/lib/kubelet/plugins/smb.csi.k8s.io
  - pathPrefix: /var/lib/kubelet/
  - pathPrefix: /var/lib/kubelet/plugins_registry/
  fsGroup:
    rule: RunAsAny
  hostNetwork: true
  hostPorts:
  - max: 29643
    min: 29643
  - max: 29645
    min: 29645
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - hostPath
  - secret
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-smb-node
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - csi-smb-node
  verbs:
  - use
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-smb-node
subjects:
- kind: ServiceAccount
  name: csi-smb-node-sa
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: csi-smb-node
  apiGroup: rbac.authorization.k8s.io
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: csi-smb-node-sa
  namespace: kube-system

Change your DaemonSet in order to use the created ServiceAccount.

reenable unit tests on Windows

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

We need to enable the following unit tests on Windows, since the NodeXXX funcs still run on Windows; the test cases should be fixed (a sketch of the skip helper follows the grep output):

~/go/src/github.com/kubernetes-csi/csi-driver-smb# grep skipIfTestingOnWindows ./* -R
./pkg/smb/nodeserver_test.go:   skipIfTestingOnWindows(t)
./pkg/smb/nodeserver_test.go:   skipIfTestingOnWindows(t)
./pkg/smb/nodeserver_test.go:   skipIfTestingOnWindows(t)
./pkg/smb/nodeserver_test.go:   skipIfTestingOnWindows(t)
./pkg/smb/smb_test.go:  skipIfTestingOnWindows(t)
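
For context, the helper being grepped for presumably looks something like this minimal sketch (a guess at its shape, not the repo's actual code); the goal of this issue is to remove these calls once the per-OS failures are fixed:

package smb

import (
	"runtime"
	"testing"
)

// skipIfTestingOnWindows skips the calling test when run on a Windows host.
func skipIfTestingOnWindows(t *testing.T) {
	if runtime.GOOS == "windows" {
		t.Skip("skipping test on Windows")
	}
}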

Describe alternatives you've considered

Additional context

Possible missing public images for "gke.gcr.io/csi-node-driver-registrar:win-v1"

What happened:

I'm trying to run 0.4 with the csi-proxy YAMLs, but I'm running into some issues.

It seems like getting up and running with csi-proxy doesn't work for me. Could be a n00b error. But I thought it was odd that it was trying to dial unix C:// on a Windows node.

without CSI proxy running

Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    47s                default-scheduler  Successfully assigned default/iis-site-windows-55566495cc-78sws to win-h0c364gqvjh
  Warning  FailedMount  31s (x4 over 46s)  kubelet            MountVolume.MountDevice failed for volume "pvc-406947d6-29a0-4388-808f-4d2bcedccba6" : rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix C:\\\\var\\\\lib\\\\kubelet\\\\plugins\\\\smb.csi.k8s.io\\\\csi.sock: connect: No connection could be made because the target machine actively refused it."
  Warning  FailedMount  15s (x3 over 47s)  kubelet            MountVolume.MountDevice failed for volume "pvc-406947d6-29a0-4388-808f-4d2bcedccba6" : rpc error: code = Unavailable desc = transport is closing

This eventually turns into:

  Warning  FailedMount  7s    kubelet            MountVolume.MountDevice failed for volume "pvc-406947d6-29a0-4388-808f-4d2bcedccba6" : rpc error: code = Unavailable desc = transport is closing

Which I suppose is related to the fact that my csi-proxy isn't happy.

What you expected to happen:

The CSI driver wouldn't try to mount a unix:// socket in a Windows container...

How to reproduce it:

I am running csi-driver-smb with or without the csi-proxy stuff. Either way it seems like mounting of volumes fails (for me at least). There may be some defaults I didn't set properly.

Anything else we need to know?:

Environment: windows

  • CSI Driver version: 0.4.0
  • Kubernetes version (use kubectl version): 1.19
  • OS (e.g. from /etc/os-release): windows

failed to unmount due to "host is down"

What happened:
This issue happened on k8s v1.15.11; need to check whether the latest k8s version has fixed it:

Jul 11 07:53:08 aks-agentpool-60632172-vmss000007 kubelet[4580]: I0711 07:53:08.728754    4580 controlbuf.go:382] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
Jul 11 07:53:18 aks-agentpool-60632172-vmss000007 kubelet[4580]: E0711 07:53:18.968266    4580 csi_mounter.go:428] kubernetes.io/csi: isDirMounted IsLikelyNotMountPoint test failed for dir [/var/lib/kubelet/pods/aebb8a69-b5fe-4b36-b8cc-59800e5e6fa6/volumes/kubernetes.io~csi/pvc-255b2e00-87d3-4d33-b02d-6cdc2d6394b1/mount]
Jul 11 07:53:18 aks-agentpool-60632172-vmss000007 kubelet[4580]: E0711 07:53:18.968312    4580 csi_mounter.go:378] kubernetes.io/csi: mounter.TearDownAt failed to clean mount dir [/var/lib/kubelet/pods/aebb8a69-b5fe-4b36-b8cc-59800e5e6fa6/volumes/kubernetes.io~csi/pvc-255b2e00-87d3-4d33-b02d-6cdc2d6394b1/mount]: stat /var/lib/kubelet/pods/aebb8a69-b5fe-4b36-b8cc-59800e5e6fa6/volumes/kubernetes.io~csi/pvc-255b2e00-87d3-4d33-b02d-6cdc2d6394b1/mount: host is down
Jul 11 07:53:18 aks-agentpool-60632172-vmss000007 kubelet[4580]: E0711 07:53:18.968397    4580 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/csi/smb.csi.k8s.io^pvc-255b2e00-87d3-4d33-b02d-6cdc2d6394b1\" (\"aebb8a69-b5fe-4b36-b8cc-59800e5e6fa6\")" failed. No retries permitted until 2020-07-11 07:53:19.968350377 +0000 UTC m=+174446.119043970 (durationBeforeRetry 1s). Error: "UnmountVolume.TearDown failed for volume \"smb\" (UniqueName: \"kubernetes.io/csi/smb.csi.k8s.io^pvc-255b2e00-87d3-4d33-b02d-6cdc2d6394b1\") pod \"aebb8a69-b5fe-4b36-b8cc-59800e5e6fa6\" (UID: \"aebb8a69-b5fe-4b36-b8cc-59800e5e6fa6\") : stat /var/lib/kubelet/pods/aebb8a69-b5fe-4b36-b8cc-59800e5e6fa6/volumes/kubernetes.io~csi/pvc-255b2e00-87d3-4d33-b02d-6cdc2d6394b1/mount: host is down"
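One plausible mitigation (a sketch, not the driver's confirmed behavior): treat stat errors such as "host is down" as a corrupted mount and unmount directly, instead of letting the stat-based checks fail forever. k8s.io/mount-utils exposes IsCorruptedMnt for this class of errors, though whether a given version treats EHOSTDOWN as corrupted needs checking:

import (
	"os"

	mount "k8s.io/mount-utils"
)

// forceCleanup unmounts target even when the SMB server is unreachable:
// stat errors like "host is down" (EHOSTDOWN) are treated as a corrupted
// mount, so the stat-based mount-point checks are skipped.
func forceCleanup(m mount.Interface, target string) error {
	if _, err := os.Stat(target); err != nil && mount.IsCorruptedMnt(err) {
		return m.Unmount(target)
	}
	// Normal, safe cleanup path otherwise.
	return mount.CleanupMountPoint(target, m, true /* extensiveMountPointCheck */)
}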

What you expected to happen:

How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version): 1.15.11
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

fake CSIProxyMounter to enable unit test

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

Need to fake CSIProxyMounter, the same way as fakeMounter, and enable a few unit tests on Windows; the current unit tests fail with return fmt.Errorf("could not cast to csi proxy class")

C:\Users\xiazhang\go\src\github.com\kubernetes-csi\csi-driver-smb\pkg\smb>git diff
diff --git a/pkg/smb/nodeserver_test.go b/pkg/smb/nodeserver_test.go
index ecaeab12..f9a9f386 100644
--- a/pkg/smb/nodeserver_test.go
+++ b/pkg/smb/nodeserver_test.go
@@ -22,9 +22,11 @@ import (
        "fmt"
        "os"
        "reflect"
+       "runtime"
        "syscall"
        "testing"

+       "github.com/kubernetes-csi/csi-driver-smb/pkg/mounter"
        "github.com/kubernetes-csi/csi-driver-smb/test/utils/testutil"

        "github.com/container-storage-interface/spec/lib/go/csi"
@@ -198,7 +200,7 @@ func TestNodeExpandVolume(t *testing.T) {
 }

 func TestNodePublishVolume(t *testing.T) {
-       skipIfTestingOnWindows(t)
+       //skipIfTestingOnWindows(t)
        volumeCap := csi.VolumeCapability_AccessMode{Mode: csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER}
        errorMountSource := "./error_mount_source"
        alreadyMountedTarget := "./false_is_likely_exist_target"
@@ -284,9 +286,15 @@ func TestNodePublishVolume(t *testing.T) {
        // Setup
        _ = makeDir(alreadyMountedTarget)
        d := NewFakeDriver()
-       fakeMounter := &fakeMounter{}
-       d.mounter = &mount.SafeFormatAndMount{
-               Interface: fakeMounter,
+       if runtime.GOOS == "windows" {
+               csiProxyMounter, _ := mounter.NewCSIProxyMounter()
+               d.mounter = &mount.SafeFormatAndMount{
+                       Interface: csiProxyMounter,
+               }
+       } else {
+               d.mounter = &mount.SafeFormatAndMount{
+                       Interface: &fakeMounter{},
+               }
        }

        for _, test := range tests {

Without fake, there would be panic:

C:\Users\xiazhang\go\src\github.com\kubernetes-csi\csi-driver-smb\pkg\smb>go test -run TestNodePublishVolume
--- FAIL: TestNodePublishVolume (0.00s)
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x10 pc=0x8ae83c]

goroutine 22 [running]:
testing.tRunner.func1(0xc000150100)
        c:/go/src/testing/testing.go:874 +0x3aa
panic(0x996b00, 0xeebb70)
        c:/go/src/runtime/panic.go:679 +0x1c0
github.com/kubernetes-csi/csi-driver-smb/pkg/mounter.(*CSIProxyMounter).ExistsPath(0xc000067e10, 0xa51ca5, 0x8, 0x0, 0x38, 0x8)
        C:/Users/xiazhang/go/src/github.com/kubernetes-csi/csi-driver-smb/pkg/mounter/safe_mounter_windows.go:210 +0x17c
github.com/kubernetes-csi/csi-driver-smb/pkg/mounter.(*CSIProxyMounter).IsLikelyNotMountPoint(0xc000067e10, 0xa51ca5, 0x8, 0xc000093420, 0x40cdf0, 0xc00006d640)
        C:/Users/xiazhang/go/src/github.com/kubernetes-csi/csi-driver-smb/pkg/mounter/safe_mounter_windows.go:148 +0xdf
github.com/kubernetes-csi/csi-driver-smb/pkg/smb.(*Driver).ensureMountPoint(0xc000093858, 0xa51ca5, 0x8, 0x1, 0x2, 0xc00006d640)
        C:/Users/xiazhang/go/src/github.com/kubernetes-csi/csi-driver-smb/pkg/smb/nodeserver.go:274 +0x70
github.com/kubernetes-csi/csi-driver-smb/pkg/smb.(*Driver).NodePublishVolume(0xc000093858, 0xb1fe80, 0xc0000720b0, 0xc0000938e8, 0x1, 0xb135c0, 0xc0000c2d70)
        C:/Users/xiazhang/go/src/github.com/kubernetes-csi/csi-driver-smb/pkg/smb/nodeserver.go:71 +0x1f4
github.com/kubernetes-csi/csi-driver-smb/pkg/smb.TestNodePublishVolume(0xc000150100)
        C:/Users/xiazhang/go/src/github.com/kubernetes-csi/csi-driver-smb/pkg/smb/nodeserver_test.go:301 +0x7f1
testing.tRunner(0xc000150100, 0xa7d240)
        c:/go/src/testing/testing.go:909 +0xd0
created by testing.(*T).Run
        c:/go/src/testing/testing.go:960 +0x357
exit status 2
FAIL    github.com/kubernetes-csi/csi-driver-smb/pkg/smb        1.950s
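A fake could be as small as embedding the upstream FakeMounter (a sketch; fakeCSIProxyMounter is a hypothetical name and the import path depends on the vendored mount library). One caveat: the "could not cast to csi proxy class" error suggests the driver type-asserts the concrete *CSIProxyMounter, so the fake may also need its own code path in the driver rather than merely satisfying mount.Interface:

import mount "k8s.io/mount-utils"

// fakeCSIProxyMounter satisfies mount.Interface without talking to
// csi-proxy.exe, so the NodeXXX unit tests can run on a plain Windows box.
type fakeCSIProxyMounter struct {
	*mount.FakeMounter
}

func newFakeCSIProxyMounter() *fakeCSIProxyMounter {
	return &fakeCSIProxyMounter{FakeMounter: mount.NewFakeMounter(nil)}
}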

Another way is to use the real CSIProxyMounter and run csi-proxy.exe on Windows before running the unit tests; follow the guide here:

Or run it as a service using nssm:

Describe alternatives you've considered

Additional context

Cannot access to mounted volume from container (permssion denied) (selinux enabled)

Hi everyone,
I'm trying to consume CIFS shares from an OpenShift (OKD) 4 cluster using this CSI driver.

What happened:
When I try to access a mounted CIFS folder from a container I get a permission denied error. Here is the ls output from a centos:8 container with the CIFS folder mounted under /mnt/smb:

sh-4.4$ ls -l /mnt/
ls: cannot access '/mnt/smb': Permission denied
total 0
d????????? ? ? ? ?            ? smb

However if I log into the physical machine, I can see the shared folder correctly mounted under /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cifs/globalmount and I can access it as I would expect.

I suspect that the error might be due to the SELinux label applied to the globalmount folder (cifs_t instead of container_file_t):

[root@master1 pvc-cifs]# ls -alZ
total 16
drwxr-x---. 3 root root system_u:object_r:container_file_t:s0    46 Oct  5 15:32 .
drwxr-x---. 4 root root system_u:object_r:container_file_t:s0    80 Oct  5 15:32 ..
drwxrwxrwx. 2 root root system_u:object_r:cifs_t:s0           12288 Sep 18 20:54 globalmount
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0    68 Oct  5 15:32 vol_data.json

Here is the (correct?) SELinux labels of a mounted Ceph RBD volume with the ROOK CSI driver:

[root@master1 pvc-14f0f04d-85fd-452b-aac4-848d3c2bd3c3]# ls -alZ
total 4
drwxr-x---. 3 root root system_u:object_r:container_file_t:s0    46 Oct  5 15:32 .
drwxr-x---. 4 root root system_u:object_r:container_file_t:s0    80 Oct  5 15:32 ..
drwxr-x---. 3 root root system_u:object_r:container_file_t:s0 110 Oct  2 13:03 globalmount
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0 135 Oct  2 13:03 vol_data.json

As I used to do when manually mounting CIFS shares in containers, I also tried to add context="system_u:object_r:container_file_t:s0" as a mountOption of the PV, but the result is the same.

Here is the mount from /proc/mounts (it is the same with or without "context="):

[root@master1 ~]# cat /proc/mounts
...
//myhost.local/someshared /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cifs/globalmount cifs rw,relatime,vers=3.0,cache=strict,username=xxxxxx,domain=xxxx.local,uid=0,noforceuid,gid=0,noforcegid,addr=10.150.120.104,file_mode=0666,dir_mode=0777,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1 0 0
//myhost.local/someshared /var/lib/kubelet/pods/4a688e32-70ff-4f48-8318-4b9655594163/volumes/kubernetes.io~csi/pvc-cifs/mount cifs rw,relatime,vers=3.0,cache=strict,username=xxxxxx,domain=xxxx.local,uid=0,noforceuid,gid=0,noforcegid,addr=10.150.120.104,file_mode=0666,dir_mode=0777,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1 0 0
...

Am I missing something? Any help is appreciated.

Thanks,
Flavio

What you expected to happen:
Correctly access the shared folder.

How to reproduce it:
Using my configuration it always happen:

# cifs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-cifs
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: smb.csi.k8s.io
    volumeAttributes:
      source: //myhost.local/someshared
    volumeHandle: pv-cifs-someshared
    nodeStageSecretRef:
      name: sec-cifs
      namespace: kube-system
  mountOptions:
  - dir_mode=0777
  - file_mode=0666
  - vers=3.0
# - context="system_u:object_r:container_file_t:s0"
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
# cifs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-cifs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: pv-cifs
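One thing that may be worth ruling out (an assumption on my part, not a verified fix): Kubernetes passes mountOptions to mount(8) verbatim, so the double quotes around the context value, which a shell would normally strip, could end up inside the SELinux label itself. A variant without the inner quotes:

  mountOptions:
  - dir_mode=0777
  - file_mode=0666
  - vers=3.0
  - context=system_u:object_r:container_file_t:s0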

Anything else we need to know?:
I installed the CSI driver with the following commands
oc create -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/deploy/csi-smb-driver.yaml
oc create -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/deploy/csi-smb-node.yaml

Environment:

  • CSI Driver version: latest
  • Kubernetes version (use kubectl version): v1.18.2-0-g52c56ce
  • OpenShift version (use oc version): 4.5.0-0.okd-2020-09-18-202631
  • OS (e.g. from /etc/os-release): Fedora CoreOS 32.20200629.3.0
  • Kernel (e.g. uname -a): 5.6.19-300.fc32.x86_64
  • SELinux status (/etc/selinux/config): SELINUX=enforcing SELINUXTYPE=targeted

Move integration test in travis to test-infra

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

set up a few CI tests in test-infra, e.g.

  • pull-smb-csi-driver-unit
  • pull-smb-csi-driver-verify
  • pull-smb-csi-driver-windows-build
  • pull-smb-csi-driver-integration
  • pull-smb-csi-driver-sanity

could put into
https://github.com/kubernetes/test-infra/tree/master/config/jobs/kubernetes-csi

and refer to https://github.com/kubernetes/test-infra/tree/master/config/jobs/kubernetes-sigs/azurefile-csi-driver

Describe alternatives you've considered

Additional context

Domain is not properly parametrized

What happened:
Mount options are not correctly set up when a domain is added in the secret, causing a mount error; the container fails to start. Also, credentials are exposed in the Kubernetes event log.

What you expected to happen:
Volume mounting and container starting.

How to reproduce it:
add domain to the secrets according to https://github.com/kubernetes-csi/csi-driver-smb/blob/v0.1.0/deploy/example/e2e_usage.md:
kubectl create secret generic smbcreds --from-literal username=testuser --from-literal password="testpassword" --from-literal domain=testdomain

On pod creation, pod is stuck in Init:0/1 state.
Output of kubectl describe

Warning  FailedMount  1s    kubelet, aks-nodepool-XXXXXX  MountVolume.MountDevice failed for volume "pv-smb" : rpc error: code = Internal desc = volume(unique-volumeid) mount "//XXXXXX/test" on "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pv-smb/globalmount" failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=0777,username=testuser,password=testpassword,vers=3.0,testdomain //XXXXXX/test /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pv-smb/globalmount
Output: mount error(22): Invalid argument
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

Note how testdomain is just appended at the end without the proper domain= parameter.

Anything else we need to know?:

Possible fix:

I suppose the issue is here:

if domain != "" {

and it's missing the string formatting. The solution would be something like:

	if domain != "" {
		options := []string{fmt.Sprintf("domain=%s", domain)}
		mountOptions = append(mountOptions, options...)
	}
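Equivalently, skipping the intermediate slice:

	if domain != "" {
		mountOptions = append(mountOptions, fmt.Sprintf("domain=%s", domain))
	}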

I am not aware whether there are other implications, other modifications, or more testing needed; otherwise I would have created a PR.

Also note that the same setup works properly with a local machine user instead of a domain user, thus omitting the domain parameter.

Environment:

  • CSI Driver version: v1.0.1 release and master @ 1afa2fa
  • Kubernetes version (use kubectl version): 1.15
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

EDIT:

Using Linux container.

mask password in mountOptions

What happened:
Currently the CSI driver node logs print out mountOptions, which may contain the password, e.g.
https://github.com/kubernetes-csi/csi-driver-smb/blob/master/deploy/example/logs/csi-smb-node.log#L40

mountOptions([dir_mode=0777 file_mode=0777 username=f8471372a68594910913ed4,password=...YGisUKDE3axDL6x1KyEj9j9PrUp0Yd7U/WbZ0Ip1uluWQr8FSkFgQhIO6fhaCGKd+aJSCrnTgR3m99OMQ== vers=3.0])
I0510 03:32:39.318471       1 nodeserver.go:193] volume(arbitrary-volumeid) mount "//f8471372a68594910913ed4.file.core.windows.net/kubernetes-dynamic-pvc-687cfca7-4880-4dbc-9daa-be4bc39b8eaa" on "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pv-smb/globalmount" succeeded
I0510 03:32:39.318497       1 utils.go:118] GRPC response: 
I0510 03:32:39.324390       1 utils.go:111] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0510 03:32:39.324404       1 utils.go:112] GRPC request: 

Related code is here:

klog.V(2).Infof("targetPath(%v) volumeID(%v) context(%v) mountflags(%v) mountOptions(%v)",
targetPath, volumeID, context, mountFlags, mountOptions)

What you expected to happen:
Use a similar approach here to mask only the password via a common function, e.g.

  • MaskFieldValue(mountOptions, field string) string; the output could be [dir_mode=0777 file_mode=0777 username=f8471372a68594910913ed4,password=*** vers=3.0]

var reqSecretsRegex, _ = regexp.Compile(`map\[password:.*? `)
s := fmt.Sprintf("NodeStageVolume called with request %v", *req)
klog.V(5).Info(reqSecretsRegex.ReplaceAllString(s, "map[password:**** "))
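A minimal sketch of such a helper (MaskFieldValue is a hypothetical name; it assumes key=value options delimited by commas, spaces, or a closing bracket, as in the log line above):

import "regexp"

// MaskFieldValue replaces the value of the named field (e.g. "password")
// in a key=value mount-option string with "****".
// MaskFieldValue(s, "password") turns "username=u,password=secret vers=3.0"
// into "username=u,password=**** vers=3.0".
func MaskFieldValue(mountOptions, field string) string {
	re := regexp.MustCompile(regexp.QuoteMeta(field) + `=[^, \]]*`)
	return re.ReplaceAllString(mountOptions, field+"=****")
}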

How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

domain setting issue on Windows

What happened:
When using the CSI SMB driver in combination with csi-proxy.exe on Windows nodes running Windows containers, I noticed that if I create a Kubernetes secret that included a "domain" key, whatever value I placed into that key was getting overridden with the word "AZURE". I don't know definitively whether it was in the csi-proxy or the CSI SMB driver. Either way, this behavior breaks the ability to use on-prem Persistent Volumes against Windows file shares managed by a local AD domain.

What made this problem harder to figure out was that everything was working as expected when the driver was used on a Linux worker node.

I had to recompile the csi-proxy with additional logging of the username and domain fields in order to figure out this bug, so the root cause was not immediately obvious.

If there is a configuration setting that overrides the credential secret with a hard-coded value for the domain, I could not find it, and it should be part of the documentation.

The workaround right now is to not specify the "domain" key at all when defining the Kubernetes secret for the credential, and instead append the domain to the username in an email format (e.g. "[email protected]").
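For example, mirroring the secret-creation command from the e2e usage doc (values are placeholders):

kubectl create secret generic smbcreds \
  --from-literal username="[email protected]" \
  --from-literal password="testpassword"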

What you expected to happen:
I expected that the domain key in the credential secret would be honored and would result in a username that included the proper domain prefix so that it could perform proper NTLM authentication against a local AD directory.

How to reproduce it:
Followed the Kubernetes documentation for installing the CSI SMB driver, along with installing the latest csi-proxy.exe on the Windows nodes. Created a Kubernetes secret that defined username, password, and domain as keys.

Anything else we need to know?:
My Kubernetes ecosystem is on-premise. All nodes are VMs. The cluster has both linux and windows nodes. The windows nodes leverage gMSA and are setup to run windows containers.

Environment:

  • CSI Driver version: 0.4.0
  • Kubernetes version (use kubectl version): 1.19.3
  • OS (e.g. from /etc/os-release): Ubuntu 20, Windows 2019
  • Kernel (e.g. uname -a): Linux master1 5.4.0-51-generic #56-Ubuntu SMP Mon Oct 5 14:28:49 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • Others:

samba server not support domain name

When I use
source: \\sfs-nas1.cn-south-1c.xxx.com\share-2d58d4ca
it does not work, but when I use
source: \\10.12.13.5\share-2d58d4ca
it works.

Thanks for your help!

password should not be exposed in logs

What happened:
password should not be exposed in logs:

 nodeserver.go:120] NodeStageVolume called with request {arbitrary-volumeid map[] /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pv-smb/globalmount mount:<mount_flags:"dir_mode=0777" mount_flags:"file_mode=0777" mount_flags:"vers=3.0" > access_mode:<mode:MULTI_NODE_MULTI_WRITER >  map[password:..dYGisUKDE3axDL6x1KyEj9j9PrUp0Yd7U/WbZ0Ip1uluWQr8FSkFgQhIO6fhaCGKd+aJSCrnTgR3m99OMQ== username:f8471372a68594910913ed4] map[source://f8471372a68594910913ed4.file.core.windows.net/kubernetes-dynamic-pvc-687cfca7-4880-4dbc-9daa-be4bc39b8eaa] {} [] 0}
I0510 03:32:33.846897       1 nodeserver.go:169] targetPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pv-smb/globalmount) volumeID(arbitrary-volumeid) context(map[source://xxx.file.core.windows.net/kubernetes-dynamic-pvc-687cfca7-4880-4dbc-9daa-be4bc39b8eaa]) mountflags([dir_mode=0777 file_mode=0777 vers=3.0]) mountOptions([dir_mode=0777 file_mode=0777 username=f8471372a68594910913ed4,password=...YGisUKDE3axDL6x1KyEj9j9PrUp0Yd7U/WbZ0Ip1uluWQr8FSkFgQhIO6fhaCGKd+aJSCrnTgR3m99OMQ== vers=3.0])

https://github.com/csi-driver/csi-driver-smb/blob/bcd8d774b631d8a6281ed7184a357183c84fff29/deploy/example/logs/csi-smb-node.log#L39

similar to fix: kubernetes-sigs/azurefile-csi-driver#64

related code:
https://github.com/csi-driver/csi-driver-smb/blob/bcd8d774b631d8a6281ed7184a357183c84fff29/pkg/csi-common/utils.go#L112
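For what it's worth, the common pattern in CSI drivers (a suggestion; not necessarily what this repo ended up adopting) is csi-lib-utils' protosanitizer, which masks every field the CSI spec tags as csi_secret:

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
	"github.com/kubernetes-csi/csi-lib-utils/protosanitizer"
	"k8s.io/klog"
)

func logNodeStageRequest(req *csi.NodeStageVolumeRequest) {
	// StripSecrets returns a fmt.Stringer that renders the request with all
	// csi_secret-tagged fields (e.g. the NodeStageVolume secrets) masked.
	klog.V(5).Infof("NodeStageVolume called with request %s", protosanitizer.StripSecrets(req))
}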

What you expected to happen:

How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

use new test-infra config

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

kubernetes/test-infra#18223 (comment)

We actually have a script that generates pull and CI jobs for all the kubernetes-csi repos.

It involves ".prow.sh" with "UNIT", which already does build, verify, and unit tests. Do you want to use that instead?

test-infra/config/jobs/kubernetes-csi/gen-jobs.sh, line 390 at c212d67:

     - ./.prow.sh

Unfortunately the CI part is a bit hardcoded to the hostpath driver, so we would need to do some additional work to make it work for other drivers too

Describe alternatives you've considered

Additional context

cc @Sakuralbj

Support For ARM?

Is your feature request related to a problem?/Why is this needed
Can't install this on ARM nodes. The image repo does not seem to have anything ARM architecture related.

Describe the solution you'd like in detail
Compile for ARM. Push ARM image to public registry.

Describe alternatives you've considered
Compiling and building myself... probably not gonna work for me.
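For reference, the Go side of this is mostly mechanical; a sketch of a multi-arch build with docker buildx (assumes the repo's Dockerfile builds cleanly for ARM, and <your-registry> is a placeholder):

docker buildx build --platform linux/arm64,linux/arm/v7 \
  -t <your-registry>/smb-csi:canary --push .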

Additional context
I am running a k3s cluster on a couple of Raspberry Pis. My router has really good Samba support with a USB 3.0 drive. I want to use the SMB server on the router to serve volumes for my cluster.

Error creating PVC - Permission denied

What happened:
I get a permission denied error when the plugin tries to create the PV.

What you expected to happen:
The PV been created as expected.

How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version: v0.3.0
  • Kubernetes version (use kubectl version): v1.17.1+6af3663
  • OS (e.g. from /etc/os-release): "Red Hat Enterprise Linux CoreOS 44.82.202008250531-0 (Ootpa)
  • Kernel (e.g. uname -a): 4.18.0-193.14.3.el8_2.x86_64
  • Error logs :
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I0930 18:53:26.822569       1 utils.go:111] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0930 18:53:26.822607       1 utils.go:112] GRPC request:
I0930 18:53:26.822617       1 nodeserver.go:257] NodeGetCapabilities called with request {{} [] 0}
I0930 18:53:26.822633       1 utils.go:118] GRPC response: capabilities:<rpc:<type:STAGE_UNSTAGE_VOLUME > >
I0930 18:53:26.830754       1 utils.go:111] GRPC call: /csi.v1.Node/NodeStageVolume
I0930 18:53:26.830894       1 utils.go:112] GRPC request: volume_id:"pvc-ded15e21-d93a-4a55-b575-9faca51c3db2" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ded15e21-d93a-4a55-b575-9faca51c3db2/globalmount" volume_capability:<mount:<fs_type:"ext4" mount_flags:"dir_mode=0777" mount_flags:"file_mode=0777" > access_mode:<mode:MULTI_NODE_MULTI_WRITER > > secrets:<key:"password" value:"****" > secrets:<key:"username" value:"****" > volume_context:<key:"createSubDir" value:"true" > volume_context:<key:"source" value:"//SERVER/PATH" > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1601473105541-8081-smb.csi.k8s.io" >
I0930 18:53:26.831075       1 nodeserver.go:146] NodeStageVolume called with request {pvc-ded15e21-d93a-4a55-b575-9faca51c3db2 map[] /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ded15e21-d93a-4a55-b575-9faca51c3db2/globalmount mount:<fs_type:"ext4" mount_flags:"dir_mode=0777" mount_flags:"file_mode=0777" mount_flags:"uid=0" mount_flags:"gid=0" > access_mode:<mode:MULTI_NODE_MULTI_WRITER >  map[password:**** username:geodenasaccesrec] map[createSubDir:true source://SERVER/PATH storage.kubernetes.io/csiProvisionerIdentity:1601473105541-8081-smb.csi.k8s.io] {} [] 0}
I0930 18:53:26.831137       1 nodeserver.go:204] targetPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ded15e21-d93a-4a55-b575-9faca51c3db2/globalmount) volumeID(pvc-ded15e21-d93a-4a55-b575-9faca51c3db2) context(map[createSubDir:true source://SERVER/PATH storage.kubernetes.io/csiProvisionerIdentity:1601473105541-8081-smb.csi.k8s.io]) mountflags([dir_mode=0777 file_mode=0777 uid=0 gid=0]) mountOptions([dir_mode=0777 file_mode=0777 uid=0 gid=0])
I0930 18:53:26.831217       1 mount_linux.go:146] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,<masked> //hSERVER/PATH /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ded15e21-d93a-4a55-b575-9faca51c3db2/globalmount)
E0930 18:53:26.860306       1 mount_linux.go:150] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,<masked> //hSERVER/PATH /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ded15e21-d93a-4a55-b575-9faca51c3db2/globalmount
Output: mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

E0930 18:53:26.860359       1 utils.go:116] GRPC error: rpc error: code = Internal desc = volume(pvc-ded15e21-d93a-4a55-b575-9faca51c3db2) mount "//SERVER/PATH" on "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ded15e21-d93a-4a55-b575-9faca51c3db2/globalmount" failed with mount failed: exit status 32

Deployed files :

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-smb
provisioner: smb.csi.k8s.io
parameters:
  source: "//SERVER/PATH"
  csi.storage.k8s.io/node-stage-secret-name: "cifs-csi-credentials"
  csi.storage.k8s.io/node-stage-secret-namespace: "cifs-csi-demo"
  createSubDir: "true"
reclaimPolicy: Retain 
volumeBindingMode: Immediate
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - vers=3.0
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-smb
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: "sc-smb"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: deployment-smb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      name: deployment-smb
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: deployment-smb
          image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine
          command:
            - "/bin/sh"
            - "-c"
            - ls -all /mnt/smb
          volumeMounts:
            - name: smb
              mountPath: "/mnt/smb"
              readOnly: false
      volumes:
        - name: smb
          persistentVolumeClaim:
            claimName: pvc-smb
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
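mount error(13) usually means the server rejected the credentials (or the share ACLs did), so one way to take the driver out of the equation is to try the same mount by hand on a node (a sketch; values are placeholders):

# run on the node, with the same credentials stored in the cifs-csi-credentials secret
sudo mount -t cifs //SERVER/PATH /mnt/test \
  -o username=<user>,password=<pass>,vers=3.0,dir_mode=0777,file_mode=0777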

Enable golint check in travis

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

Enabling the golint check in Travis failed:
#5

Describe alternatives you've considered

Additional context
