
csi-driver-nvmf's Introduction

CSI NVMf driver

Overview

This is the repository for the NVMe-oF CSI Driver. Currently it implements the bare minimum of the CSI spec.

Requirements

The CSI NVMf driver requires the initiator and target kernels to be Linux 5.0 or newer. Before using this CSI driver, you should create an NVMf remote disk on the target side and record its traddr/trport/trtype/nqn/deviceuuid.
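
For example, you can confirm that a node's kernel meets the 5.0 requirement with:

$ uname -r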

Modprobe the NVMf module on the K8s node

# when using TCP as transport
$ modprobe nvme-tcp
# when using RDMA as transport
$ modprobe nvme-rdma
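
To confirm the transport module is loaded (a quick sanity check, not part of the original guide):

$ lsmod | grep nvme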

Test NVMf driver using csc

Get the csc tool from https://github.com/rexray/gocsi/tree/master/csc

$ go get github.com/rexray/gocsi/csc
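
Note: on Go 1.17 and newer, go get no longer builds or installs binaries; assuming the csc module still installs this way, the equivalent is:

$ go install github.com/rexray/gocsi/csc@latest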

1. Compile the NVMf driver

$ make

2.1 Start NVMf driver

$ ./output/nvmfplugin --endpoint tcp://127.0.0.1:10000 --nodeid CSINode

2.2 Prepare the NVMf backend target (Kernel or SPDK)

Kernel

Follow the guide to set up a kernel target and deploy a kernel NVMf storage service on localhost.
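
For reference, below is a condensed sketch of the usual nvmet configfs flow for a TCP target; the linked guide is authoritative, and the NQN, backing device, address, and port here are placeholders:

# load the target-side modules
$ sudo modprobe nvmet
$ sudo modprobe nvmet-tcp
# create a subsystem (placeholder NQN) and allow any host to connect
$ export SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2022-08.org.test-nvmf.example
$ sudo mkdir -p $SUBSYS
$ echo 1 | sudo tee $SUBSYS/attr_allow_any_host
# add namespace 1 backed by a block device (placeholder /dev/vdb)
$ sudo mkdir -p $SUBSYS/namespaces/1
$ echo /dev/vdb | sudo tee $SUBSYS/namespaces/1/device_path
$ echo 1 | sudo tee $SUBSYS/namespaces/1/enable
# record this value; it is the deviceuuid the driver expects
$ cat $SUBSYS/namespaces/1/device_uuid
# expose the subsystem on a TCP port (placeholder address and port)
$ export PORT=/sys/kernel/config/nvmet/ports/1
$ sudo mkdir -p $PORT
$ echo tcp | sudo tee $PORT/addr_trtype
$ echo ipv4 | sudo tee $PORT/addr_adrfam
$ echo 192.168.122.18 | sudo tee $PORT/addr_traddr
$ echo 49153 | sudo tee $PORT/addr_trsvcid
$ sudo ln -s $SUBSYS $PORT/subsystems/nqn.2022-08.org.test-nvmf.example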

SPDK

Follow the guide to set up an SPDK target and deploy an SPDK NVMf storage service on localhost.

You can get the information needed for step 3.2 through SPDK's scripts/rpc.py nvmf_get_subsystems.
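
For example, from an SPDK checkout with the target already configured, the following prints each subsystem's NQN, listen address/port, transport type, and namespaces:

$ sudo ./scripts/rpc.py nvmf_get_subsystems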

3.1 Get plugin info

$ csc identity plugin-info --endpoint tcp://127.0.0.1:10000
"csi.nvmf.com" "v1.0.0"

3.2 NodePublish a volume

The information here is what you used in step 2.2

export TargetTrAddr="NVMf Target Server IP (Ex: 192.168.122.18)"
export TargetTrPort="NVMf Target Server IP Port (Ex: 49153)"
export TargetTrType="NVMf Target Type (Ex: tcp | rdma)"
export DeviceUUID="NVMf Target Device UUID (Ex: 58668891-c3e4-45d0-b90e-824525c16080)"
export NQN="NVMf Target NQN"
csc node publish --endpoint tcp://127.0.0.1:10000 --target-path /mnt/nvmf --vol-context targetTrAddr=$TargetTrAddr \
                   --vol-context targetTrPort=$TargetTrPort --vol-context targetTrType=$TargetTrType \
                   --vol-context deviceUUID=$DeviceUUID --vol-context nqn=$NQN nvmftestvol
nvmftestvol

You should now find a new disk mounted at /mnt/nvmf.
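
To verify, for example (nvme list requires the nvme-cli package):

$ nvme list
$ findmnt /mnt/nvmf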

3.3 NodeUnpublish a volume

$ csc node unpublish --endpoint tcp://127.0.0.1:10000 --target-path /mnt/nvmf nvmftestvol
nvmftestvol

Test NVMf driver in a Kubernetes cluster

TODO: support dynamic provisioning.

1. Build the Docker image

$ make container

2.1 Load Driver

$ kubectl create -f deploy/kubernetes/
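
To check that the driver pods came up (the exact pod names depend on the manifests, so the grep below is just a loose filter):

$ kubectl get pods -A | grep -i nvmf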

2.2 Unload Driver

$ kubectl delete -f deploy/kubernetes/

3.1 Create Storage Class (Dynamic Provisioning)

Not supported yet.

  • Create
$ kubectl create -f examples/kubernetes/example/storageclass.yaml
  • Check
$ kubectl get sc

3.2 Create PV and PVC (Static Provisioning)

Supported. A hedged sketch of a matching PV manifest is shown after this list.

  • Create PV
$ kubectl create -f examples/kubernetes/example/pv.yaml
  • Check
$ kubectl get pv
  • Create PVC
$ kubectl create -f examples/kubernetes/example/pvc.yaml
  • Check
$ kubectl get pvc
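
For orientation, here is a hedged sketch of what examples/kubernetes/example/pv.yaml is expected to look like; it is not copied from the repository, and all values (capacity, address, port, UUID, NQN) are placeholders reusing the csc example from 3.2 of the csc test above:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-nvmf-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: csi.nvmf.com            # driver name reported by plugin-info in the csc test
    volumeHandle: nvmftestvol       # placeholder volume name
    volumeAttributes:
      targetTrAddr: "192.168.122.18"
      targetTrPort: "49153"
      targetTrType: "tcp"
      deviceUUID: "58668891-c3e4-45d0-b90e-824525c16080"
      nqn: "nqn.2022-08.org.test-nvmf.example"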

4. Create Nginx Container

  • Create Deployment
$ kubectl create -f examples/kubernetes/example/nginx.yaml
  • Check
$ kubectl exec -it nginx-451df123421 -- /bin/bash
$ lsblk

Community, discussion, contribution, and support

You can reach the maintainers of this project at:

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

csi-driver-nvmf's People

Contributors

haruband, k8s-ci-robot, lajiao117, meinhardzhou, testwill


csi-driver-nvmf's Issues

Deployment according to the docs is not working

Hi,

first of all thanks for this project, I really like the idea of abstracting NVMe-oF mounting with a CSI plugin!

I'm not sure if I did something wrong or if it is just an error in the examples or the documentation, but after deploying everything this plugin does not work for me - the mounting never happens.

A few possible errors I found in the docs were:

I basically used the files from deploy/kubernetes to deploy everything on vanilla k8s 1.23; the CSI driver also logs that the registration was successful, and my PV contains the csi.driver field as the registration output showed.
The PVC is of course also bound to the PV and I'm pretty confident in my k8s basics 😃 but if you need more info I will happily provide everything!

Is it just me or did I miss something? Does this csi driver work for anyone else as documented?

Thanks in advance!
Vincent

Switch from k8s.gcr.io to registry.k8s.io

From kubernetes-announce:
"On the 3rd of April 2023, the old registry k8s.gcr.io will be frozen and no further images for Kubernetes and related subprojects will be pushed to the old registry.

This registry registry.k8s.io replaced the old one and has been generally available for several months. We have published a blog post about its benefits to the community and the Kubernetes project."

Failing to test the NVMf driver in a kubernetes cluster

Hello :)

I tried to test the NVMf driver in a kubernetes cluster by following https://github.com/kubernetes-csi/csi-driver-nvmf#test-nvmf-driver-in-kubernetes-cluster and failed to successfully bring up the nginx pod.

The test node is running Debian 11 with kernel 5.19.0-rc4+ and was setup with kubeadm v1.25.2.

First I wanted to set up an NVMf target as described in https://github.com/kubernetes-csi/csi-driver-nvmf/blob/master/doc/setup_kernel_nvmf_target.md (I used a file as the storage backend device and adjusted the addr_traddr accordingly). However, I got the following error:

sudo ln -s /sys/kernel/config/nvmet/subsystems/nqn.2022-08.org.test-nvmf.example/namespaces/ /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2022-08.org.test-nvmf.example
ln: failed to create symbolic link '/sys/kernel/config/nvmet/ports/1/subsystems/nqn.2022-08.org.test-nvmf.example': Invalid argument
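
(Aside: in the stock nvmet configfs layout the port symlink normally points at the subsystem directory itself rather than its namespaces/ subdirectory, which would explain this EINVAL; a hedged example of the usual form follows.)

$ sudo ln -s /sys/kernel/config/nvmet/subsystems/nqn.2022-08.org.test-nvmf.example /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2022-08.org.test-nvmf.example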

So instead I used the nvmetcli tool (git://git.infradead.org/users/hch/nvmetcli.git) and modified the tcp.json to point to /tmp/nvmet_test.img (removed nguid and uuid), use port 49153, use the node's IP, and set the subsystems and nqn fields to 'nqn.2022-08.org.test-nvmf.example'.
To set up the NVMf target I ran:

sudo nvme disconnect-all
sudo python3 nvmetcli clear
sudo python3 nvmetcli restore conv-test-tcp.json

Running sudo nvme discover -t tcp -a 192.168.121.114 -s 49153 gave me:

Discovery Log Number of Records 2, Generation counter 3
=====Discovery Log Entry 0======
trtype:  tcp
adrfam:  ipv4
subtype: unrecognized
treq:    not specified, sq flow control disable supported
portid:  1
trsvcid: 49153
subnqn:  nqn.2014-08.org.nvmexpress.discovery
traddr:  192.168.121.114
sectype: none
=====Discovery Log Entry 1======
trtype:  tcp
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified, sq flow control disable supported
portid:  1
trsvcid: 49153
subnqn:  nqn.2022-08.org.test-nvmf.example
traddr:  192.168.121.114
sectype: none

Because I am using podman instead of docker, I replaced the docker commands in release-tools/build.make like this: sed -i 's/docker /podman /g' release-tools/build.make.

I then adjusted pv.yaml with my targetTrAddr and the deviceUUID from cat /sys/kernel/config/nvmet/subsystems/nqn.2022-08.org.test-nvmf.example/namespaces/1/device_uuid.

From there on, I followed the instructions at https://github.com/kubernetes-csi/csi-driver-nvmf#test-nvmf-driver-in-kubernetes-cluster

The nginx pod is not starting and running kubectl describe pods gives me:

Name:             nginx-block-test1-55f6f8ff94-jfwd7
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=nginx
                  pod-template-hash=55f6f8ff94
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/nginx-block-test1-55f6f8ff94
Containers:
  nginx:
    Image:        nginx
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /dev/nvmf from nvmf-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c6mdn (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  nvmf-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  csi-nvmf-pvc
    ReadOnly:   false
  kube-api-access-c6mdn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  16s   default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.

And kubectl describe pvc outputs:

Name:          csi-nvmf-pvc
Namespace:     default
StorageClass:  csi-nvmf-sc
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: csi.nvmf.com
               volume.kubernetes.io/storage-provisioner: csi.nvmf.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       nginx-block-test1-55f6f8ff94-jfwd7
Events:
  Type    Reason                Age                   From                         Message
  ----    ------                ----                  ----                         -------
  Normal  ExternalProvisioning  3m36s (x82 over 23m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "csi.nvmf.com" or manually created by system administrator

What am I missing or doing wrong? :)

Failure to mount volumes with a same nqn more than twice in a single node

I tried to test the CSI driver for NVMe over TCP in a Kubernetes cluster, but I encountered some problems when mounting volumes with the same NQN more than twice on a single node.

The first problem is that it is not possible to reconnect with the same NQN more than twice on a single node. The second problem is that read-only and read-write mounts of the same block device cannot coexist.

Thank you very much.
