
local-path-provisioner's Introduction

Local Path Provisioner


Overview

Local Path Provisioner provides a way for Kubernetes users to utilize the local storage on each node. Based on the user configuration, the Local Path Provisioner will automatically create either hostPath or local based persistent volumes on the node. It builds on the Kubernetes Local Persistent Volume feature, but offers a simpler solution than the built-in local volume feature in Kubernetes.

Comparison to the built-in Local Persistent Volume feature in Kubernetes

Pros

Dynamic provisioning of volumes using hostPath or local.

Cons

  1. No support for volume capacity limits currently.
    1. The capacity limit will be ignored for now.

Requirement

Kubernetes v1.12+.

Deployment

Installation

In this setup, the directory /opt/local-path-provisioner will be used across all the nodes as the path for provisioning (i.e., to store the persistent volume data). The provisioner will be installed in the local-path-storage namespace by default.

  • Stable
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.26/deploy/local-path-storage.yaml
  • Development
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Or, use kustomize to deploy.

  • Stable
kustomize build "github.com/rancher/local-path-provisioner/deploy?ref=v0.0.26" | kubectl apply -f -
  • Development
kustomize build "github.com/rancher/local-path-provisioner/deploy?ref=master" | kubectl apply -f -

After installation, you should see something like the following:

$ kubectl -n local-path-storage get pod
NAME                                     READY     STATUS    RESTARTS   AGE
local-path-provisioner-d744ccf98-xfcbk   1/1       Running   0          7m

Check and follow the provisioner log using:

kubectl -n local-path-storage logs -f -l app=local-path-provisioner

Usage

Create a hostPath-backed Persistent Volume and a pod that uses it:

kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc/pvc.yaml
kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml

Or, use kustomize to deploy them.

kustomize build "github.com/rancher/local-path-provisioner/examples/pod?ref=master" | kubectl apply -f -

You should see that the PV has been created:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                    STORAGECLASS   REASON    AGE
pvc-bc3117d9-c6d3-11e8-b36d-7a42907dda78   2Gi        RWO            Delete           Bound     default/local-path-pvc   local-path               4s

The PVC has been bound:

$ kubectl get pvc
NAME             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
local-path-pvc   Bound     pvc-bc3117d9-c6d3-11e8-b36d-7a42907dda78   2Gi        RWO            local-path     16s

And the Pod started running:

$ kubectl get pod
NAME          READY     STATUS    RESTARTS   AGE
volume-test   1/1       Running   0          3s

Write something into the pod

kubectl exec volume-test -- sh -c "echo local-path-test > /data/test"

Now delete the pod using

kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml

After confirming that the pod is gone, recreate the pod using

kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml

Check the volume content:

$ kubectl exec volume-test -- sh -c "cat /data/test"
local-path-test

Delete the pod and pvc

kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml
kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc/pvc.yaml

Or, use kustomize to delete them.

kustomize build "github.com/rancher/local-path-provisioner/examples/pod?ref=master" | kubectl delete -f -

The volume content stored on the node will be automatically cleaned up. You can check the log of local-path-provisioner-xxx for details.

Now you've verified that the provisioner works as expected.

Configuration

Customize the ConfigMap

The configuration of the provisioner consists of a JSON file config.json, a Pod template helperPod.yaml, and two shell scripts, setup and teardown, all stored in a ConfigMap, e.g.:

kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
        {
                "nodePathMap":[
                {
                        "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                        "paths":["/opt/local-path-provisioner"]
                },
                {
                        "node":"yasker-lp-dev1",
                        "paths":["/opt/local-path-provisioner", "/data1"]
                },
                {
                        "node":"yasker-lp-dev3",
                        "paths":[]
                }
                ]
        }
  setup: |-
        #!/bin/sh
        set -eu
        mkdir -m 0777 -p "$VOL_DIR"
  teardown: |-
        #!/bin/sh
        set -eu
        rm -rf "$VOL_DIR"
  helperPod.yaml: |-
        apiVersion: v1
        kind: Pod
        metadata:
          name: helper-pod
        spec:
          priorityClassName: system-node-critical
          tolerations:
            - key: node.kubernetes.io/disk-pressure
              operator: Exists
              effect: NoSchedule
          containers:
          - name: helper-pod
            image: busybox

The helperPod is allowed to run on nodes experiencing disk pressure, despite the potential resource constraints. When it runs on such a node, it can carry out cleanup tasks that free up space in PVCs and resolve the disk-pressure condition.

config.json

Definition

nodePathMap is where the user can customize where to store the data on each node.

  1. If one node is not listed in the nodePathMap, and Kubernetes wants to create a volume on it, the paths specified in DEFAULT_PATH_FOR_NON_LISTED_NODES will be used for provisioning.
  2. If one node is listed in the nodePathMap, the paths specified in paths will be used for provisioning.
    1. If one node is listed but paths is set to [], the provisioner will refuse to provision on this node.
    2. If more than one path is specified, a path is chosen randomly when provisioning.

sharedFileSystemPath allows the provisioner to use a filesystem that is mounted on all nodes at the same time. In this case, all access modes are supported for storage claims: ReadWriteOnce, ReadOnlyMany and ReadWriteMany.

In addition, volumeBindingMode: Immediate can be used in the StorageClass definition.

Please note that nodePathMap and sharedFileSystemPath are mutually exclusive. If sharedFileSystemPath is used, then nodePathMap must be set to [].
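
For illustration, a minimal config.json sketch using sharedFileSystemPath; the mount point /mnt/nfs-share is a placeholder, not something defined by this project:

{
        "nodePathMap": [],
        "sharedFileSystemPath": "/mnt/nfs-share"
}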

The setupCommand and teardownCommand allow you to specify the path to binary files in helperPod that will be called when creating or deleting a PVC, respectively. This can be useful if you need to use distroless images for security reasons. See the examples/distroless directory for an example. A binary file can take the following parameters:

Parameter   Description
-p          Volume directory that should be created or removed.
-m          The PersistentVolume mode (Block or Filesystem).
-s          Requested volume size in bytes.
-a          Action type. Can be create or delete.

The setupCommand and teardownCommand have higher priority than the setup and teardown scripts from the ConfigMap.
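
As a hedged sketch based on the description above, the commands sit in config.json alongside nodePathMap; the binary paths below are placeholders and must actually exist in your helperPod image:

{
        "nodePathMap": [
                {
                        "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
                        "paths": ["/opt/local-path-provisioner"]
                }
        ],
        "setupCommand": "/usr/local/bin/setup",
        "teardownCommand": "/usr/local/bin/teardown"
}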

Rules

The configuration must obey the following rules:

  1. config.json must be a valid JSON file.
  2. A path must start with /, i.e. it must be an absolute path.
  3. The root directory (/) is prohibited.
  4. No duplicate paths are allowed for one node.
  5. No duplicate nodes are allowed.

Scripts setup and teardown and the helperPod.yaml template

  • The setup script is run before the volume is created, to prepare the volume directory on the node.
  • The teardown script is run after the volume is deleted, to clean up the volume directory on the node.
  • The helperPod.yaml template is used to create a helper Pod that runs the setup or teardown script.

The scripts receive their input as environment variables:

Environment variable   Description
VOL_DIR                Volume directory that should be created or removed.
VOL_MODE               The PersistentVolume mode (Block or Filesystem).
VOL_SIZE_BYTES         Requested volume size in bytes.
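
For example, a hedged sketch of a setup script that uses these variables; the early exit for Block mode is an illustrative assumption, not behaviour prescribed by the project:

#!/bin/sh
set -eu
# Raw block volumes have no directory to prepare in this sketch.
if [ "${VOL_MODE}" = "Block" ]; then
        exit 0
fi
# Create the volume directory and record the requested size for debugging.
mkdir -m 0777 -p "${VOL_DIR}"
echo "prepared ${VOL_DIR} (${VOL_SIZE_BYTES} bytes requested)"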

Reloading

The provisioner supports automatic configuration reloading. Users can change the configuration using kubectl apply or kubectl edit with the config map local-path-config. There is a delay between when the user updates the config map and when the provisioner picks it up.

When the provisioner detects a configuration change, it will try to load the new configuration. Users can observe this in the log:

time="2018-10-03T05:56:13Z" level=debug msg="Applied config: {"nodePathMap":[{"node":"DEFAULT_PATH_FOR_NON_LISTED_NODES","paths":["/opt/local-path-provisioner"]},{"node":"yasker-lp-dev1","paths":["/opt","/data1"]},{"node":"yasker-lp-dev3"}]}"

If the reload fails, the provisioner will log the error and continue using the last valid configuration for provisioning in the meantime.

time="2018-10-03T05:19:25Z" level=error msg="failed to load the new config file: fail to load config file /etc/config/config.json: invalid character '#' looking for beginning of object key string"

time="2018-10-03T05:20:10Z" level=error msg="failed to load the new config file: config canonicalization failed: path must start with / for path opt on node yasker-lp-dev1"

time="2018-10-03T05:23:35Z" level=error msg="failed to load the new config file: config canonicalization failed: duplicate path /data1 on node yasker-lp-dev1

time="2018-10-03T06:39:28Z" level=error msg="failed to load the new config file: config canonicalization failed: duplicate node yasker-lp-dev3"

Volume Types

To specify the type of volume you want the provisioner to create, add one of the following annotations:

  • PVC:
annotations:
  volumeType: <local or hostPath>
  • StorageClass:
annotations:
  defaultVolumeType: <local or hostPath>

A few things to note: the annotation on the StorageClass applies to all volumes using it and is superseded by the annotation on the PVC if one is provided. If neither annotation is provided, the provisioner defaults to hostPath.
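
For example, a minimal PVC sketch using the PVC-level annotation described above; the claim name and size are placeholders:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-volume-pvc
  annotations:
    volumeType: local
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi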

Storage classes

If more than one path is specified in the nodePathMap, the path is chosen randomly. To make the provisioner choose a specific path, use a StorageClass defined with a parameter called nodePath. Note that this path should be defined in the nodePathMap.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-local-path
provisioner: rancher.io/local-path
parameters:
  nodePath: /data/ssd
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

Here the provisioner will use the path /data/ssd when storage class ssd-local-path is used.
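
For this to work, /data/ssd also needs to appear in the nodePathMap, e.g. a sketch like the following (the node entry shown is only the default entry):

{
        "nodePathMap": [
                {
                        "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
                        "paths": ["/opt/local-path-provisioner", "/data/ssd"]
                }
        ]
}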

Uninstall

Before uninstallation, make sure the PVs created by the provisioner have already been deleted. Use kubectl get pv and make sure there are no PVs with the StorageClass local-path.

To uninstall, execute:

  • Stable
kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.26/deploy/local-path-storage.yaml
  • Development
kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Debug

The repository provides an out-of-cluster debug environment for developers.

debug

git clone https://github.com/rancher/local-path-provisioner.git
cd local-path-provisioner
go build
kubectl apply -f debug/config.yaml
./local-path-provisioner --debug start --service-account-name=default

Example

See the Usage section above.

Clear

kubectl delete -f debug/config.yaml

License

Copyright (c) 2014-2020 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

local-path-provisioner's People

Contributors

aisuko, ajdexter, alekc, amrecio, anitgandhi, anothertobi, anthonyenr1quez, ardumont, ariep, brandond, c4lliope, dchirikov, derekbit, ibuildthecloud, icefed, innobead, js185692, kvaps, liupeng0518, meln5674, mgoltzsche, nicktming, nltimv, sbocinec, sergelogvinov, skyoo2003, tamalsaha, tgfree7, uempfel, yasker


local-path-provisioner's Issues

Provisioner doesn't work if pods have tolerations/nodeSelector

Hey, thanks for sharing provisioner. It's really cool.

Steps to reproduce:

  1. Setup some kubernetes nodes with taints and mark them with some label
  2. Give pods tolerations and nodeSelector, so the nodes will be scheduled only to nodes with taints:

For example:

nodeSelector:
  node-role.kubernetes.io/sre: ""

tolerations:
  - key: "node-role.kubernetes.io/sre"
    operator: "Equal"
    effect: "NoSchedule"
  - key: "node-role.kubernetes.io/sre"
    operator: "Equal"
    effect: "NoExecute"

The provisioner doesn't work in this case because it uses helper pods to create the directories. The helper pods can't be scheduled on the target node because of the node taints. To make this example work, the helper pod should copy the tolerations from the pod claiming the storage; see the sketch below.
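
As one possible workaround (a hedged sketch only, reusing the taint key from the example above), the helperPod.yaml template in the provisioner's ConfigMap can be given the same tolerations:

apiVersion: v1
kind: Pod
metadata:
  name: helper-pod
spec:
  tolerations:
    - key: "node-role.kubernetes.io/sre"
      operator: "Exists"
      effect: "NoSchedule"
    - key: "node-role.kubernetes.io/sre"
      operator: "Exists"
      effect: "NoExecute"
  containers:
  - name: helper-pod
    image: busybox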

Expand pvc

didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.

docker URL when in corp set up

How do I configure the Docker registry URL to point to a private Docker registry? The provisioner looks for a busybox image from Docker Hub to create the folders. How do I set up the provisioner in an air-gapped environment?
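
One approach (a sketch only; registry.example.com is a placeholder) is to edit the helperPod.yaml entry in the local-path-config ConfigMap so the helper image is pulled from the private registry:

helperPod.yaml: |-
      apiVersion: v1
      kind: Pod
      metadata:
        name: helper-pod
      spec:
        containers:
        - name: helper-pod
          image: registry.example.com/library/busybox:latest
          imagePullPolicy: IfNotPresent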

Are these error messages?

I followed the instructions in the README.md. Everything seems to be working perfectly until it's time to write data to the volume (via the pod). A bunch of log lines are spit out and nothing appears to be written:

kubectl exec volume-test -- sh -c "echo local-path-test > /data/test"

I0225 10:36:49.840600   22359 log.go:172] (0xc0009d00b0) (0xc0007214a0) Create stream
I0225 10:36:49.840850   22359 log.go:172] (0xc0009d00b0) (0xc0007214a0) Stream added, broadcasting: 1
I0225 10:36:49.845332   22359 log.go:172] (0xc0009d00b0) Reply frame received for 1
I0225 10:36:49.845357   22359 log.go:172] (0xc0009d00b0) (0xc0006e7ae0) Create stream
I0225 10:36:49.845365   22359 log.go:172] (0xc0009d00b0) (0xc0006e7ae0) Stream added, broadcasting: 3
I0225 10:36:49.848761   22359 log.go:172] (0xc0009d00b0) Reply frame received for 3
I0225 10:36:49.848779   22359 log.go:172] (0xc0009d00b0) (0xc000966000) Create stream
I0225 10:36:49.848787   22359 log.go:172] (0xc0009d00b0) (0xc000966000) Stream added, broadcasting: 5
I0225 10:36:49.852192   22359 log.go:172] (0xc0009d00b0) Reply frame received for 5
I0225 10:36:49.977345   22359 log.go:172] (0xc0009d00b0) (0xc0006e7ae0) Stream removed, broadcasting: 3
I0225 10:36:49.977387   22359 log.go:172] (0xc0009d00b0) Data frame received for 1
I0225 10:36:49.977405   22359 log.go:172] (0xc0007214a0) (1) Data frame handling
I0225 10:36:49.977427   22359 log.go:172] (0xc0007214a0) (1) Data frame sent
I0225 10:36:49.977447   22359 log.go:172] (0xc0009d00b0) (0xc0007214a0) Stream removed, broadcasting: 1
I0225 10:36:49.977695   22359 log.go:172] (0xc0009d00b0) (0xc000966000) Stream removed, broadcasting: 5
I0225 10:36:49.977721   22359 log.go:172] (0xc0009d00b0) (0xc0007214a0) Stream removed, broadcasting: 1
I0225 10:36:49.977735   22359 log.go:172] (0xc0009d00b0) (0xc0006e7ae0) Stream removed, broadcasting: 3
I0225 10:36:49.977766   22359 log.go:172] (0xc0009d00b0) (0xc000966000) Stream removed, broadcasting: 5
I0225 10:36:49.977791   22359 log.go:172] (0xc0009d00b0) Go away received

They don't look like critical errors, and data is actually being written; if I continue with the instructions, everything happens as expected.

support PVC selector field

Trying to create a PVC to use a PV that's been released. I get this error when trying to use selector field of a PVC to match the label foo=bar on my PV

Warning  ProvisioningFailed    13s
rancher.io/local-path_local-path-provisioner-ccbdd96dc-s87rg_5ced8b65-eec7-11e9-bd79-ca353f592fca  failed to provision volume with StorageClass "local-path": claim.Spec.Selector is not supported

Security context not respected

I'm trying to use local-path-provisioner with kind. While it seems to generally work with multi-node clusters, security contexts are not respected. Volumes are always mounted with root as group. Here's a simple example that demonstrates this:

apiVersion: v1
kind: Pod
metadata:
  name: local-path-test
  labels:
    app.kubernetes.io/name: local-path-test
spec:
  containers:
    - name: test
      image: busybox
      command:
        - /config/test.sh
      volumeMounts:
        - name: test
          mountPath: /test
        - name: config
          mountPath: /config
  securityContext:
    fsGroup: 1000
    runAsNonRoot: true
    runAsUser: 1000
  terminationGracePeriodSeconds: 0
  volumes:
    - name: test
      persistentVolumeClaim:
        claimName: local-path-test
    - name: config
      configMap:
        name: local-path-test
        defaultMode: 0555

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-path-test
  labels:
    app.kubernetes.io/name: local-path-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "1Gi"
  storageClassName: local-path

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-test
  labels:
    app.kubernetes.io/name: local-path-test
data:
  test.sh: |
    #!/bin/sh

    ls -al /test

    echo 'Hello from local-path-test'
    cp /config/text.txt /test/test.txt
    touch /test/foo

  text.txt: |
    some test content

Here's the log from the container:

total 4
drwxr-xr-x    2 root     root            40 Feb 22 09:50 .
drwxr-xr-x    1 root     root          4096 Feb 22 09:50 ..
Hello from local-path-test
cp: can't create '/test/test.txt': Permission denied
touch: /test/foo: Permission denied

As can be seen, the mounted volume has root as group instead of 1000 as specified by the security context. I also installed local-path-provisioner on Docker4Mac. The result is the same, so it is not a kind issue. Using the default storage class on Docker4Mac, it works as expected.

Permission denied when trying to create a PVC

I am following the steps to install/configure local-path-provisioner.

I have a local-path-provisioner running.

I have created pvc and status is pending (waiting for the pod).

I am trying to create an nginx pod like the example and I am facing this problem:

When I check the create-pvc-33e6692e.... pod I have this error:
mkdir: can't create directory '/data/pvc-33e6692e-a32d-11e9-85ac-42010a8e0036': Permission denied

My local path is already configured with root:root ownership and 777 permissions.

Can anyone help me?

PV always stores data on one node

I have two nodes: one is k3s-master and the other is k3s-node. When I installed the local-path-provisioner deployment it was scheduled to k3s-node, and I created a PVC with a pod to use it, but the PV always stores data on k3s-master. How can I modify it to store data on k3s-node?

High CPU Usage

See #14

I am using k3s on an i5 and the local path provisioner (version v0.0.11) sits at 18% CPU.

Steps to reproduce:

  1. Install k3s on x86 Hardware
  2. Use the local path provisioner

Interestingly enough I am also running k3s on a cloud server, not using the local-path-provisioner (but having it running) and it doesn't consume nearly as many resources there.

"didn't find available persistent volumes to bind"

0/5 nodes are available: 1 node(s) were out of disk space, 2 node(s) were not ready, 3 node(s) didn't find available persistent volumes to bind.

Hey, I ran through the quick start here and ran into the above issue on our baremetal rke cluster. The two not ready are legit, but there's plenty of space on the nodes.

It doesn't look like the provisioner was called at all to create the PV?
time="2019-03-01T15:23:54Z" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/opt/local-path-provisioner\"]}]}"
time="2019-03-01T15:23:54Z" level=debug msg="Provisioner started"

[FEATURE REQUEST] More obvious folder names

This is a feature request but I would like to suggest a change to how the folders are named.

When browsing the file system the names are non-obvious; other provisioners, such as the NFS provisioner, include the namespace and claim name in the folder name along with the PV name.

So where you have https://github.com/rancher/local-path-provisioner/blob/master/provisioner.go#L183-L184

NFS provisioner has https://github.com/kubernetes-incubator/external-storage/blob/master/nfs-client/cmd/nfs-client-provisioner/provisioner.go#L62-L65

It looks like a simple change and I can do a PR if you agree

Why does the provisioner only support ReadWriteOnce?

Why does the provisioner only support ReadWriteOnce PVCs and not ReadOnlyMany/ReadWriteMany?

Since it's just a node-local directory, there's no problem with having multiple writers/readers as long as the application supports this.

multiple local-path-provisioner for different host volumes

Question:
A single local-path-provisioner is working fine for me.
I have two different host volumes on my nodes (fast local SSD on /mnt/sdd, slow glusterfs on /mnt/gvol).
Might it be possible to set up two separate storage classes, each with its own local-path-provisioner, or is this out of scope?

Provisioner does not follow the pod's claimed node affinity

I have a Kubernetes cluster of two nodes and I've deployed the local path provisioner on it.
I deploy a mariadb with node affinity on node A, with a PV claim using the local path provisioner, but the provisioner wants to provision the PVC on node B instead of node A where the pod is launched (the pod has node A affinity), and it fails with a timeout waiting for the PVC.

How can I force the local path provisioner to follow the pod affinity?

High CPU usage

local-path-provisioner is one of the most CPU consuming processes in my cluster.

It seemingly ate 20 minutes of CPU time in just 22 hours:

local-path-storage   local-path-provisioner-5fbd477b57-mpg4s    1/1     Running     5          22h

During that time, it only created 3 PV's:

❯ k logs -n local-path-storage local-path-provisioner-5fbd477b57-mpg4s
time="2019-04-22T03:00:46Z" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/opt/local-path-provisioner\"]}]}"
time="2019-04-22T03:00:46Z" level=debug msg="Provisioner started"
time="2019-04-22T05:13:19Z" level=debug msg="config doesn't contain node kube0, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead"
time="2019-04-22T05:13:19Z" level=info msg="Creating volume pvc-5782375f-64bd-11e9-a240-525400a0c459 at kube0:/opt/local-path-provisioner/pvc-5782375f-64bd-11e9-a240-525400a0c459"
time="2019-04-22T05:13:25Z" level=info msg="Volume pvc-5782375f-64bd-11e9-a240-525400a0c459 has been created on kube0:/opt/local-path-provisioner/pvc-5782375f-64bd-11e9-a240-525400a0c459"
time="2019-04-22T10:37:15Z" level=info msg="Deleting volume pvc-5782375f-64bd-11e9-a240-525400a0c459 at kube0:/opt/local-path-provisioner/pvc-5782375f-64bd-11e9-a240-525400a0c459"
time="2019-04-22T10:37:19Z" level=info msg="Volume pvc-5782375f-64bd-11e9-a240-525400a0c459 has been deleted on kube0:/opt/local-path-provisioner/pvc-5782375f-64bd-11e9-a240-525400a0c459"
time="2019-04-22T10:38:20Z" level=debug msg="config doesn't contain node kube0, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead"
time="2019-04-22T10:38:20Z" level=info msg="Creating volume pvc-bee28903-64ea-11e9-a240-525400a0c459 at kube0:/opt/local-path-provisioner/pvc-bee28903-64ea-11e9-a240-525400a0c459"
time="2019-04-22T10:38:24Z" level=info msg="Volume pvc-bee28903-64ea-11e9-a240-525400a0c459 has been created on kube0:/opt/local-path-provisioner/pvc-bee28903-64ea-11e9-a240-525400a0c459"
time="2019-04-22T11:25:29Z" level=info msg="Deleting volume pvc-bee28903-64ea-11e9-a240-525400a0c459 at kube0:/opt/local-path-provisioner/pvc-bee28903-64ea-11e9-a240-525400a0c459"
time="2019-04-22T11:25:33Z" level=info msg="Volume pvc-bee28903-64ea-11e9-a240-525400a0c459 has been deleted on kube0:/opt/local-path-provisioner/pvc-bee28903-64ea-11e9-a240-525400a0c459"
time="2019-04-22T11:26:18Z" level=debug msg="config doesn't contain node kube0, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead"
time="2019-04-22T11:26:18Z" level=info msg="Creating volume pvc-72a9e446-64f1-11e9-a240-525400a0c459 at kube0:/opt/local-path-provisioner/pvc-72a9e446-64f1-11e9-a240-525400a0c459"
time="2019-04-22T11:26:23Z" level=info msg="Volume pvc-72a9e446-64f1-11e9-a240-525400a0c459 has been created on kube0:/opt/local-path-provisioner/pvc-72a9e446-64f1-11e9-a240-525400a0c459"

Any ideas what's the matter? Is it doing some kind of suboptimal high-frequency polling? It bugs me that my brand new, almost empty single-node Kubernetes setup already has a load average of 0.6 without any load coming towards it.

It was deployed with:

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

without any modifications to the manifest.

Btrfs subvolume

It would be great if the provisioner could create a btrfs subvolume with quota for each PVC.

This would also give us snapshot and backup functionality using btrfs tools.
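
With the setup and teardown scripts from the ConfigMap, something along these lines could work (a hedged sketch; it assumes the helper image ships btrfs-progs and that the parent path is on a btrfs filesystem):

setup: |-
      #!/bin/sh
      set -eu
      # Create a btrfs subvolume instead of a plain directory.
      btrfs subvolume create "$VOL_DIR"
      chmod 0777 "$VOL_DIR"
teardown: |-
      #!/bin/sh
      set -eu
      # Remove the subvolume when the PV is deleted.
      btrfs subvolume delete "$VOL_DIR"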

Provisioner falls over after etcd timeout

I start a helm chart on a kind installation using rancher. The chart provisions PVs for about 6 containers. This is running on pretty weak hardware, a circleci machine runner.

The chart doesn't come up. It's waiting for PV claims to be fulfilled (they're stuck in pending). The pods in question, when starting, say that the pod to create the volume already exists [1]. Which it does. And the path for the volume has been created. But pod creation hangs forever.

Turns out a request to etcd timed out [2] and this took down the provisioner. When it came back, it didn't recover the operations that were in progress.

[1] Pod describe:

 Normal     Provisioning          52s (x5 over 4m39s)    rancher.io/local-path_local-path-provisioner-69fc9568b9-vmmhj_2f07fa18-8215-11e9-87ef-420caaa87133  External provisioner is provisioning volume for claim "test-6505f1a5/datadir-test-6505f1a5-zookeeper-0"
  Warning    ProvisioningFailed    52s (x5 over 4m38s)    rancher.io/local-path_local-path-provisioner-69fc9568b9-vmmhj_2f07fa18-8215-11e9-87ef-420caaa87133  failed to provision volume with StorageClass "local-path": failed to create volume pvc-ee4d2be8-8214-11e9-9d5f-0242ac110002: pods "create-pvc-ee4d2be8-8214-11e9-9d5f-0242ac110002" already exists

[2]

circleci@default-306a0f9a-e068-4f47-bf37-c13d6eb031f2:~$ kubectl -n local-path-storage logs -p local-path-provisioner-69fc9568b9-vmmhj
time="2019-05-29T13:21:13Z" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/opt/local-path-provisioner\"]}]}" 
time="2019-05-29T13:21:13Z" level=debug msg="Provisioner started" 
time="2019-05-29T13:23:21Z" level=debug msg="config doesn't contain node kind-worker, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead" 
time="2019-05-29T13:23:21Z" level=info msg="Creating volume pvc-ee13ad7e-8214-11e9-9d5f-0242ac110002 at kind-worker:/opt/local-path-provisioner/pvc-ee13ad7e-8214-11e9-9d5f-0242ac110002" 
time="2019-05-29T13:23:21Z" level=debug msg="config doesn't contain node kind-worker2, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead" 
time="2019-05-29T13:23:21Z" level=info msg="Creating volume pvc-ee186e49-8214-11e9-9d5f-0242ac110002 at kind-worker2:/opt/local-path-provisioner/pvc-ee186e49-8214-11e9-9d5f-0242ac110002" 
time="2019-05-29T13:23:22Z" level=debug msg="config doesn't contain node kind-worker, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead" 
time="2019-05-29T13:23:22Z" level=info msg="Creating volume pvc-ee331af8-8214-11e9-9d5f-0242ac110002 at kind-worker:/opt/local-path-provisioner/pvc-ee331af8-8214-11e9-9d5f-0242ac110002" 
time="2019-05-29T13:23:23Z" level=debug msg="config doesn't contain node kind-worker2, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead" 
time="2019-05-29T13:23:23Z" level=info msg="Creating volume pvc-ee4d2be8-8214-11e9-9d5f-0242ac110002 at kind-worker2:/opt/local-path-provisioner/pvc-ee4d2be8-8214-11e9-9d5f-0242ac110002" 
E0529 13:23:46.181575       1 leaderelection.go:286] Failed to update lock: etcdserver: request timed out
E0529 13:23:46.809377       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'local-path-provisioner-69fc9568b9-vmmhj_a1902f4d-8214-11e9-a599-420caaa87133 stopped leading'
F0529 13:23:50.398931       1 controller.go:647] leaderelection lost
goroutine 1 [running]:
github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog.stacks(0xc0003ea500, 0xc000698000, 0x45, 0xb4)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog/glog.go:766 +0xb1
github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog.(*loggingT).output(0x2009ca0, 0xc000000003, 0xc000148930, 0x1f910d1, 0xd, 0x287, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog/glog.go:717 +0x303
github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog.(*loggingT).printf(0x2009ca0, 0x3, 0x12e0f79, 0x13, 0x0, 0x0, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog/glog.go:655 +0x14e
github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog.Fatalf(...)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog/glog.go:1145
github.com/rancher/local-path-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller.(*ProvisionController).Run.func2()
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:647 +0x5c
github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run.func1(0xc000362540)
	/go/src/github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:148 +0x40
github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc000362540, 0x14c7de0, 0xc000079c00)
	/go/src/github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:157 +0x10f
github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection.RunOrDie(0x14c7e20, 0xc000044040, 0x14d4160, 0xc000317200, 0x37e11d600, 0x2540be400, 0x77359400, 0xc000370900, 0x1358910, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:166 +0x87
github.com/rancher/local-path-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller.(*ProvisionController).Run(0xc0002fc4e0, 0xc0000bc7e0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:639 +0x36f
main.startDaemon(0xc000314640, 0x5, 0x4)
	/go/src/github.com/rancher/local-path-provisioner/main.go:134 +0x793
main.StartCmd.func1(0xc000314640)
	/go/src/github.com/rancher/local-path-provisioner/main.go:80 +0x2f
github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli.HandleAction(0x10fb200, 0x1359218, 0xc000314640, 0xc000300d00, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli/app.go:487 +0x7c
github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli.Command.Run(0x12d6eca, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli/command.go:193 +0x925
github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli.(*App).Run(0xc0002fc340, 0xc00003a050, 0x5, 0x5, 0x0, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli/app.go:250 +0x785
main.main()
	/go/src/github.com/rancher/local-path-provisioner/main.go:166 +0x2ba

Log records start with ERROR: logging before flag.Parse:

K: 1.16.0
v0.0.11

now log looks like:

time="2019-10-02T14:23:12Z" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/opt/local-path-provisioner\"]}]}" 
time="2019-10-02T14:23:12Z" level=debug msg="Provisioner started" 
ERROR: logging before flag.Parse: I1002 14:23:12.817611       1 leaderelection.go:187] attempting to acquire leader lease  local-path-storage/rancher.io-local-path...
ERROR: logging before flag.Parse: I1002 14:23:12.894052       1 leaderelection.go:196] successfully acquired lease local-path-storage/rancher.io-local-path
ERROR: logging before flag.Parse: I1002 14:23:12.894163       1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"local-path-storage", Name:"rancher.io-local-path", UID:"b8b35657-ea50-4a08-a1a2-e63087b44ffe", APIVersion:"v1", ResourceVersion:"663678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' local-path-provisioner-56db8cbdb5-6lfgx_2a9d3a0c-e520-11e9-95fc-865370074a10 became leader
ERROR: logging before flag.Parse: I1002 14:23:12.894239       1 controller.go:572] Starting provisioner controller rancher.io/local-path_local-path-provisioner-56db8cbdb5-6lfgx_2a9d3a0c-e520-11e9-95fc-865370074a10!
ERROR: logging before flag.Parse: I1002 14:23:12.994573       1 controller.go:621] Started provisioner controller rancher.io/local-path_local-path-provisioner-56db8cbdb5-6lfgx_2a9d3a0c-e520-11e9-95fc-865370074a10!

create process timeout after 120 seconds

Followed the tutorial not changing a single thing...

Got this in the logs:
create process timeout after 120 seconds

And it keeps trying and trying to create a pvc volume...

I can see that there's no pv created too...

Do I need to create the folder /opt/local-path-provisioner manually? (even doing that it doesn't work)

Did anyone have the same issue?

Thanks!

Customizing Provisioner Name (e.g. to mock kubernetes.io/aws-ebs plugin)

Hello -
We have a use case where we would like to mimic the existence of AWS volume plugin on k3s, by using something like local storage path.

I noticed in the source code that the provisioner name supports a flag with the name PROVISIONER_NAME.

I've deployed the following local-path-provisioner deployment on k3s and it seems to be starting up successfully; the pod logs show the correct name as well.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aws-ebs-mock-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aws-ebs-mock-provisioner
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: aws-ebs-mock-provisioner
    spec:
      containers:
      - command:
        - local-path-provisioner
        - start
        - --config
        - /etc/config/config.json
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: PROVISIONER_NAME
          value: kubernetes.io/aws-ebs
        image: rancher/local-path-provisioner:v0.0.12
        imagePullPolicy: IfNotPresent
        name: local-path-provisioner
        volumeMounts:
        - mountPath: /etc/config/
          name: config-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      serviceAccount: local-path-provisioner-service-account
      serviceAccountName: local-path-provisioner-service-account
      volumes:
      - configMap:
          defaultMode: 420
          name: ebs-local-path-config
        name: config-volume

However, it doesn't seem that k3s is recognizing the plugin. PVCs are still showing the following event:

  Warning  ProvisioningFailed    104s (x182 over 46m)  persistentvolume-controller  no volume plugin matched

I am not sure if my understanding of the PROVISIONER_NAME flag is correct, or if this is a bug, or whether I need to take additional steps to register the volume with k3s using the name kubernetes.io/aws-ebs.

I would greatly appreciate any pointers!

Thank you

automatically create pv for pending pvc

I tried to run this command, but the PV is not automatically created for the pending PVC like on GKE.
I deployed local-path-provisioner via the Helm chart with
helm install --name=local-path-provisioner ./deploy/chart/

Thanks in advance.

helm install --name=redis stable/redis

kubectl get pvc

NAME                                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-redis-core-redis-ha-server-0   Pending                                                     2m30s

kubectl get sc

Name:                  local-path
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           cluster.local/local-path-provisioner
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>

kubectl get pod

local-path-provisioner-f856c74cf-jqv8v   1/1     Running   0          10m

Path Provisioner deletes volume from disk

After rebooting our development server it seems that the path provisioner completely deleted the mongodb volume directory. We are using Rancher 2.2.8 with Kubernetes 1.14.6

What can cause this? On our development server we use the mongodb replica set helm chart, but only one instance at the moment.

Here is the output from the local-path-provisioner which shows indeed that it got deleted

time="2019-10-07T11:04:15Z" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/opt/local-path-provisioner\"]}]}" 
time="2019-10-07T11:04:15Z" level=debug msg="Provisioner started" 
time="2019-10-07T11:09:21Z" level=info msg="Deleting volume pvc-8316611a-de0f-11e9-856d-e23a50295529 at dev:/opt/local-path-provisioner/pvc-8316611a-de0f-11e9-856d-e23a50295529" 
time="2019-10-07T11:09:24Z" level=info msg="Volume pvc-8316611a-de0f-11e9-856d-e23a50295529 has been deleted on dev:/opt/local-path-provisioner/pvc-8316611a-de0f-11e9-856d-e23a50295529" ```

Error: attempting to acquire leader lease local-path-storage/rancher.io-local-path...

Issue

When I create a kind Kubernetes cluster which uses rancher/local-path as the default StorageClass, a PV or PVC cannot be created, and the log of the Rancher provisioning controller reports the following leader election errors:

kc -n local-path-storage logs -f local-path-provisioner-7745554f7f-tm74b    
time="2020-04-21T15:00:23Z" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/var/local-path-provisioner\"]}]}" 
time="2020-04-21T15:00:23Z" level=debug msg="Provisioner started" 
ERROR: logging before flag.Parse: I0421 15:00:23.953416       1 leaderelection.go:187] attempting to acquire leader lease  local-path-storage/rancher.io-local-path...
ERROR: logging before flag.Parse: I0421 15:00:23.961715       1 leaderelection.go:196] successfully acquired lease local-path-storage/rancher.io-local-path
ERROR: logging before flag.Parse: I0421 15:00:23.962264       1 controller.go:572] Starting provisioner controller rancher.io/local-path_local-path-provisioner-7745554f7f-tm74b_d3eb09fa-83e0-11ea-aa04-9aae5601dfcc!
ERROR: logging before flag.Parse: I0421 15:00:23.962454       1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"local-path-storage", Name:"rancher.io-local-path", UID:"c36aade7-becd-403c-a21f-ad8d43e2ecac", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' local-path-provisioner-7745554f7f-tm74b_d3eb09fa-83e0-11ea-aa04-9aae5601dfcc became leader
ERROR: logging before flag.Parse: I0421 15:00:24.062761       1 controller.go:621] Started provisioner controller rancher.io/local-path_local-path-provisioner-7745554f7f-tm74b_d3eb09fa-83e0-11ea-aa04-9aae5601dfcc!

What could be the issue ?

Advice Required

Our setup: the worker nodes have an NFS mount shared across them (fetched from LDAP), e.g. a folder like /nfsdata. There is already a huge amount of data, in the range of petabytes, in this storage.

Is local-path-provisioner designed for such a use case? I assume not, as it only supports ReadWriteOnce mode and node affinity is set. Do you think there is another module which can work for our purpose?

disabling pv affinity

I'm using local-path-provisioner with a directory backed by gluster which is shared across all my nodes, but I can't have multiple pods with a shared PVC on multiple nodes because the PV has nodeAffinity.

Is it possible to disable the node affinity of the pv?

Persistent Volume name is dynamic (e.g. pvc-5f9522a8-b900-11e9-b3d4-005056a04e8b)

Since k8s local storage doesn't support dynamic provisioning yet, I found the local-path-provisioner from Rancher, which seems to fill this gap. Deploying the examples mentioned in the documentation, I see PVs generated with random names:
Example:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-exciting-jellyfish-rabbitmq-ha-0 Bound pvc-5f9522a8-b900-11e9-b3d4-005056a04e8b 8Gi RWO standard 8m49s
data-exciting-jellyfish-rabbitmq-ha-1 Bound pvc-8b201f83-b900-11e9-b3d4-005056a04e8b 8Gi RWO standard 7m36s
data-exciting-jellyfish-rabbitmq-ha-2 Bound pvc-b6835ba6-b900-11e9-b3d4-005056a04e8b 8Gi RWO standard 6m23s

On the node, the folder /opt/local-path-provisioner/pvc-e8ade5da-b2bf-11e9-a8b0-d43d7eed0a97/ was generated. It is possible to get the link from PVC to PV, but this makes things more complicated, especially when there are many PVCs and manually looking in the volume is required for debugging, or when exporting/importing backups.

I'd like to have a more human-readable volume name that I can specify in the PVC file, so that after the PV is created I will know the directory name where my application stores its files.

Should the controller run as a StatefulSet instead of a Deployment?

Citing from here:

https://arslan.io/2018/06/21/how-to-write-a-container-storage-interface-csi-plugin/

For the Controller plugin, we could deploy it as a StatefulSet. Most people associate StatefulSet with a persistent storage. But it’s more powerful. A StatefulSet also comes with scaling guarantees. This means we could use the Controller plugin as a StatefulSet and set the replicas field to 1. This would give us a stable scaling, that means that if the pod dies or we do an update, it never creates a second pod before the first one is fully shutdown and terminated. This is very important, because this guarantees that only a single copy of the Controller plugin will run.

What do you think @yasker ?

Local path provisioner is restarting with the following error

 F0907 17:44:36.445267       1 controller.go:647] leaderelection lost
goroutine 1 [running]:
github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog.stacks(0xc00046c300, 0xc000694500, 0x45, 0xf1)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog/glog.go:766 +0xb1
github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog.(*loggingT).output(0x2009ca0, 0xc000000003, 0xc0002269a0, 0x1f910d1, 0xd, 0x287, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog/glog.go:717 +0x303
github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog.(*loggingT).printf(0x2009ca0, 0x3, 0x12e0f79, 0x13, 0x0, 0x0, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog/glog.go:655 +0x14e
github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog.Fatalf(...)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog/glog.go:1145
github.com/rancher/local-path-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller.(*ProvisionController).Run.func2()
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:647 +0x5c
github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run.func1(0xc0000ce0c0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:148 +0x40
github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc0000ce0c0, 0x14c7de0, 0xc000171bc0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:157 +0x10f
github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection.RunOrDie(0x14c7e20, 0xc000046060, 0x14d4160, 0xc0000dcc60, 0x37e11d600, 0x2540be400, 0x77359400, 0xc00007f7c0, 0x1358910, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:166 +0x87
github.com/rancher/local-path-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller.(*ProvisionController).Run(0xc0000c2b60, 0xc00018e600)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:639 +0x36f
main.startDaemon(0xc00032db80, 0x5, 0x4)
	/go/src/github.com/rancher/local-path-provisioner/main.go:134 +0x793
main.StartCmd.func1(0xc00032db80)
	/go/src/github.com/rancher/local-path-provisioner/main.go:80 +0x2f
github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli.HandleAction(0x10fb200, 0x1359218, 0xc00032db80, 0xc00037e600, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli/app.go:487 +0x7c
github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli.Command.Run(0x12d6eca, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli/command.go:193 +0x925
github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli.(*App).Run(0xc00016d6c0, 0xc00003a0c0, 0x4, 0x4, 0x0, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli/app.go:250 +0x785
main.main()
	/go/src/github.com/rancher/local-path-provisioner/main.go:166 +0x2b

Any way to persist data?

It seems the default option on the Persistent Volume is "Delete". I suspect that is by design, but I was curious if there is a way to persist the data.
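
For reference, a hedged sketch of a custom StorageClass using the standard Kubernetes Retain reclaim policy, modeled on the ssd-local-path example earlier on this page; the class name is a placeholder:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-retain
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain

With reclaimPolicy: Retain, deleting the PVC leaves the PV (and its data directory on the node) in place for manual handling; this is standard Kubernetes behaviour rather than a feature specific to this provisioner.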

securityContext fsGroup has no effect

My setup:

kind version
v0.5.1
kind create cluster
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Test 1: emptyDir

cat << EOF | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: security-context-works
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
EOF

it works (2000):

kubectl exec -it security-context-works -- ls -la /data/
total 12
drwxr-xr-x    3 root     root          4096 Aug 28 12:12 .
drwxr-xr-x    1 root     root          4096 Aug 28 12:12 ..
drwxrwsrwx    2 root     2000          4096 Aug 28 12:12 demo

Test 2: rancher.io/local-path

cat << EOF | k apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
EOF
cat << EOF | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: security-context-fails
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: data-test
    persistentVolumeClaim:
      claimName: data-test
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: data-test
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
EOF

It fails (root):

kubectl exec -it security-context-fails -- ls -la /data/
total 12
drwxr-xr-x    3 root     root          4096 Aug 28 12:20 .
drwxr-xr-x    1 root     root          4096 Aug 28 12:20 ..
drwxrwxrwx    2 root     root          4096 Aug 28 12:20 demo

Any idea what is causing that? I was expecting the group to be 2000 for the demo directory.

Feature request: create/delete/resize_cmd support

In order to have a reliable setup, it does not make sense to have one big volume with a lot of PVCs as folders, because any single PVC could fill the whole disk.

Therefore I would like to have some additional options like this:


                {
                        "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                        "paths":["/opt/local-path-provisioner"],
                        "create_cmd":"btrfs subvolume create ${PATH}",
                        "delete_cmd":"btrfs subvolume delete ${PATH}"
                },
                {
                        "node":"yasker-lp-dev1",
                        "paths":["/opt/local-path-provisioner", "/data1"],
                        "create_cmd":"lvm create blabla ${PATH}",
                        "delete_cmd":"lvm delete blabla ${PATH}",
                        "resize_cmd":"lvm resize",
                        "available_cmd":"check_lvm_space.sh {SIZE}"
                },
                {
                        "node":"yasker-lp-dev3",
                        "paths":[],
                        "create_cmd":null,
                        "delete_cmd":null
                }

It would be great to have examples for btrfs and LVM, which are most commonly used. With variables for the PVC name, storage size, and so on, we could also ensure that no single volume would max out a single node.

This solution is very flexible and would also support any other filesystem like zfs.

Unknown Permission Issues

Hi guys

I've been using the lpp for some time, then I decided to try something new.

Use case: edge cluster with shared folders
Status: in vm simulation, using vbox shared folders

Configuration: lpp cm updated to point to shared folder /persistentvolume instead of /var/lib/rancher/k3s/storage

Issue: nginx can read files from the persistent volume configured on the shared folder without issues; prometheus can't write in the folder and crashes.

The only real debug I can get is the error log of prometheus which says only "I can't write so I'm panicking"
Full debug here
https://rancher-users.slack.com/archives/CGGQEHPPW/p1583445098193200?thread_ts=1583445098.193200&cid=CGGQEHPPW

Suggestion: additional documentation, helpful info, logs, configs about permissions necessary for lpp to work

Helm chart deploy fail for k3s

Error: Chart requires kubernetesVersion: >=1.12.0 which is incompatible with Kubernetes v1.14.1-k3s.4
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-20T04:49:16Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1-k3s.4", GitCommit:"52f3b42401c93c36467f1fd6d294a3aba26c7def", GitTreeState:"clean", BuildDate:"2019-04-15T22:13+00:00Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

be able to control image for volume provisioning

kubectl -n local-path-storage-applog describe pod create-pvc-536367fb-8185-11e9-a1e6-eeeeeeeeeeee :

            Message
  ----     ------   ----                ----                   -------
  Normal   Pulling  30s (x4 over 119s)  kubelet, 10.246.89.91  pulling image "busybox"
  Warning  Failed   29s (x4 over 118s)  kubelet, 10.246.89.91  Failed to pull image "busybox": rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: EOF
  Warning  Failed   29s (x4 over 118s)  kubelet, 10.246.89.91  Error: ErrImagePull
  Normal   BackOff  16s (x6 over 117s)  kubelet, 10.246.89.91  Back-off pulling image "busybox"
  Warning  Failed   4s (x7 over 117s)   kubelet, 10.246.89.91  Error: ImagePullBackOff
[et2448@Davids-Work-MacBook-Pro ~ (⎈ |icp-global-context:kube-system)]$ 

Would it be possible to pass args to control the full image URL (host/registry/tag)? We have to run through a proxy.

Not working for subPath

Thanks for providing this provisioner. I am setting up MySQL HA using the Kubernetes StatefulSet example.

I have tried this provisioner; it creates the dynamic PV and mounts it, but not in the DEFAULT provided directory, rather under this path: /var/lib/kubelet/pods/82d24112-fc50-11e8-90e7-005056b146f6/volume-subpaths.

I found that this happens if I use subPath; otherwise it works well.

Any idea how I can resolve this issue?

Default Node Affinity doesn't match on node created by AWS (provider Amazon EC2)

Basic Info:
Rancher Version: 2.3.2
Kubernetes Version: v1.16.3-rancher1-1
Provider: Amazon EC2

Brief Description:
A Pod which uses the PVC created by this module cannot be scheduled because the auto-generated PV's node affinity rule kubernetes.io/hostname = {{someNodeName}} doesn't match the actual value on the node.

How to reproduce

  • Spin up a cluster with provider is Amazon EC2
  • Install this module and try to run the example
  • Pod volume-test cannot be scheduled with this message:
0/4 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 3 node(s) had volume node affinity conflict.

More description
The Persistent Volume which is created by this module has a kubernetes.io/hostname value equal to the Private DNS value of the instance. But the kubernetes.io/hostname label actually has the instance's name as its value and cannot be edited.

Screenshots (not reproduced here): the AWS instance board, the node's labels, and the node affinity from the PV.

How to re-use existing PVC?

Hi,
I've got this working in my set up, where the PVC is created on the node, in a mounted block storage, located at /mnt/dev_primary_lon1

However, the PVC is created in a UUID-based folder:
/mnt/dev_primary_lon1/pvc-xxxxxxx

If the node is somehow destroyed, the data should be safe as it's on the block storage. But if I were to spin up a new node, is there any way to point it to the existing PVC?

Or to simply create a pvc with a defined name in the first place?

Regards,
Andy
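
One way to reattach existing data is a hedged sketch using only core Kubernetes objects (not a feature of this provisioner): create a PV by hand that points at the existing directory on the replacement node, and a PVC that binds to it via volumeName. The PV name, capacity, and node name below are placeholders; the empty storageClassName keeps the dynamic provisioner out of the way.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  hostPath:
    path: /mnt/dev_primary_lon1/pvc-xxxxxxx
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - new-node-name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: existing-data-pv
  resources:
    requests:
      storage: 10Gi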

provisioner doesn't like when nodes go away, VolumeFailedDelete

I'm running on some bare metal servers, and if one of them goes away (effectively permanently), PVs and PVCs don't get reaped, so the pods (created as a statefulset) can't recover.

This may be okay. Let me know if you'd like a reproducible example, or if it's a conceptual thing.

Here's an example I just ran across. It's a STS of Elasticsearch pods. Having persistent data is great, but if a server goes away, the pod just sits in purgatory.

$ kubectl describe pod esdata-4 -n kube-logging
...
Status:               Pending
...
    Mounts:
      /var/lib/elasticsearch from esdata-data (rw,path="esdata_data")
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  esdata-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  esdata-data-esdata-4
    ReadOnly:   false
...
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  3m43s (x45 over 39m)  default-scheduler  0/5 nodes are available: 5 node(s) had volume node affinity conflict.


$ kubectl describe pv pvc-3628fa90-9e11-11e9-83ca-d4bed9ad776a
...
Annotations:       pv.kubernetes.io/provisioned-by: rancher.io/local-path
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-path
Status:            Released
Claim:             kube-logging/esdata-data-esdata-4
Reclaim Policy:    Delete
Node Affinity:     
  Required Terms:  
    Term 0:        kubernetes.io/hostname in [DEADHOST]
...
Events:
  Type     Reason              Age                  From                                                                                               Message
  ----     ------              ----                 ----                                                                                               -------
  Warning  VolumeFailedDelete  37s (x4 over 2m57s)  rancher.io/local-path_local-path-provisioner-f7986dc46-cg8nl_7ff46af3-9e4f-11e9-a883-fa15f9dfdfe0  failed to delete volume pvc-3628fa90-9e11-11e9-83ca-d4bed9ad776a: failed to delete volume pvc-3628fa90-9e11-11e9-83ca-d4bed9ad776a: pods "delete-pvc-3628fa90-9e11-11e9-83ca-d4bed9ad776a" not found

I can delete it manually, just kubectl delete pv [pvid]. I then have to create the pv and pvc, also manually, before the pod is happy. I assumed there'd be a timeout reaping PVCs from dead nodes.

cc @tamsky in case he's come across this, as I see he's been around this repo.
