
kubernetes-nfs-volume-on-gke's People

Contributors

foo0x29a, guyromb, lvnilesh, mappedinn


kubernetes-nfs-volume-on-gke's Issues

Stuck at step 6 when creating PV and PVC for pods

Hi @mappedinn, please help me with this issue. I'm following your instructions to set up NFS, but now I'm stuck at step 6:

# Creation of NFS volume (PV and PVC)
kubectl create -f 04-pv-and-pvc-nfs.yml

The PVC never binds to the PV; its status stays at Pending. Here is the error I got:

Events:
  FirstSeen	LastSeen	Count	From				SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----				-------------	--------	------			-------
  31s		8s		3	persistentvolume-controller			Warning		ProvisioningFailed	Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported

So ReadWriteMany is not accepted here, but that's the only access mode that enables NFS sharing...

I'm using GKE v1.7.6.
Update: I tried with ReadWriteOnce and it works.
The busybox pod can write to the mounted folder, but the deployment cannot scale across nodes, because a ReadWriteOnce volume can only be mounted by a single node.

Thanks!
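
For anyone who lands here: the error comes from dynamic provisioning. A PVC with no storageClassName is assigned GKE's default "standard" StorageClass, which tries to provision a GCE persistent disk, and GCE PDs only support ReadWriteOnce and ReadOnlyMany. Setting storageClassName: "" on the claim disables dynamic provisioning and lets it bind to the pre-created NFS PV, which does accept ReadWriteMany (the Pulumi script in the issue below does exactly this). A minimal sketch in the same Pulumi style (names and the server IP are illustrative; in the repo's YAML the equivalent fix is adding storageClassName: "" to the PVC spec in 04-pv-and-pvc-nfs.yml):

from pulumi_kubernetes.core.v1 import PersistentVolume, PersistentVolumeClaim

# Illustrative names and server IP; not taken from the repo.
nfs_pv = PersistentVolume(
    "nfs",
    metadata={"name": "nfs"},
    spec={
        "capacity": {"storage": "1Gi"},
        "accessModes": ["ReadWriteMany"],  # fine for an NFS-backed PV
        "nfs": {"server": "10.3.240.20", "path": "/"},  # NFS service ClusterIP
    },
)

nfs_pvc = PersistentVolumeClaim(
    "nfs",
    metadata={"name": "nfs"},
    spec={
        "accessModes": ["ReadWriteMany"],
        "storageClassName": "",  # "" = bind to an existing PV, no provisioning
        "resources": {"requests": {"storage": "1Gi"}},
    },
)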

NFS cannot be used in GKE AutoPilot clusters

Is there any way to run the NFS server without configuring it as privileged?

Error: admission webhook "validation.gatekeeper.sh" denied the request: [denied by autogke-disallow-privilege] container <nfs> is privileged; not allowed in Autopilot. Requesting user: <redacted> and groups: <["system:authenticated"]>
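
Not from this repo, but one possible workaround sketch: Autopilot refuses privileged pods outright, while its managed Filestore CSI driver provides ReadWriteMany volumes with no in-cluster NFS server at all. Assuming the preinstalled standard-rwx StorageClass (provisioner filestore.csi.storage.gke.io), a claim in the same Pulumi style as the script in the next issue could look like this:

from pulumi_kubernetes.core.v1 import PersistentVolumeClaim

# Hypothetical Autopilot-friendly replacement for the privileged NFS server:
# let the managed Filestore CSI driver provision a ReadWriteMany volume.
shared_pvc = PersistentVolumeClaim(
    "shared-data",
    metadata={"name": "shared-data"},
    spec={
        "accessModes": ["ReadWriteMany"],
        "storageClassName": "standard-rwx",  # Filestore-backed class on GKE
        # Basic-tier Filestore instances start at 1Ti; smaller requests are
        # rounded up, so size (and budget) accordingly.
        "resources": {"requests": {"storage": "1Ti"}},
    },
)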

Pulumi

For those who might be interested, here is a Pulumi (https://www.pulumi.com/) script. I'm not sure if you want to integrate it into your repo.

# Massively based on https://github.com/mappedinn/kubernetes-nfs-volume-on-gke
import subprocess

import pulumi
from pulumi import ResourceOptions

from pulumi_gcp.compute import Disk
from pulumi_kubernetes.apps.v1 import Deployment
from pulumi_kubernetes.core.v1 import (
    Service,
    PersistentVolume,
    PersistentVolumeClaim,
)

nfs_size = "1Gi"

# Setup default gcloud first
# gcloud auth login
# gcloud config set project <YOUR_GCP_PROJECT_HERE>
# gcloud auth application-default login

########################################################################################
# Create a GCP persistent disk to be used by the NFS server

project_id = subprocess.run(
    ["gcloud", "config", "get-value", "project"], capture_output=True, text=True
).stdout.strip()

zone_id = subprocess.run(
    ["gcloud", "config", "get-value", "compute/zone"], capture_output=True, text=True
).stdout.strip()

nfs_disk_name = "gce-nfs-disk"
nfs_disk = Disk(
    nfs_disk_name,
    name=nfs_disk_name,
    size=int(nfs_size.replace("Gi", "")),  # Disk size is an integer number of GB
    project=project_id,
    zone=zone_id,
)

########################################################################################
# Create NFS server deployment with persistent disk

nfs_app_name = "nfs-server"
nfs_app_labels = {"role": nfs_app_name}
nfs_app_image = "gcr.io/google_containers/volume-nfs:0.8"

nfs_volumes = [
    # FOR GCE
    {
        "name": nfs_disk_name,
        "gcePersistentDisk": {"pdName": nfs_disk_name, "fsType": "ext4"},
    },
    # FOR LOCAL
    # {"name": "nfs-disk", "hostPath": {"path": "/tmp", "type": ""}},
]

nfs_deployment = Deployment(
    nfs_app_name,
    metadata={"name": nfs_app_name},
    spec={
        "replicas": 1,
        "selector": {"match_labels": nfs_app_labels},
        "revisionHistoryLimit": 5,
        "template": {
            "metadata": {"labels": nfs_app_labels},
            "spec": {
                "containers": [
                    {
                        "name": nfs_app_name,
                        "image": nfs_app_image,
                        "ports": [
                            {"name": "nfs", "containerPort": 2049},
                            {"name": "mountd", "containerPort": 20048},
                            {"name": "rpcbind", "containerPort": 111},
                        ],
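                        # The NFS server container runs privileged; this is
                        # the setting Autopilot rejects (see the Autopilot
                        # issue above).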
                        "securityContext": {"privileged": True},
                        "volumeMounts": [
                            {"mountPath": "/exports", "name": nfs_disk_name}
                        ],
                    }
                ],
                "volumes": nfs_volumes,
            },
        },
    },
    opts=ResourceOptions(depends_on=[nfs_disk]),
)
pulumi.export("nfs_server_pod", nfs_deployment.metadata["name"])

########################################################################################
# Create NFS service to have a fixed IP which is independent from the ephemeral node IP

nfs_service = Service(
    nfs_app_name,
    metadata={"name": nfs_app_name},
    spec={
        "selector": nfs_deployment.spec["template"]["metadata"]["labels"],
        "ports": [
            {"name": "nfs", "port": 2049},
            {"name": "mountd", "port": 20048},
            {"name": "rpcbind", "port": 111},
        ],
    },
)
nfs_service_ip = nfs_service.spec.apply(
    lambda v: v["cluster_ip"] if "cluster_ip" in v else None
)
pulumi.export("nfs_service_ip", nfs_service_ip)

########################################################################################
# Create NFS PersistentVolume and PersistentVolumeClaim to be used by pods

nfs_persistent_volume = PersistentVolume(
    nfs_app_name,
    metadata={"name": nfs_app_name},
    spec={
        "capacity": {"storage": nfs_size},
        "accessModes": ["ReadWriteMany"],
        "nfs": {"server": nfs_service_ip, "path": "/"},
    },
    opts=ResourceOptions(depends_on=[nfs_deployment]),
)

nfs_persistent_volume_claim = PersistentVolumeClaim(
    nfs_app_name,
    metadata={"name": nfs_app_name},
    spec={
        "accessModes": ["ReadWriteMany"],
        "storageClassName": "",
        "resources": {"requests": {"storage": nfs_size}},
    },
    opts=ResourceOptions(depends_on=[nfs_persistent_volume]),
)

########################################################################################
# Create a test pod to see if it's all working

app_name = "nfs-busybox"
app_labels = {"name": app_name}
app_image = "busybox"
test_deployment = Deployment(
    app_name,
    metadata={"name": app_name},
    spec={
        "replicas": 1,
        "selector": {"match_labels": app_labels},
        "revisionHistoryLimit": 0,
        "template": {
            "metadata": {"labels": app_labels},
            "spec": {
                "containers": [
                    {
                        "name": app_name,
                        "image": app_image,
                        "imagePullPolicy": "IfNotPresent",
                        "command": [
                            "/bin/sh",
                            "-c",
                            "while true; do date >> /mnt/dates.txt; sleep 5; done",
                        ],
                        "volumeMounts": [
                            {"mountPath": "/mnt", "name": "nfs-server-pvc"}
                        ],
                    }
                ],
                "volumes": [
                    {
                        "name": "nfs-server-pvc",
                        "persistentVolumeClaim": {"claimName": nfs_app_name},
                    },
                ],
            },
        },
    },
    opts=ResourceOptions(
        depends_on=[
            nfs_deployment,
            nfs_service,
            nfs_persistent_volume,
            nfs_persistent_volume_claim,
        ]
    ),
)
pulumi.export("test_server_pod", test_deployment.metadata["name"])

Question: performance?

Hi, are you using this in production? Have you run any benchmarks or fine-tuned anything?

Use service IP instead of internal DNS

Thank you so much for documenting all this! It helped to finally set up an NFS server after all other instructions failed.

One point I noticed: you're using the internal DNS name for the PV. This somewhat negates the point of setting up a Service for the NFS server. The whole setup works fine if you use the ClusterIP of the Service, but the PV YAML obviously needs editing then.
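
To make that edit concrete, here is a minimal sketch using the official kubernetes Python client (an assumption on my side; kubectl works just as well) to read the ClusterIP that then replaces the DNS name in the PV YAML:

from kubernetes import client, config

# Assumes a kubeconfig pointing at the cluster and the service living in the
# default namespace; prints the ClusterIP for the PV's nfs.server field.
config.load_kube_config()
svc = client.CoreV1Api().read_namespaced_service(
    name="nfs-server", namespace="default"
)
print(svc.spec.cluster_ip)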

How to use NFS in a cluster with multiple VMs?

In this example you used a cluster with a single VM. Is there a way to use NFS with more than one VM?
I ask because, after some searching, I found that a GCE persistent disk attached in read/write mode can only be attached to a single VM and cannot be shared with others.
