The NVIDIA device plugin for Kubernetes is a Daemonset that allows you to automatically:
- Expose the number of GPUs on each node of your cluster
- Keep track of the health of your GPUs
- Run GPU-enabled containers in your Kubernetes cluster
This repository contains NVIDIA's official implementation of the Kubernetes device plugin.
The list of prerequisites for running the NVIDIA device plugin is described below:
- NVIDIA drivers ~= 361.93
- nvidia-docker version > 2.0 (see how to install it and its prerequisites)
- docker configured with nvidia as the default runtime
- Kubernetes version >= 1.10
The device plugin uses the NVIDIA NVML library to collect GPU information on each node and report it to the kubelet. We found that the stock plugin collects only each GPU's device ID and health status and reports them to Kubernetes. We therefore modified the data structure in getDevices in nvidia.go so that it reports a fake number of GPUs for the current node. To make this work, we store the original device information inside the plugin and modify the related Allocate function and health-check logic.
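The fake-device expansion can be sketched roughly as follows. This is a simplified illustration, not the actual code from nvidia.go: the Device struct, the expandDevices helper, the per-GPU split factor, and the fake-ID scheme are all assumptions made for the example.

```go
package main

import "fmt"

// Device is a simplified stand-in for the kubelet device-plugin API's Device type.
type Device struct {
	ID     string
	Health string
}

// expandDevices splits each physical GPU into `factor` virtual devices with
// fake IDs, and records a map from each fake ID back to the real device ID
// so the allocation step can later translate requests.
func expandDevices(realIDs []string, factor int) ([]Device, map[string]string) {
	var devs []Device
	fakeToReal := make(map[string]string)
	for _, real := range realIDs {
		for i := 0; i < factor; i++ {
			fakeID := fmt.Sprintf("%s-vgpu-%d", real, i)
			devs = append(devs, Device{ID: fakeID, Health: "Healthy"})
			fakeToReal[fakeID] = real
		}
	}
	return devs, fakeToReal
}

func main() {
	devs, m := expandDevices([]string{"GPU-aaaa", "GPU-bbbb"}, 4)
	fmt.Println(len(devs))            // 2 physical GPUs x 4 = 8 fake devices
	fmt.Println(m["GPU-aaaa-vgpu-2"]) // maps back to GPU-aaaa
}
```

The kubelet only ever sees the fake device list, so a single physical GPU can be requested by several pods at once.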
In the Allocate function, we found that allocating specific GPUs is in fact a simple process: the plugin returns an environment variable, CUDA_VISIBLE_DEVICES, to Kubernetes. We use the locally stored original GPU device IDs, passing them back to Kubernetes to make the whole procedure work. Kubernetes is never aware of how many, or which, physical devices are actually in use, which greatly reduces the complexity of the whole job.
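In outline, the modified allocation step looks like this. This is a hedged sketch: the real Allocate implements the kubelet device-plugin gRPC API, and the allocateEnv helper and fakeToReal map here are illustrative assumptions.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// allocateEnv translates the fake device IDs requested by Kubernetes back to
// the real GPU device IDs stored locally, and returns the environment variable
// used to restrict GPU visibility in the container. Kubernetes itself never
// learns which physical devices are actually used.
func allocateEnv(requested []string, fakeToReal map[string]string) (map[string]string, error) {
	seen := make(map[string]bool)
	var reals []string
	for _, fake := range requested {
		real, ok := fakeToReal[fake]
		if !ok {
			return nil, fmt.Errorf("unknown device %q", fake)
		}
		if !seen[real] { // several vGPUs may share one physical GPU
			seen[real] = true
			reals = append(reals, real)
		}
	}
	sort.Strings(reals)
	return map[string]string{"CUDA_VISIBLE_DEVICES": strings.Join(reals, ",")}, nil
}

func main() {
	fakeToReal := map[string]string{
		"GPU-aaaa-vgpu-0": "GPU-aaaa",
		"GPU-aaaa-vgpu-1": "GPU-aaaa",
		"GPU-bbbb-vgpu-0": "GPU-bbbb",
	}
	env, _ := allocateEnv([]string{"GPU-aaaa-vgpu-0", "GPU-aaaa-vgpu-1", "GPU-bbbb-vgpu-0"}, fakeToReal)
	fmt.Println(env["CUDA_VISIBLE_DEVICES"]) // GPU-aaaa,GPU-bbbb
}
```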
- Configurable vGPU partition method
- More flexible node relabel mechanism
- More detailed and consistent logs
The following steps need to be executed on all your GPU nodes. This README assumes that the NVIDIA drivers and nvidia-docker have been installed.
You will need to enable the nvidia runtime as your default runtime on your node.
We will be editing the docker daemon config file, which is usually present at /etc/docker/daemon.json:
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
If runtimes is not already present, head to the install page of nvidia-docker.
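After editing the config, it is worth checking the JSON syntax before restarting Docker, since a malformed daemon.json prevents the daemon from starting. A small sketch, assuming a systemd-managed host; python3 is used here only as a convenient JSON syntax checker, and any JSON linter works as well:

```shell
# Validate the edited config before restarting Docker. A malformed
# daemon.json prevents the daemon from starting at all.
CONFIG="${DOCKER_CONFIG_FILE:-/etc/docker/daemon.json}"
if python3 -m json.tool "$CONFIG" > /dev/null 2>&1; then
    echo "daemon.json is valid JSON"
    sudo systemctl restart docker   # assumes a systemd-managed host
else
    echo "daemon.json is missing or malformed" >&2
fi
```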
Once you have enabled this option on all the GPU nodes you wish to use, you can then enable GPU support in your cluster by deploying the following Daemonset:
$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/1.0.0-beta/nvidia-device-plugin.yml
NVIDIA GPUs can now be consumed via container level resource requirements using the resource name nvidia.com/gpu:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:9.0-devel
      resources:
        limits:
          nvidia.com/gpu: 2 # requesting 2 GPUs
    - name: digits-container
      image: nvidia/digits:6.0
      resources:
        limits:
          nvidia.com/gpu: 2 # requesting 2 GPUs
WARNING: if you don't request GPUs when using the device plugin with NVIDIA images, all the GPUs on the machine will be exposed inside your container.
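The warning above follows from the visibility mechanism described earlier: a container only sees the devices listed in CUDA_VISIBLE_DEVICES, and when nothing was allocated the variable is unset, so nothing restricts access. A minimal illustration of reading the variable inside a container (the visibleGPUs helper and its empty-means-unrestricted convention are assumptions for this sketch):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// visibleGPUs returns the device IDs a containerized process may use,
// following the CUDA_VISIBLE_DEVICES convention. An unset or empty
// variable is treated here as "no restriction recorded".
func visibleGPUs(env string) []string {
	if env == "" {
		return nil // no restriction recorded
	}
	return strings.Split(env, ",")
}

func main() {
	gpus := visibleGPUs(os.Getenv("CUDA_VISIBLE_DEVICES"))
	if gpus == nil {
		fmt.Println("CUDA_VISIBLE_DEVICES not set: all GPUs on the node are visible")
	} else {
		fmt.Println("restricted to:", gpus)
	}
}
```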
Please note that:
- the device plugin feature is beta as of Kubernetes v1.11.
- the NVIDIA device plugin is still considered beta and is missing
- More comprehensive GPU health checking features
- GPU cleanup features
- ...
- support will only be provided for the official NVIDIA device plugin.
The next sections are focused on building the device plugin and running it.
Option 1, pull the prebuilt image from Docker Hub:
$ docker pull nvidia/k8s-device-plugin:1.0.0-beta
Option 2, build without cloning the repository:
$ docker build -t nvidia/k8s-device-plugin:1.0.0-beta https://github.com/NVIDIA/k8s-device-plugin.git#1.0.0-beta
Option 3, if you want to modify the code:
$ git clone https://github.com/NVIDIA/k8s-device-plugin.git && cd k8s-device-plugin
$ git checkout 1.0.0-beta
$ docker build -t nvidia/k8s-device-plugin:1.0.0-beta .
$ docker run --security-opt=no-new-privileges --cap-drop=ALL --network=none -it -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins nvidia/k8s-device-plugin:1.0.0-beta
$ kubectl create -f nvidia-device-plugin.yml
$ C_INCLUDE_PATH=/usr/local/cuda/include LIBRARY_PATH=/usr/local/cuda/lib64 go build
$ ./k8s-device-plugin