

jupyterhub-deploy-swarm

This repository contains the practical research project for my master's thesis. It is based on jupyterhub-deploy-docker. The main difference is that this project (jupyterhub-fhjoanneum) uses SwarmSpawner, whereas jupyterhub-deploy-docker uses DockerSpawner to create notebook servers in a Docker environment. DockerSpawner can only deploy plain containers; it does not support the Docker service concept introduced in Docker 1.12.0, which is why this project uses SwarmSpawner instead.

Please also check out the wiki for more information.

Many thanks to all contributors who made this possible.

The documentation will be updated over time.

Overview

Application Overview

Authenticator

This project is using the LDAPAuthenticator for jupyterhub.

Spawner

As mentioned above, the project uses SwarmSpawner as the JupyterHub spawner class, which creates Docker services in a Docker swarm. A working Docker Swarm environment is required to run this jupyterhub-deploy-swarm example.
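Before running the project, it is worth confirming that the swarm is actually up. The commands below are a minimal sketch using standard Docker CLI commands; node names, tokens, and addresses will of course differ per setup:

```shell
# Initialize a swarm on the main node (skip if one already exists).
docker swarm init

# On each additional host, join the swarm using the token and address
# printed by `docker swarm init`, e.g.:
#   docker swarm join --token <worker-token> <manager-ip>:2377

# Verify from the manager node that all hosts joined and are Ready.
docker node ls
```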

nbgrader

nbgrader is also installed on each spawned notebook server. While spawning a server, the spawner distinguishes between teachers and students and uses a different image for each group. The difference between these images is that teachers can create and release assignments, whereas students can work on assignments and submit them back to the teacher.

Persistent Storage

The basic approach to storing data with Docker is to use Docker volumes. The problem with this solution is that a volume is only available on the host that created it; there is currently no way to share plain volumes across a Docker Swarm. That is why this project uses Docker NFS volumes.

A single NFS server is started in a container on the main host. Every other host can communicate with this NFS container and can therefore create Docker NFS volumes.
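An NFS-backed volume of this kind can be created with Docker's built-in local volume driver. The snippet below is a sketch only; the server address, export path, and volume name are placeholders, not values from this project's Makefile:

```shell
# Create a volume backed by the NFS export instead of local host storage.
# Replace the address and export path with those of your NFS container.
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.0.10,rw \
  --opt device=:/exports/home \
  jupyterhub-user-data
```

Because the volume definition only references the NFS server's address, any swarm node can create an identically named volume and mount the same data.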

Usage

The whole project uses a Makefile to perform actions such as creating a new Docker volume or building a Docker image. Here is a detailed list of how to use the Makefile:

make <command>

Please make sure to clone this repository on the main node of a Docker Swarm. Then modify jupyterhub_config.py and execute make run to start everything. This triggers docker pull, which loads the walki12/jupyterhub Docker image from Docker Hub, creates a new overlay network for the application and the Docker volumes for persistent storage, and starts the NFS container and the jupyterhub service.
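The steps that make run performs can be sketched as plain Docker commands. This is an illustration of the sequence described above, not the exact Makefile contents; the network name and service flags are assumptions:

```shell
# Fetch the hub image from Docker Hub.
docker pull walki12/jupyterhub

# Create an overlay network so hub and notebook services can communicate
# across swarm nodes (network name here is hypothetical).
docker network create --driver overlay jupyterhub-network

# ... create the NFS container and the NFS-backed volumes ...

# Start JupyterHub as a swarm service attached to that network.
docker service create \
  --name jupyterhub \
  --network jupyterhub-network \
  walki12/jupyterhub
```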

Main Task Commands

These commands are used to control the whole project (NFS, jupyterhub).

  • run
    • removes all running Docker instances (network/volumes/container/service) and creates and starts them again.
  • start
    • starts the NFS container and the jupyterhub service. (Volumes and Network must be available)
  • stop
    • stops the NFS container and removes the jupyterhub service. (Volumes and Network are still available)
  • remove
    • removes all running Docker instances (network/volumes/container/service)
  • restart  - runs stop and start.
  • rerun  - runs remove and run
  • rebuild  - runs remove, jupyterhub_build, jupyterhub_push, jupyterhub_updatenodes and run
    • it builds the jupyterhub Docker Image again and restarts the application with the new image.

NFS Task Commands

NFS-specific commands, which only affect the NFS container and/or the underlying volumes.

  • nfs_run  - calls nfs_remove, nfs_start and nfs_config
  • nfs_config
    • connects into the NFS Container and executes commands (create group/user)
  • nfs_start  - starts the NFS container via docker start <nfs-container-name> (the container must already exist in a stopped state)
  • nfs_stop
    • stops the NFS Container.
  • nfs_remove
    • removes the NFS Container and the Docker NFS Container Volumes
  • nfs_restart  - runs nfs_stop and nfs_start (note: nfs_config is not called)

JupyterHub Task Commands

JupyterHub-specific commands, which only affect the jupyterhub service and/or the underlying volumes.

  • jupyterhub_run  - calls jupyterhub_remove, creates the volume and calls jupyterhub_start
  • jupyterhub_start
    • starts the jupyterhub Service (Volumes/Network must be available)
  • jupyterhub_remove
    • removes the jupyterhub service and the Docker Volumes of the service
  • jupyterhub_restart
  • jupyterhub_build
    • builds a jupyterhub Docker Image
  • jupyterhub_push  - pushes the Docker Image of jupyterhub to Docker Hub

Contributors

jtyberg, minrk, moisei, parente, wakonp, willingc


jupyterhub-deploy-swarm's Issues

Hardcoded NFS IPs

I thought I'd be able to just clone and make run as the docs say, but it looks like (I've only just begun to dig) the NFS container (and jupyterhub-network) don't have the IP addresses you've put in .env, so the hub container can't start.
Sadly, Docker Swarm's error reporting is about as unhelpful as possible; the only state info I can derive is from:

dow184@TOWER-SL:~/src/jupyterhub/jupyterhub-deploy-swarm$ docker service scale jupyterhub_jupyterhub=1
jupyterhub_jupyterhub scaled to 1
overall progress: 0 out of 1 tasks 
overall progress: 0 out of 1 tasks 
overall progress: 0 out of 1 tasks 
overall progress: 0 out of 1 tasks 
overall progress: 0 out of 1 tasks 
overall progress: 0 out of 1 tasks 
overall progress: 0 out of 1 tasks 
1/1: starting container failed: error while mounting volume '/var/lib/docker/vo… 
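The truncated mount error in the log above can usually be recovered in full. A sketch using standard Docker CLI commands, with the service name taken from the log:

```shell
# Show the full, untruncated error message for each failed task.
docker service ps --no-trunc jupyterhub_jupyterhub

# Inspect the most recent task as JSON for the detailed error state.
docker inspect $(docker service ps -q jupyterhub_jupyterhub | head -n 1)
```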

Question about NFS server

Hi @wakonp, I see that you are still updating the documentation, but I was so excited by your solution that I already tried it myself :) I have some basic questions about it:

  • How is it configured so that the NFS server container running on the main host can be seen and accessed by the containers on the other node host(s)? In the Makefile I could not find anything that says this container should run in a certain Docker network.
  • Have you ever encountered a permission issue when mounting the NFS filesystem to /exports inside the NFS server container? My situation is that the filesystem I want to mount does not have the no_root_squash attribute set, which makes it a bit tricky. I wonder if you have encountered such a scenario.

Looking for Dockerfiles of the walki12/* Images

Hello!
Where can I find the Dockerfiles of all the walki12 images?
I can't seem to start the NFS server, and I'd like to see what's going on in that image. Besides, I also want the ability to make changes to the image files.
Hope you can help me a bit further,

Anouar Manders
