
Jupyter Notebook Python, Spark, Mesos Stack

What it Gives You

  • Jupyter Notebook 4.1.x
  • Conda Python 3.x and Python 2.7.x environments
  • pyspark, pandas, matplotlib, scipy, seaborn, scikit-learn pre-installed
  • Spark 1.6.0 for use in local mode or to connect to a cluster of Spark workers
  • Mesos client 0.22 binary that can communicate with a Mesos master
  • Unprivileged user jovyan (uid=1000, configurable, see options) in group users (gid=100) with ownership over /home/jovyan and /opt/conda
  • tini as the container entrypoint and start-notebook.sh as the default command
  • A start-singleuser.sh script for use as an alternate command that runs a single-user instance of the Notebook server, as required by JupyterHub
  • Options for HTTPS, password auth, and passwordless sudo

Basic Use

The following command starts a container with the Notebook server listening for HTTP connections on port 8888 without authentication configured.

docker run -d -p 8888:8888 jupyter/pyspark-notebook

Using Spark Local Mode

This configuration is nice for using Spark on small, local data.

  1. Run the container as shown above.
  2. Open a Python 2 or 3 notebook.
  3. Create a SparkContext configured for local mode.

For example, the first few cells in the notebook might read:

import pyspark
sc = pyspark.SparkContext('local[*]')

# do something to prove it works
rdd = sc.parallelize(range(1000))
rdd.takeSample(False, 5)
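
Only one SparkContext can be active per kernel, so if you later want to reconfigure the context (for example, to point it at a cluster as described below), stop the local one first:

# release the local executors before creating a new context
sc.stop()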

Connecting to a Spark Cluster on Mesos

This configuration allows your compute cluster to scale with your data.

  1. Deploy Spark on Mesos.
  2. Configure each slave with the --no-switch_user flag or create the jovyan user on every slave node.
  3. Ensure Python 2.x and/or 3.x and any Python libraries you wish to use in your Spark lambda functions are installed on your Spark workers.
  4. Run the Docker container with --net=host in a location that is network addressable by all of your Spark workers. (This is a Spark networking requirement.)
    • NOTE: When using --net=host, you must also use the flags --pid=host -e TINI_SUBREAPER=true. See jupyter#64 for details.
  5. Open a Python 2 or 3 notebook.
  6. Create a SparkConf instance in a new notebook pointing to your Mesos master node (or Zookeeper instance) and Spark binary package location.
  7. Create a SparkContext using this configuration.

For example, the first few cells in a Python 3 notebook might read:

import os
# make sure pyspark tells workers to use python3 not 2 if both are installed
os.environ['PYSPARK_PYTHON'] = '/usr/bin/python3'

import pyspark
conf = pyspark.SparkConf()

# point to mesos master or zookeeper entry (e.g., zk://10.10.10.10:2181/mesos)
conf.setMaster("mesos://10.10.10.10:5050")
# point to spark binary package in HDFS or on local filesystem on all slave
# nodes (e.g., file:///opt/spark/spark-1.6.0-bin-hadoop2.6.tgz)
conf.set("spark.executor.uri", "hdfs://10.122.193.209/spark/spark-1.6.0-bin-hadoop2.6.tgz")
# set other options as desired
conf.set("spark.executor.memory", "8g")
conf.set("spark.core.connection.ack.wait.timeout", "1200")

# create the context
sc = pyspark.SparkContext(conf=conf)

# do something to prove it works
rdd = sc.parallelize(range(100000000))
rdd.sumApprox(3)

To use Python 2 in the notebook and on the workers, change the PYSPARK_PYTHON environment variable to point to the location of the Python 2.x interpreter binary. If you leave this environment variable unset, it defaults to python.
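
For instance, a minimal sketch, assuming the Python 2 interpreter on your workers lives at /usr/bin/python (adjust the path to match your cluster):

import os
# assumption: Python 2.x is at /usr/bin/python on every Spark worker
os.environ['PYSPARK_PYTHON'] = '/usr/bin/python'
# set this before creating the SparkContext so the executors pick it up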

Of course, all of this can be hidden in an IPython kernel startup script, but "explicit is better than implicit." :)
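
As a rough sketch, IPython runs every .py file in the profile's startup directory, so a hypothetical file like the one below (reusing the placeholder addresses from the example above) would give every notebook a ready-made sc:

# ~/.ipython/profile_default/startup/00-spark-context.py (hypothetical file name)
import os
import pyspark

# same assumptions as above: Python 3 on the workers, a Mesos master at
# 10.10.10.10, and the Spark binary package in HDFS
os.environ['PYSPARK_PYTHON'] = '/usr/bin/python3'

conf = pyspark.SparkConf()
conf.setMaster("mesos://10.10.10.10:5050")
conf.set("spark.executor.uri", "hdfs://10.122.193.209/spark/spark-1.6.0-bin-hadoop2.6.tgz")

sc = pyspark.SparkContext(conf=conf)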

Connecting to a Spark Cluster in Standalone Mode

Connecting to a Spark cluster in standalone mode requires the following steps:

  1. Verify that the Docker image (check the Dockerfile) and the Spark cluster being deployed run the same version of Spark.
  2. Deploy Spark in standalone mode.
  3. Run the Docker container with --net=host in a location that is network addressable by all of your Spark workers. (This is a Spark networking requirement.)
    • NOTE: When using --net=host, you must also use the flags --pid=host -e TINI_SUBREAPER=true. See jupyter#64 for details.
  4. The language-specific instructions are almost the same as for Mesos above; the only difference is that the master URL now looks something like spark://10.10.10.10:7077, as shown in the sketch below.
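
For example, the only change to the Mesos snippet above is the master URL (assuming a standalone master at 10.10.10.10):

import pyspark

conf = pyspark.SparkConf()
# point to the standalone master instead of mesos:// or zk://
# (spark.executor.uri is Mesos-specific and is not needed here)
conf.setMaster("spark://10.10.10.10:7077")

sc = pyspark.SparkContext(conf=conf)

# do something to prove it works
sc.parallelize(range(1000)).count()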

Notebook Options

You can pass Jupyter command line options through the start-notebook.sh command when launching the container. For example, to set the base URL of the notebook server you might do the following:

docker run -d -p 8888:8888 jupyter/pyspark-notebook start-notebook.sh --NotebookApp.base_url=/some/path

You can sidestep the start-notebook.sh script entirely by specifying some other command. If you do, the NB_UID and GRANT_SUDO features documented below will not work. See the Docker Options section for details.

Docker Options

You may customize the execution of the Docker container and the Notebook server it contains with the following optional arguments.

  • -e PASSWORD="YOURPASS" - Configures Jupyter Notebook to require the given password. Should be combined with USE_HTTPS on untrusted networks.
  • -e USE_HTTPS=yes - Configures Jupyter Notebook to accept encrypted HTTPS connections. If a PEM file containing an SSL certificate and key is not found in /home/jovyan/.ipython/profile_default/security/notebook.pem, the container will generate a self-signed certificate for you.
  • -e NB_UID=1000 - Specify the uid of the jovyan user. Useful when mounting host volumes with specific file ownership. For this option to take effect, you must run the container with --user root. (The start-notebook.sh script will su jovyan after adjusting the user id.)
  • -e GRANT_SUDO=yes - Gives the jovyan user passwordless sudo capability. Useful for installing OS packages. For this option to take effect, you must run the container with --user root. (The start-notebook.sh script will su jovyan after adding jovyan to sudoers.) You should only enable sudo if you trust the user or if the container is running on an isolated host.
  • -v /some/host/folder/for/work:/home/jovyan/work - Mounts a host folder as the default working directory so your work is preserved even when the container is destroyed and recreated (e.g., during an upgrade).
  • -v /some/host/folder/for/server.pem:/home/jovyan/.local/share/jupyter/notebook.pem - Mounts an SSL certificate plus key for use with USE_HTTPS. Useful if you have a real certificate for the domain under which you are running the Notebook server.
  • -p 4040:4040 - Opens the port for the Spark Monitoring and Instrumentation UI. Note that each new Spark context is assigned the next port in the series (i.e., 4040, 4041, 4042, etc.), so it may be necessary to open multiple ports. For example: docker run -d -p 8888:8888 -p 4040:4040 -p 4041:4041 jupyter/pyspark-notebook

Conda Environments

The default Python 3.x Conda environment resides in /opt/conda. A second Python 2.x Conda environment exists in /opt/conda/envs/python2. You can switch to the python2 environment in a shell by entering the following:

source activate python2

You can return to the default environment with this command:

source deactivate

The commands ipython, python, pip, easy_install, and conda (among others) are available in both environments.
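
To confirm which environment a particular notebook kernel is using, a quick check from a notebook cell:

import sys
# prints /opt/conda/bin/python in the default environment,
# /opt/conda/envs/python2/bin/python in the python2 environment
print(sys.executable)
print(sys.version)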

JupyterHub

JupyterHub requires a single-user instance of the Jupyter Notebook server per user. To use this stack with JupyterHub and DockerSpawner, you must specify the container image name and override the default container run command in your jupyterhub_config.py:

# Spawn user containers from this image
c.DockerSpawner.container_image = 'jupyter/pyspark-notebook'

# Have the Spawner override the Docker run command
c.DockerSpawner.extra_create_kwargs.update({
    'command': '/usr/local/bin/start-singleuser.sh'
})
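
Note that the configuration must also tell JupyterHub to use DockerSpawner in the first place; a minimal sketch, assuming the dockerspawner package is installed on the hub host:

# Tell JupyterHub to spawn user servers with DockerSpawner
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'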
