Comments (3)
Any thoughts on this? Or should I contact you through another venue?
from docker-distributed.
Sorry I missed this issue:
- Why a separate overlay network? To provide some context, networking is the single aspect of Docker I'm having the hardest time wrapping my head around, so almost every configuration that involves custom networking of any kind is foreign to me. Would love some intuition for what the custom overlay network does, and how that differs from the default network behavior created by Compose.
I am not familiar with the details, but this was the only way I could get it to work in a multi-host Docker Swarm environment such as Carina at the time. Things may have changed since; I have not tested it in a while.
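For intuition, the difference can be sketched in a hypothetical Compose excerpt (image and network names are made up). The default network Compose creates is a single-host bridge; an overlay network spans every node in the Swarm, which is what lets the scheduler and workers on different hosts reach each other by name:

```yaml
# Hypothetical Compose excerpt, not the repo's actual file.
networks:
  dask-net:
    driver: overlay        # spans all Swarm hosts; the default bridge does not
services:
  dworker:
    image: dask-image      # assumed image name
    networks:
      - dask-net
```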
- Why is psutil needed for the web front-end? That part seems to be working without any customization needed (with the obvious exception of exposing port 8787 in the Dockerfile).
psutil is needed by dask.distributed to collect CPU and memory usage statistics on the workers.
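For reference, the kind of per-worker stats involved can be sampled directly with psutil; a minimal sketch (not dask's actual internals):

```python
import psutil

# CPU utilization measured over a short sampling window, in percent.
cpu = psutil.cpu_percent(interval=0.5)

# Virtual memory statistics: total/available in bytes, percent used.
mem = psutil.virtual_memory()

print(f"cpu: {cpu:.1f}%  memory: {mem.percent:.1f}% of {mem.total} bytes")
```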
- I'm probably still going to work just on my own setup, since I'm not really interested in using Kubernetes or hosting Jupyter notebooks. Instead, I'm looking at a sort of static Python job submission setup: spin up a docker-distributed cluster, then SSH into the scheduler node and submit a Python script to run a job on the distributed dask cluster (who knows, I may even create a tiny web front-end just for submitting Python scripts to obviate the need for SSH). Also, knocking out my own implementation will help me understand how it works!
I think kubernetes has some primitives for batch jobs:
http://kubernetes.io/docs/user-guide/jobs/
I have never used it myself though.
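For reference, a minimal Job manifest of the kind that page describes might look like the following; the job name, image, and script path are all made up:

```yaml
# Hypothetical batch/v1 Job: runs one container to completion.
apiVersion: batch/v1
kind: Job
metadata:
  name: dask-submit                 # hypothetical job name
spec:
  template:
    spec:
      containers:
        - name: submit
          image: my-dask-image      # assumed image with dask installed
          command: ["python", "/scripts/run_job.py"]   # hypothetical script
      restartPolicy: Never          # run-to-completion semantics
  backoffLimit: 2
```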
- General comment: I am a big fan of how you used a single Docker image and then just ran separate bash scripts to execute different commands. Not sure why that didn't occur to me (I built three separate Docker images, which you can see in my repo linked above). I may refactor my own setup with that in mind.
The goal of this design was to make 100% sure I had the exact same version of the dask / distributed and machine learning libraries deployed everywhere (both in the interactive kernel of the jupyter notebook session and on the dask workers).
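That single-image pattern can be sketched in Compose terms (image tag and script names assumed): every service pins the same image, and only the command differs, so library versions cannot drift between the notebook kernel and the workers.

```yaml
# Hypothetical sketch of the one-image, many-commands layout.
services:
  notebook:
    image: dask-image:0.1           # one shared image/tag (assumed name)
    command: bash /opt/start-notebook.sh
  dscheduler:
    image: dask-image:0.1
    command: bash /opt/start-scheduler.sh
  dworker:
    image: dask-image:0.1
    command: bash /opt/start-worker.sh
```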
Let me close this issue for now but feel free to add comments.