bretfisher / node-docker-good-defaults

Sample node app for Docker examples

License: MIT License
To be able to do dev both inside and outside of Docker, the node_modules folders need to be completely separate to avoid clashing.
The code that creates the directory:
volumes:
- .:/opt/node_app/app
This only happens when the directory doesn't exist yet, and combined with the other code in the compose file it prevents the folder from being populated by content from the Docker container (which is good).
The problem is that the empty directory gets created as root, meaning npm installs outside of Docker fail.
The simplest workarounds are to either create the directory before ever running docker compose up, or to avoid using . for the volume and instead explicitly specify the directory that holds the app's source code.
Is there a nicer way to handle this?
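One commonly used pattern, offered here as a sketch rather than a confirmed fix for this repo, is to shadow the bind-mounted source with a named volume just for node_modules, so the container's modules never mix with the host's (the volume name below is illustrative):

```yaml
# Illustrative compose fragment; paths match the /opt/node_app example above.
services:
  node:
    volumes:
      - .:/opt/node_app/app
      # Named volume mounted over the node_modules path inside the container,
      # so host and container installs stay separate:
      - node_modules_cache:/opt/node_app/app/node_modules
volumes:
  node_modules_cache:
```

Note this keeps the installs separate, but the empty host-side mountpoint directory may still be created as root, which is exactly the issue described above.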
I just cloned the repo, and when I docker-compose up, the app crashes because it can't find node_modules: "Error: Cannot find module 'express'". Do I need to change anything to get the test app working? It seems like it's not using the /opt/app_dep/node_modules folder.
When Node and VSCode are set to use the new node inspector, I see VSCode connect to the container port 9229, but when trying to set a breakpoint I get:
Breakpoint ignored because generated code not found (source map problem?)
Haven't had a chance to dig in much; just putting this here in case others have it or know how to quickly solve it. Might help to try a node newer than 6.10... just spitballin'.
VSCode 1.12.2
Docker for Mac 17.05.0-ce-mac11
Node 6.10.3
CMD in container is: node --inspect ../node_modules/nodemon/bin/nodemon.js
Ports are open and responding
VSCode launch.json config:
{
"name": "Attach 9229 --inspect",
"type": "node",
"request": "attach",
"protocol": "inspector",
"port": 9229
}
Hi Bret,
I'm using TypeScript with Node.
The command 'npm run build' generates the compiled sources in the 'dist' folder.
I want to ask how you would update the Dockerfile to use a precompiler such as TypeScript or Babel with Node. What is the best practice in this case?
Thanks,
Diego
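One hedged option for the question above (file names and the dist/main.js entrypoint are illustrative, not this repo's actual layout) is a multi-stage build: compile with devDependencies in a builder stage, then copy only the compiled output and production deps into the runtime image:

```dockerfile
# Build stage: full deps + TypeScript/Babel compile
FROM node:lts AS build
WORKDIR /opt/app
COPY package.json package-lock.json* ./
RUN npm ci
COPY . .
RUN npm run build            # emits compiled JS into dist/

# Runtime stage: production deps + compiled output only
FROM node:lts-slim
ENV NODE_ENV=production
WORKDIR /opt/app
COPY package.json package-lock.json* ./
RUN npm ci --omit=dev && npm cache clean --force
COPY --from=build /opt/app/dist ./dist
USER node
CMD ["node", "dist/main.js"]
```

For development, a compose override with a bind mount and a watch-mode command (e.g. ts-node or nodemon) is typically layered on top of this.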
Not sure if that is intended or not, but the package.json file and code are copied as the root user, while node_modules is owned by the node user. To reproduce, run:
docker build . -t dockerfile_test && docker run -it --rm dockerfile_test ls -al /opt/node_app
The container process is started as the node user, which is the desired behavior, but what about the application files that are owned by root?
Currently this repo calls server.close(), but that only stops new connections; it will not exit if existing long-polling or websocket connections remain. A more complete shutdown would be:
1. server.close() to stop new connections (note this might be a problem with front-end LBs that still have this container in their rotation: they would keep sending connections that won't be accepted. Orchestrator problems.)
2. process.exit() to hard-stop remaining connections and stop the container.
More info:
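The shutdown sequence described above can be sketched like this (a minimal sketch, not this repo's actual code: the socket tracking, grace period, and injectable `exit` are my assumptions):

```javascript
// Sketch of a fuller shutdown than server.close() alone.
function trackSockets(server) {
  // Keep a live set of open sockets so we can hard-stop them later.
  const sockets = new Set();
  server.on('connection', (socket) => {
    sockets.add(socket);
    socket.on('close', () => sockets.delete(socket));
  });
  return sockets;
}

function gracefulShutdown(server, sockets, { graceMs = 10000, exit = process.exit } = {}) {
  // 1) Stop accepting new connections; callback fires once existing ones end.
  server.close(() => exit(0));
  // 2) server.close() leaves long-polling/websocket connections open,
  //    so destroy the tracked sockets explicitly.
  for (const socket of sockets) socket.destroy();
  // 3) Failsafe: force-exit if something still hangs on.
  setTimeout(() => exit(1), graceMs).unref();
}

// Typical wiring (illustrative):
// const server = app.listen(PORT);
// const sockets = trackSockets(server);
// process.on('SIGTERM', () => gracefulShutdown(server, sockets));
```

The injectable `exit` parameter is just to keep the function testable; in the app it defaults to process.exit.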
Hi,
Thanks for providing these docker-good-defaults!
I'm having an issue on docker-compose up though, getting the error:
ERROR: for node Cannot start service node: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"docker-entrypoint.sh\": executable file not found in $PATH": unknown
docker-entrypoint.sh sits in the root of my project, I've not modified it from your example, nor the part of Dockerfile that refers to it;
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
Any ideas?
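Two common causes of this exact error, offered as guesses rather than a confirmed diagnosis for this report, are a lost execute bit on the script or Windows CRLF line endings breaking the shebang. A defensive sketch:

```dockerfile
COPY docker-entrypoint.sh /usr/local/bin/
# Guard against the execute bit being lost on checkout:
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
```

If the file was checked out with CRLF endings on Windows, forcing LF (e.g. a .gitattributes rule like `*.sh text eol=lf`) is the usual companion fix.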
I got a solution that works with PM2; it will fix some problems such as:
Isn't it better to use npm ci instead of npm install here? We'd get exactly the same node_modules from package-lock.json.
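A sketch of what that swap might look like in the Dockerfile (illustrative; note npm ci requires a package-lock.json to be present):

```dockerfile
COPY package.json package-lock.json ./
# npm ci installs exactly what the lockfile specifies (and fails fast if
# package.json and package-lock.json disagree), unlike npm install:
RUN npm ci && npm cache clean --force
```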
Hi! Is this repo still current in 2022?
The node service exits prematurely. The output from the node container is:
PS C:\Users\johnf\Desktop> docker logs node-docker-good-defaults_node_1
standard_init_linux.go:207: exec user process caused "no such file or directory"
Please advise.
(Also, when is the Udemy Node course available?)
https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers
I believe it would be awesome if this git repository could be updated with a "devcontainer.json" with best practices :-)
The Remote - Containers VS Code extension solves the big challenge of dealing with node_modules in Node.js applications.
When nodemon restarts the app it crashes:
Starting inspector on 0.0.0.0:9229 failed: address already in use
I tried upgrading to a newer version of nodemon; not sure what else to do here other than remove the --inspect flag. Any help is appreciated!
I recently cloned the repo and did a fresh docker-compose up. After visiting localhost it works as expected.
Moving forward I tried to run some tests, but found that the /documents test was failing. After looking at some logs, I found that it's unable to connect to the database. I'm not sure why it's happening, but I suspect it may be because the mocha timeout can't wait long enough for the db connection.
Please advise, Thank you :)
node --debug when running VSCode without Docker (node running directly on Mac) works with breakpoints.
Trying it when node is inside a container looks like it works, but VSCode never stops for breakpoints. Just like this user describes: microsoft/vscode#22306 (comment)
I've followed documentation from:
https://code.visualstudio.com/docs/nodejs/nodejs-debugging
and
https://github.com/weinand/vscode-recipes/tree/master/Docker-TypeScript
And I believe this setup worked in early 2017, but now doesn't.
VSCode 1.12.2
Docker for Mac 17.05.0-ce-mac11
Node 6.10.3
CMD in container: node --debug=5858 ../node_modules/nodemon/bin/nodemon.js
Ports are open and responding
VSCode launch.json config:
{
"name": "Attach 5858 --debug",
"type": "node",
"request": "attach",
"protocol": "legacy",
"port": 5858,
"address": "localhost",
"restart": false,
"sourceMaps": false,
"outFiles": [],
"localRoot": "${workspaceRoot}",
"remoteRoot": "/opt/app"
}
Having taken the excellent course, I've taken this and added a deploy section to deploy to a swarm, but had a number of issues. Firstly around the use of volumes, and then, when I removed the volumes (in a bid to simplify things and understand what was happening), my app failed to start up with a 137 error every time. This seems to be a memory-related issue, but I'm not sure how to go about debugging it. Could it be that the removal of the volumes has caused too much data to be loaded into memory?
Compose now has an alpha feature for syncing files and avoiding bind-mounts for local development. This is great for many reasons, including avoiding the node_modules and volume workarounds. First, we need an example (maybe it lives in a compose-sync directory), and then once the feature is out of alpha/beta we make it the default example, with a legacy directory for the old way with bind-mounts.
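A sketch of what that feature looks like in a compose file (service name and paths are my assumptions, not this repo's final layout):

```yaml
# Illustrative sketch of the Compose file-sync feature (docker compose watch).
services:
  node:
    build: .
    develop:
      watch:
        - action: sync        # copy changed source files into the running container
          path: .
          target: /opt/node_app/app
          ignore:
            - node_modules/
        - action: rebuild     # rebuild the image when dependencies change
          path: package.json
```

Running `docker compose watch` (instead of plain `up`) activates the sync/rebuild rules.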
First of all, a huge thank you (and all the contributors) for this knowledge nugget of a repo. Being a beginner to containers, it was a blast to go through the files.
I am getting an EACCESS error when trying to install a package using the standard dce node npm i <package>.
AFAIU, the error comes from a volume ownership issue when operating as non-root: since the excluded node_modules (via volume mount) is root-owned by default, writes are blocked, forcing dep installs with dce -w <parentDir> npm i <package>:
It turns out there's a simplified version of this workaround to the volume problem. By creating the target node_modules as a non-root user at build time, the ownership stays the same when the volume is mounted. Besides resolving my issue and simplifying the installation command, this has the added benefits of keeping one package* file in the container and avoiding issues related to file bind mounts (see #28).
If this is to your liking, I am willing (and happy!) to make a PR with the fix. Thanks for your work!
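A sketch of the idea (paths follow this repo's /opt/node_app layout, but the exact lines are my assumption, not the merged fix):

```dockerfile
FROM node:lts
WORKDIR /opt/node_app
COPY --chown=node:node package.json package-lock.json* ./
USER node
RUN npm install && npm cache clean --force
ENV PATH=/opt/node_app/node_modules/.bin:$PATH
# Pre-create the bind-mount target as the node user so the shadowing
# node_modules volume is owned by node instead of root:
RUN mkdir -p /opt/node_app/app/node_modules
WORKDIR /opt/node_app/app
```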
https://github.com/BretFisher/node-docker-good-defaults/blob/master/bin/www#L22
After getting through the docker-compose section of your class, I figured I'd try to get my local env running before continuing to docker swarm.
I used this repo as kind of a guide. My setup isn't complicated: services = [nest.js, redis, postgres] (nest.js is a framework around express).
I'm also using yarn and nodemon installed locally.
CMD ["yarn", "start:dev"] # => nodemon
nodemon.json
{
"watch": ["src"],
"ext": "ts",
"ignore": ["src/**/*.spec.ts", "src/graphql.schema.ts"],
"exec": "ts-node -r tsconfig-paths/register src/main.ts"
}
docker-compose up works just fine, but when I save a file it fails...
nest_1 | [Nest] 45 - 12/10/2018, 1:11:09 PM [RoutesResolver] AppController {/}: +124ms
nest_1 | [Nest] 45 - 12/10/2018, 1:11:09 PM [RouterExplorer] Mapped {/, GET} route +11ms
nest_1 | [Nest] 45 - 12/10/2018, 1:11:12 PM [NestApplication] Nest application successfully started +3203ms
nest_1 | Error: listen EADDRINUSE :::4000
nest_1 | at Server.setupListenHandle [as _listen2] (net.js:1286:14)
nest_1 | at listenInCluster (net.js:1334:12)
nest_1 | at Server.listen (net.js:1421:7)
nest_1 | at NestApplication.listen (/opt/app/node_modules/@nestjs/core/nest-application.js:205:25)
nest_1 | [nodemon] app crashed - waiting for file changes before starting...
Any ideas?
And can you add the answer to the relevant bit?
Defaults to node index.js rather than npm
Hi! I recently cloned the project and everything works well, but when I make some changes nodemon crashes with this error:
node_1 | [nodemon] restarting due to changes...
node_1 | [nodemon] starting `node --inspect=0.0.0.0:9229 ./bin/www`
node_1 | Starting inspector on 0.0.0.0:9229 failed: address already in use
node_1 | [nodemon] app crashed - waiting for file changes before starting...
and the changes I made don't take effect until I do docker-compose down and then docker-compose up again.
VSCode 1.39
Docker for Mac 2.2.0.3
Node 12.16.0
any help? thanks!
I don't see kubernetes support in this repo. Just docker-compose/docker swarm. If this is desired I would be happy to work on PR, otherwise I will just fork for my own uses. My goal is just to create a skeleton repo that people can get off the ground quickly, and this repo is an excellent foundation! Obviously there are many production aspects that would need to be customized to the user's specific use case, but it may be useful for someone learning how everything fits together. I would consider myself only a novice kubernetes user so I would enjoy the exercise.
Hi Bret,
We have tried to do the same incorporating your suggestions.
https://github.com/MumbaiHackerspace/Visage/blob/master/services/photos/Dockerfile, but using the slim version for now. In fact, in our next iteration we will use a multi-stage build in the Dockerfile.
When I try to install a dependency in a running container using the command listed in the repo's readme, docker-compose exec server npm install --save package, it throws the following error:
xxxx-API git:(feature/firbase-push-notifications) docker-compose exec server npm install --save firebase-admin
npm WARN checkPermissions Missing write access to /opt/server/node_modules
npm WARN [email protected] No repository field.
npm WARN [email protected] No license field.
npm WARN The package @types/uuid is included as both a dev and production dependency.
npm ERR! path /opt/server/node_modules
npm ERR! code EACCES
npm ERR! errno -13
npm ERR! syscall access
npm ERR! Error: EACCES: permission denied, access '/opt/server/node_modules'
npm ERR! { [Error: EACCES: permission denied, access '/opt/server/node_modules']
npm ERR! stack:
npm ERR! 'Error: EACCES: permission denied, access \'/opt/server/node_modules\'',
npm ERR! errno: -13,
npm ERR! code: 'EACCES',
npm ERR! syscall: 'access',
npm ERR! path: '/opt/server/node_modules' }
npm ERR!
npm ERR! The operation was rejected by your operating system.
npm ERR! It is likely you do not have the permissions to access this file as the current user
npm ERR!
npm ERR! If you believe this might be a permissions issue, please double-check the
npm ERR! permissions of the file and its containing directories, or try running
npm ERR! the command again as root/Administrator (though this is not recommended).
npm ERR! A complete log of this run can be found in:
npm ERR! /home/node/.npm/_logs/2019-02-10T21_18_38_008Z-debug.log
I created the following example to highlight what I can't make work:
I don't have any local node_modules. I'm running everything through Docker.
As soon as I bootstrap the project with $ docker-compose up, everything looks great.
If I try to change the Hello World! sentence in index.js, nodemon reloads the server as expected.
But as soon as I want to add a dependency like lodash while developing, while my container is up and running, the package.json is not updated.
As explained in this repo (node-docker-good-defaults), I'm using $ docker-compose exec -w /opt/node_app node npm install --save lodash to install the new dependency and reflect the changes on my host project. Unfortunately it isn't working.
Any hints?
What is the recommended way to install new modules in development?
The problem is that if I do the install in the container, only package.json in /opt is changed, so when I rebuild the container we lose the new module definition in the package.json on the host, which is what we actually copy in the Dockerfile.
The only way I found is to manually edit package.json on the host and rebuild the container.
With the current configuration, the MongoDB server starts up, sets up the secure user account, and then restarts. On my Mac this appears to be fine. However, on Windows 7 the timing of the restart causes the connection that has been acquired by the node app to be dropped, and node fails to start up.
Locally I have resolved this by making Mongo insecure which then prevents the restart.
I am not sure of the correct solution, but options may be:
Would be interested in others' input on a preferred direction.
My very old repo with a mongo + node docker example is very outdated and should be archived, but I still get asked about it because this simple repo is just a bit too simple. It doesn't show an example of how node would talk to a mongo database (one of the most popular DBs for node apps, I think).
Hi,
Just cloned this and running "docker-compose up" fails (eventually) to build on my Windows machine:
Building node
Step 1/17 : FROM node:10
---> f09e7c96b6de
Step 2/17 : ARG NODE_ENV=production
---> Using cache
---> 8798a1c8075e
Step 3/17 : ENV NODE_ENV $NODE_ENV
---> Using cache
---> f5deba597200
Step 4/17 : ARG PORT=3000
---> Using cache
---> 3cfb4711b51f
Step 5/17 : ENV PORT $PORT
---> Using cache
---> df1e71469d97
Step 6/17 : EXPOSE $PORT 9229 9230
---> Using cache
---> 0812ab63129c
Step 7/17 : RUN npm i npm@latest -g
---> Using cache
---> 5b75fadb1282
Step 8/17 : WORKDIR /opt
---> Using cache
---> d4f2926ed1c9
Step 9/17 : COPY package.json package-lock.json* ./
---> Using cache
---> 96500661a754
Step 10/17 : RUN npm install --no-optional && npm cache clean --force
---> Running in 7f7629f34a6f
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/is-obj-34b0d206/readme.md'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/is-accessor-descriptor-a0138ffa/index.js'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/color-convert-9072abfc/conversions.js'
npm ERR! code E404
npm ERR! 404 Not Found: [email protected]
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/kind-of-9b8b4aa1/LICENSE'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/arr-flatten-d8e87bef/index.js'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/kind-of-9b8b4aa1/index.js'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/nan-53be3a9b/README.md'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/he-bba7ac0f/LICENSE-MIT.txt'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/unset-value-3bf1a3af/README.md'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/unset-value-3bf1a3af/LICENSE'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/unset-value-3bf1a3af/index.js'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/lodash.debounce-d98c7fea/LICENSE'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/lodash.debounce-d98c7fea/index.js'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/fs.realpath-a0512e6a/old.js'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/atob-fe7edb82/LICENSE.DOCS'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/mongodb-core-41aba90b/lib/topologies/shared.js'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/readdirp-675a2d24/README.md'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/proxy-addr-c6c31365/LICENSE'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/safe-buffer-a5a71a84/README.md'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/base-e8c8abd4/README.md'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/color-name-6f95ad24/README.md'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/color-name-6f95ad24/LICENSE'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/color-name-6f95ad24/index.js'
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2018-11-28T23_20_17_863Z-debug.log
ERROR: Service 'node' failed to build: The command '/bin/sh -c npm install --no-optional && npm cache clean --force' returned a non-zero code: 1
I've not changed anything in the source files.
Docker version:
Engine 18.09.0
Compose 1.23.1
Machine 0.16.0
Any ideas?
I've noticed this on a macOS host using a Linux container:
The npm install invariably changes the package-lock.
Reasons are discussed here:
https://npm.community/t/package-lock-json-keeps-changing-between-platforms-and-runs/1129
Perhaps npm ci would work better?
Loosely related to #28
Now that this example includes mongo, another common question for local development is "how do I load sample data into the app for development"
Sure, you could use lots of manual methods with docker exec and maybe bind-mount some .js data, but the easiest way IMO is to use the mongo image's built-in way of auto-running anything in /docker-entrypoint-initdb.d/*.js.
Ideally, we would bind-mount the files to execute there in the compose file just for local development.
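A sketch of that bind-mount in the compose file (the ./db/seed path is an illustrative name, not this repo's actual directory):

```yaml
services:
  mongo:
    image: mongo
    volumes:
      # Any *.js / *.sh here runs once, on first database initialization only:
      - ./db/seed:/docker-entrypoint-initdb.d:ro
```

Because the scripts only run when the data directory is empty, reseeding requires removing the mongo data volume first.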
The sample data could then be checked via the /documents endpoint of this app.

Hi! Thanks for your work of gathering all this stuff in one place!
But how did you manage to make npm install work with direct package.json mounts? I get this error every time:
resource busy or locked, rename '/opt/package.json.3249071875' -> '/opt/package.json'
because npm tries to rewrite the file from scratch rather than making atomic writes.
Thank you for a great project!
I was wondering if someone could explain the role and use of the file_env function in docker-entrypoint.sh a bit more?
I have a lot of env variables that I set and have not seen the need to use file_env at all, so when specifically would I need to use it?
Trying to follow your example: when you build your image, you put the node modules that are installed into /opt/node_modules. That folder will hold the needed dev dependencies as well. Am I missing the part where you put them into /opt/app/node_modules?
I cloned your project and it seems like your node_modules folder is empty as well.
Hi,
What is the best practice for using this project?
Should I copy all the files into my project directory, or use a git submodule?
Copying is easy, but with a submodule I can track new improvements.
Bret, this is a pretty awesome project, and I'm really trying to put it to good use. There is something I just don't understand.
If I do a pure clone of the project and then attempt to build and run a container using $ docker, I can't see the server on localhost. It works fine using $ docker-compose.
Am I doing this correctly?
docker build -t <yourWebAppName> .   # the -t flag tags the image with a custom name
docker run <yourWebAppName>
docker ps -a                          # identify the running container
docker port running_container_hash    # returns null

With docker-compose up, docker port <hash> returns as expected, and http://localhost in the browser works fine:

80/tcp -> 0.0.0.0:80
9229/tcp -> 0.0.0.0:9229
5858/tcp -> 0.0.0.0:5858
I'm hoping to use your simple repo to do A-B-A testing between $ docker build / run and $ docker-compose. I'd like to input a tiny bit of data on the console, and I'm having trouble with docker-compose.yml methods. I'm trying to understand how the interactive modes work with Docker. The fact that I can't see localhost when I $ docker build/run makes me nervous. Am I doing this right?
Note: I'm running the mac version of docker. I'm running everything in the command line. Not using any IDE or Kitematic for anything.
Again, many thanks to you for the work (and documentation) you've done on this project.
--LB
Hello,
We currently use a private package from a private repository within one of our services. We're following this project and are running into an issue where we can't install the package because the build does not have permission to access our private repository.
[2/4] Fetching packages...
error Command failed.
Exit code: 128
Command: git
Arguments: ls-remote --tags --heads ssh://[email protected]:9999/secret/private.git
Directory: /opt
Output:
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I see a wide variety of ways to solve this problem. Some include copying the host SSH key into the container and others recommending the use of docker secrets. I would love to hear your recommendations and perhaps some project examples that solve this problem.
Might add that the example above uses yarn in our Dockerfile, and we're hosting our private code inside a private Bitbucket instance.
Thanks!
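One approach that avoids baking keys into the image is BuildKit's SSH mount, which forwards the host's ssh-agent only for the install step. A sketch (assumptions: BuildKit is enabled, and "your-bitbucket-host" is a placeholder for the real hostname):

```dockerfile
# syntax=docker/dockerfile:1
# Build with:  docker build --ssh default .
# Trust the private host first (port 9999 taken from the error output above):
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan -p 9999 your-bitbucket-host >> ~/.ssh/known_hosts
# The agent socket is only available during this step; no keys land in the image:
RUN --mount=type=ssh yarn install --frozen-lockfile
```

Copying host SSH keys into the image with COPY leaks them into layer history, which is why the mount (or docker secrets at runtime) is generally preferred.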
There are node options to enable stack traces for deprecations, warnings, and synchronous IO (from node --help):
--trace-deprecation show stack traces on deprecations
--trace-warnings show stack traces on process warnings
--trace-sync-io show stack trace when use of sync IO
Do you think it's a good idea to enable these for development?
Do you think there's a problem with having these enabled in production builds?
For a non-swarm configuration, should we still use version 2?
According to the official docs the recommended version is 3. They also added the --compatibility flag, which should mitigate some (all?) v3 drawbacks.
Following the instructions from the README, I cannot install additional Node packages with docker-compose exec node npm install --save <package name> due to the write permissions of the 'node' user. If I update docker-compose.yml to set the user to 'root', I can install packages again. I'm not sure if this is the best way to solve the install permissions or not.
node-docker-good-defaults/Dockerfile
Line 36 in 8b1c01c
Are you setting WORKDIR twice on purpose? Doesn't setting it once work for the rest of the Dockerfile?
Update to the latest nodemon release to remove this dependency:
https://github.com/BretFisher/node-docker-good-defaults/blob/master/package-lock.json#L963
First of all, thank you (+contributors) so much for creating this example project. It really helped me get past cold feet and actually set up a first nice Docker project!
As this guide popped up as my starting point I imagine others starting here as well. For completeness, I think it's worth highlighting multi-stage builds and smaller parent images to decrease your image size.
It's worth noting that the production version of this Hello World app still includes all the devDependencies, unit-tests and files that are not strictly necessary. Of course, multi-stage builds make more sense in larger applications with client-side js.
Also, just using node:alpine as the parent image helps get from 600 MB+ down to roughly 70 MB, worth mentioning too.
If leaving those out was intentional, that's fine; you point to a good further-reading resource. If you agree, I'm more than happy to create a PR with these comments/changes implemented.
Cheers!
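A hedged sketch combining both suggestions (the stage layout and the bin/www entrypoint are assumptions based on this repo, not a confirmed change):

```dockerfile
# Stage 1: install only production deps (newer npm; older npm uses --only=production)
FROM node:alpine AS deps
WORKDIR /opt/node_app
COPY package.json package-lock.json* ./
RUN npm ci --omit=dev && npm cache clean --force

# Stage 2: small runtime image without devDependencies, tests, or build tools
FROM node:alpine
ENV NODE_ENV=production
WORKDIR /opt/node_app
COPY --from=deps /opt/node_app/node_modules ./node_modules
COPY . .
USER node
CMD ["node", "./bin/www"]
```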
If I use Babel and babel-node, should I build the code inside Docker?
Your target is to build the Docker image for production and use docker-compose to fit the development env.
Please give me some tips about Babel for dev and prod. Thanks!
Finding this repo just saved us a ton of research on running node + docker in prod!
Maybe I'm too much of a newbie with VSCode, but I don't understand at all how I can fully debug this nodejs app.
If I start the docker container with docker-compose up and then try to debug the www file, for example, I'm not able to do it.
So, I have some questions:
Hi,
Usually you have a config/ folder containing a basic default.js file, which exports an object of configuration values, e.g.:
module.exports = {
session: {
key1: "abcde"
},
mailserver: {
relayHost: "127.0.0.1"
}
// ... a lot more values
}
And then for each env you create a specific copy of that, e.g. config/production.js, which is gitignored.
What is your best practice for handling (complex & nested) config values which differ for each environment?
Of course you don't want to use 'docker secret' for that, right?
Thank you