How to Dockerize a NodeJS App

What is Docker


At first glance, Docker looks very similar to any other virtual machine: you can take an Ubuntu image with some kind of hello-world inside, type docker run ubuntu hello-world, and "hello-world" will run, sincerely believing that it lives in Ubuntu and there is nobody else around.

But in fact, Docker is not a virtual machine, not a virtual machine manager, and not even a hypervisor. Docker is a platform for creating, launching, and managing containers. Although they look like small virtual machines, containers actually act as a tall fence with barbed wire: the application cannot get outside, and no unexpected guests can get in – exactly what production servers need. And because containers do not need to emulate hardware or run a guest OS, they work at a speed virtual machines can only dream of.

How to Use Docker

1. Docker is a great sandbox. If you want to learn Erlang, you don't have to install it on a real machine. You can run a container, install everything there, have fun, and throw the whole container away in a week.

2. It is a convenient way to deliver applications to servers. Applications usually have dependencies that would otherwise have to be configured on every server. You can install the application and its dependencies in a container and deliver everything at once.

3. Docker is a good way to resolve application dependency conflicts on a single server. For example, one application needs Node.js 4 and another needs Node.js 6. The problem can of course be solved by hand, but if each application ships in its own container, the conflict simply disappears.
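Assuming Docker is installed, the coexistence of both versions can be sketched like this – each container carries its own Node.js, so neither installation touches the host or the other container:

```shell
# Run each version from its own image; --rm removes the container afterwards
docker run --rm node:4 node --version   # reports a 4.x version
docker run --rm node:6 node --version   # reports a 6.x version
```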

4. It's nice to update applications (and their dependencies) with Docker: you just build and start a new container.

5. Docker can save you money. For example, a website uses no more than 5% of the server's CPU, but you pay for all 100%. If you put the site in a container and cap it at 5% of the CPU, the remaining 95% is available to other containers. Each container gets its own resource quota and nobody interferes with anyone else: the server is fully utilized, and there is no need to buy additional machines.
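The quota mechanism from this example is exposed through docker run flags. A minimal sketch (the 5% figure and the mysite image name are illustrative; --cpus appeared in Docker 1.13, older versions use --cpu-quota):

```shell
# Cap the hypothetical "mysite" container at 5% of one CPU core and 256 MB of RAM
docker run -d --cpus="0.05" --memory="256m" mysite
```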

How to Dockerize a NodeJS App

Now suppose we have a Node.js application that, for some reason, needs to be transferred into a Docker container and run. Perhaps you wonder how it behaves in a 'clean' environment, or you just want to show it off to a customer. There are various reasons.

To be specific, let it be the hello.js web server that responds to everyone with "Hello World".

var http = require('http');

http.createServer(function (_, response) {
    response.writeHead(200, {
        "Content-Type": "text/plain"
    });
    response.end("Hello World\n");
})
.listen(8080);
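To sanity-check what the handler does without starting a server at all, you can call it with a stub response object. This is a quick sketch of ours – the stub shape is not part of Node's API:

```javascript
// The same handler as in hello.js, extracted into a named function.
function handler(_, response) {
    response.writeHead(200, { "Content-Type": "text/plain" });
    response.end("Hello World\n");
}

// A stub "response" that just records what the handler writes.
const recorded = {};
handler(null, {
    writeHead(status, headers) { recorded.status = status; recorded.headers = headers; },
    end(body) { recorded.body = body; }
});

console.log(recorded.status, recorded.body.trim());
// Passing the same function to http.createServer(handler).listen(8080) serves it for real.
```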

So, how to transfer and run it in the container?

The First Option: Connect the Folder with the Source Code Directly to the File System of the Docker Container

The Docker client has the -v option, with which you can connect files and folders of the host file system to the one of the container. That is, if you initially install Node.js in the container and afterwards connect the project folder to it, then the task seems to be solved. Let’s see:

docker run \                         # run container
  -p 8080:8080 \                     # expose its 8080 port
  -v /Users/pav/helloapp:/helloapp \ # mount ~/helloapp to /helloapp
  node \                             # image with preinstalled Node.js
  node /helloapp/hello.js            # start hello.js

Since Docker works from inside a virtual machine on Mac and Windows, hello world will not be visible at http://127.0.0.1:8080 – the virtual machine has its own IP. You can get it with docker-machine ip; in our case it shows 192.168.99.100.


Now it works from the container!

You definitely should not use this method in production; it works for small tasks or for quickly checking something, and that's it. Among the downsides, the resulting container is not self-sufficient: you cannot transfer it to another machine without copying the project files separately as well.

The second approach solves this.

The Second Option: Copy the Project Files into the Container

The docker cp command exists specifically to copy files from the host into a container. It comes so close to solving the problem that we decided to make the process a bit harder and diversify the routine with a couple of new tricks along the way. So:

1. Run node container in interactive mode:

docker run -ti -p8080:8080 node bash

-ti, as usual, stands for tty + interactive.

2. Exit the container, but leave it running in the background: Ctrl+p, Ctrl+q

3. Get the container ID with docker ps :


In our example, it is db8ce50cfd72.

4. Copy project files into the container

docker cp \               # copy files
  /Users/pav/helloapp \   # from the local machine
  db8:/helloapp           # to /helloapp of the container
                          # whose ID starts with db8

There is no need to specify the full container ID in Docker commands. The first few characters are usually enough, provided, of course, that no other container ID starts with the same prefix.

5. Return to the container: docker attach db8

6. Run Node.js: node /helloapp/hello.js

And “hello world” works again!

Unlike the first method, this container is self-sufficient. It can be saved with docker commit and transferred to another host. However, if "Hello World" changes to "Goodbye cruel world", you have to repeat all six steps.
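Saving and moving such a container can be sketched like this (the helloapp:manual image name is our own example; the explicit node command is needed because the committed image inherited bash as its default command from step 1):

```shell
# Freeze the current state of the container whose ID starts with db8
docker commit db8 helloapp:manual

# Transfer the image to another host as a tarball...
docker save helloapp:manual | gzip > helloapp.tar.gz

# ...then load and run it there
gunzip -c helloapp.tar.gz | docker load
docker run -d -p 8080:8080 helloapp:manual node /helloapp/hello.js
```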

The third method helps to avoid that.

The Third Option: Build a New Image with the Project Files Through the Dockerfile

A Dockerfile allows you to describe the structure of an image in a simple text format and then just 'compile' it. To get the image we want, we have to follow four obvious steps:

1. Take a node image that already exists,

2. Copy project files into the image,

3. Open port 8080,

4. Run the application.

In Dockerfile these steps would look like this:

FROM node:latest
COPY hello.js /helloapp/hello.js
EXPOSE 8080
ENTRYPOINT ["node", "/helloapp/hello.js"]

This maps almost one-to-one onto the steps above: we take the latest node image, copy the file, open port 8080, and specify the entry point, so that hello.js runs as soon as the container starts.
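For a real project the same idea usually grows a little. A sketch for an app with npm dependencies – it assumes a package.json exists, and pins a concrete Node.js version instead of node:latest so rebuilds stay reproducible:

```dockerfile
FROM node:6
WORKDIR /helloapp

# Copy the manifest first so the npm install layer is cached between builds
COPY package.json ./
RUN npm install --production

# Then copy the rest of the sources
COPY . .

EXPOSE 8080
ENTRYPOINT ["node", "hello.js"]
```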

After that, we assemble the Dockerfile into an image:

docker build -t helloapp:latest .

and we get an output image called helloapp that contains the project files, tagged latest (you could specify any other tag). It can be started (docker run -d -p 8080:8080 helloapp), transferred between hosts, deleted, and rebuilt. The Dockerfile acts as source code and can be committed to Git alongside the rest of the project.
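One practical addition once the project grows: a .dockerignore file next to the Dockerfile keeps junk out of the build context, which makes docker build faster and the image smaller. A typical minimal example:

```
node_modules
.git
npm-debug.log
```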

Let’s sum it up

Of all three options, the first and second work fine for small, one-off tasks, but a Dockerfile is the right solution for building new images on a regular basis. It is a great friend of git/mercurial/svn, automates the tedious process of creating images, and if some dependency changes, rebuilding the image takes no effort at all.

Therefore, before you start a new project, think through at which stages Docker will bring you the most value, and be sure not to miss our extremely useful article about the right ways of starting a new project.

Want to know more about dockerizing? Check out other articles on OS-System's blog.
