This post explores how to build multi-container applications with Docker Compose. For the sake of example, we’ll build a small hello world backend written in NodeJS/Express hosted behind an Nginx load balancer.
Building a multi-container application with “Compose”
Modern applications, and especially web applications, are usually
built from multiple components: for example, a load balancer that may
also act as a static HTTP server for the front end, an API backend,
and one or more databases. Each of these services can be run as a
container. To start the whole system, we could of course run docker run ...
several times in the right order, but that gets annoying
and error prone very quickly. Imagine a platform that requires tens of
different containers… It would be almost impossible to manage by
hand. Compose is here to solve exactly this issue, mainly for a
platform running on one server.
Taken from the documentation: Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
Sounds simple, right? So let’s build a simple multi-container application.
It will be a distributed version of the docktest application that we
have been using in the previous post on Docker. We will run several
docktest instances named be0, be1, etc. (for backend 0,
backend 1, …). An Nginx-based load balancer will distribute the
traffic among all those servers. The load balancer will internally
listen on port 80, but that port will be exposed to the host as 2000.
First let’s see what it would look like without Compose.
A “Docker only” version
We already have the docktest image from a previous post. We need to
create a load balancer image that will contain the configuration file
to load balance among the be_x servers. Finally, we’ll need to make
all those containers communicate over a network.
The load balancer image will be based on nginx:1.20-alpine (the one
based on Debian, nginx:1.20, is about seven times bigger). For our
tests, we’ll be running two docktest backends, be0 and be1. The Nginx
configuration file, nginx.conf, will look like this (see here):
events {}
http {
    upstream bex {
        server be0:3000;
        server be1:3000;
    }

    server {
        listen 80;
        location /docktest/ {
            proxy_pass http://bex/;
        }
    }
}
With this configuration, requests on /docktest/ will be load balanced
among the two servers listening on port 3000, while all other requests
will be served from the standard html directory, namely
/etc/nginx/html. To check that this works, we also include a short
index.html:
<center>Hello from docker load balancer</center>
Now that we have the configuration file and a static resource, the
Dockerfile simply copies these two files into the image. Let’s call it
Dockerfile.lb. It looks like this:
FROM nginx:1.20-alpine

# Install the configuration and resource files
COPY nginx.conf /etc/nginx/
COPY index.html /etc/nginx/html/
To build the image: docker build . -t mszmurlo/loadbalancer -f Dockerfile.lb.
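An optional quick check that the image was actually built is to list it (standard Docker, nothing specific to this post):

docker images mszmurlo/loadbalancer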
Now that we have all the images in place, let’s check whether our
platform works. We’ll first define the communication network,
docktestnet, then start the backend servers (be0, be1) and the load
balancer (lb), and finally, we’ll connect to lb to see the logs:
docker network create --driver bridge docktestnet
docker run --rm -d --name be0 --network docktestnet mszmurlo/docktest:0.1.4
docker run --rm -d --name be1 --network docktestnet mszmurlo/docktest:0.1.4
docker run --rm -d -p 2000:80 --name lb --network docktestnet mszmurlo/loadbalancer
docker container logs -f lb
Notice that with this configuration, the backend servers are not visible from the host: they can only be reached through the load balancer. This is fine from a security point of view.
Access to http://localhost:2000/ works fine. Accessing
http://localhost:2000/docktest/ several times results in two
alternating replies, one with the server ID of be0 and one with the
server ID of be1, which demonstrates that the load balancing also
works. Check the Nginx load balancer documentation to see options
other than the default round robin.
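As an illustration, here is a hypothetical variant of the upstream block that uses the least_conn balancing method and gives be0 twice the weight of be1; this is just a sketch, not the configuration used in this post:

upstream bex {
    least_conn;                   # pick the backend with the fewest active connections
    server be0:3000 weight=2;     # be0 receives roughly twice as many requests as be1
    server be1:3000;
}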
A basic “Compose” version
Compose doesn’t do any black magic: we still need the Dockerfiles to
create the images, we still need all the configuration files, and we
still need to know how to put all the bricks together. What Compose
provides is one place to define how the system is to be built and
managed, typically built, started, stopped, and inspected. This place
is the Compose configuration file, called docker-compose.yml by
default.
A Compose file is a YAML file with several sections, most of which are optional (see the Compose file specification).
We’ll be defining several versions of our docker-compose.yml file; the
name of each will be suffixed with a version number. Below is the
docker-compose-0.yml file for our application:
version: "3.7"
services:
  be0:
    image: "mszmurlo/docktest:0.1.4"

  be1:
    image: "mszmurlo/docktest:0.1.4"

  lb:
    image: "mszmurlo/loadbalancer"
    ports:
      - "2000:80"
where:

- version defines the version of the Compose file format to be used. The current version is 3.9, but my version of Compose supports 3.7.
- services defines the list of services (basically the containers) that are to be run. Here we have three services: the two backend servers be0 and be1, based on the image mszmurlo/docktest:0.1.4, and the load balancer service, based on mszmurlo/loadbalancer. For the load balancer, we specify the mapping of port 2000 onto 80, just as we would when starting the container manually with docker run ... -p 2000:80 ...
Once this is defined, we can start the application with docker-compose -f docker-compose-0.yml up. The output is as follows:
Starting docktest_be1_1 ... done
Starting docktest_lb_1 ... done
Starting docktest_be0_1 ... done
Attaching to docktest_be0_1, docktest_lb_1, docktest_be1_1
be0_1 | Server sid='6ac9c807-dafb-4df3-86d9-86291e9545ab' listening at http://:::3000
lb_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
lb_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
lb_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
lb_1 | 10-listen-on-ipv6-by-default.sh: info: IPv6 listen already enabled
lb_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
lb_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
lb_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
be1_1 | Server sid='90ab6ca5-86f8-4358-96fc-dfe8640a328f' listening at http://:::3000
The first part of the log above shows the startup of the
containers. Compose has given each container an auto-generated name,
e.g. docktest_be1_1. Those names and the names defined in the
services section both resolve to the same container (try
docker exec -it docktest_be0_1 /bin/sh and then ping be1 and
ping docktest_be1_1). The difference is that the name docktest_be1_1
is also resolvable from the host, while be1 is only resolvable from
within the containers that form the application.
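Spelled out, the check mentioned above looks roughly like this; it assumes the image ships a shell and ping (the busybox tools in the Alpine-based image provide both):

# open a shell inside be0, using the auto-generated container name
docker exec -it docktest_be0_1 /bin/sh
# from inside the container, both names resolve to the same peer
ping -c 2 be1
ping -c 2 docktest_be1_1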
The second part of the log shows the logs of each container as it starts. Each line is prefixed with the name of the container and has a dedicated color. Handy!
We can now try our application:
for i in `seq 0 9`; do curl http://localhost:2000/docktest/; echo; done
{"sid":"34cf8eaf-37bb-463d-bcb3-cbf2a6c50ad2","resp":"Hello world"}
{"sid":"7345635f-2d6e-4a7f-8a18-b3783e128cd7","resp":"Hello world"}
{"sid":"34cf8eaf-37bb-463d-bcb3-cbf2a6c50ad2","resp":"Hello world"}
{"sid":"7345635f-2d6e-4a7f-8a18-b3783e128cd7","resp":"Hello world"}
{"sid":"34cf8eaf-37bb-463d-bcb3-cbf2a6c50ad2","resp":"Hello world"}
{"sid":"7345635f-2d6e-4a7f-8a18-b3783e128cd7","resp":"Hello world"}
{"sid":"34cf8eaf-37bb-463d-bcb3-cbf2a6c50ad2","resp":"Hello world"}
{"sid":"7345635f-2d6e-4a7f-8a18-b3783e128cd7","resp":"Hello world"}
{"sid":"34cf8eaf-37bb-463d-bcb3-cbf2a6c50ad2","resp":"Hello world"}
{"sid":"7345635f-2d6e-4a7f-8a18-b3783e128cd7","resp":"Hello world"}
Above we see that every second reply comes from the same container: our load balancing works!
Some basic Compose commands
Besides the documentation, the list of available commands can be
obtained from the command line with docker-compose -h, and the help
for command xxx with docker-compose help xxx. Here is a quick list
of the most useful commands:
- docker-compose config: validates the configuration file.
- docker-compose ps: lists the running containers. The -a option lists all containers, running and stopped.
- docker-compose up: builds (we’ll see that later), creates or re-creates, starts, and attaches to the containers of a service. The -d option runs the containers in the background.
- docker-compose down: stops and removes the containers.
- docker-compose start: starts a previously stopped set of containers.
- docker-compose stop: stops the containers without removing them, so that the application can be restarted later with the start command.
- docker-compose kill: kills the running containers. They can be restarted later with the start command but, as they were killed, their state, especially that of the volumes, is not guaranteed.
- docker-compose logs: shows the logs of the running containers.
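In practice, a typical session with our file looks something like this (a sketch; the -f flag is only needed because our file is not named docker-compose.yml):

docker-compose -f docker-compose-0.yml config        # sanity-check the file
docker-compose -f docker-compose-0.yml up -d         # start everything in the background
docker-compose -f docker-compose-0.yml ps            # see what is running
docker-compose -f docker-compose-0.yml logs -f lb    # follow the load balancer logs
docker-compose -f docker-compose-0.yml down          # stop and remove everything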
Networking
How can the above work without a network? Actually, Compose defines a
default network named <project-name>_default, here
docktest_default. With a few containers, there is no problem using
it. But if one had to manage tens of containers for an application, it
might become hard to figure out which data flows where and what is
visible to which container, so it’s always a good idea to define a
network explicitly.
Networks are created in the docker-compose.yml file, just like
services. They are declared at the top level with the networks
keyword and then listed in the definition of each service that uses
them. Here is the modified Compose file for our application,
docker-compose-1.yml:
version: "3.7"
services:
  be0:
    image: "mszmurlo/docktest:0.1.4"
    networks:
      backend:

  be1:
    image: "mszmurlo/docktest:0.1.4"
    networks:
      backend:

  lb:
    image: "mszmurlo/loadbalancer"
    networks:
      backend:
    ports:
      - "2000:80"

networks:
  backend:
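Once the application is up, you can verify the network that Compose created; it is prefixed with the project name, so here it is typically called docktest_backend (the exact prefix depends on the project or directory name):

docker network ls | grep docktest          # the network created by Compose
docker network inspect docktest_backend    # which containers are attached to it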
As with the bare Docker configuration, we don’t expose any port from
the be_x containers, so they can only be reached on the backend
network through the load balancer. See the network documentation page
for much more information about network configuration.
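For example, a network can be given an explicit driver and address range. Below is a minimal sketch of such a definition; the subnet value is arbitrary and only there for illustration:

networks:
  backend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/16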
Volumes
Attaching a volume to a service works the same way as with networks:
in the service’s definition, add a volumes keyword and list the
volumes to be mounted below it. One can mount host paths and named
volumes; named volumes must be defined at the top level.
As an example, let’s modify the definition of be0 and attach a named
volume and a bind mount that will be mounted at /myvol_volume and
/myvol_bind at the root of the file system:
version: "3.7"
services:
  be0:
    image: "mszmurlo/docktest:0.1.4"
    networks:
      backend:
    volumes:
      - type: volume
        source: my_volume
        target: /myvol_volume
      - type: bind
        source: .
        target: /myvol_bind

# other services and network definitions

volumes:
  my_volume:
We can check that the volumes are mounted properly in the container
with docker exec -it docktest_be0_1 ls -l /
.
Short syntax is also possible when no additional configuration is required:
    volumes:
      - "my_volume:/myvol_volume"
      - "~/.:/myvol_bind"
The long syntax allows additional configuration such as the type, whether the volume is mounted read-only, volume options such as its size, etc. See the documentation.
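For instance, the bind mount above could be made read-only with the long syntax; a small sketch, reusing the same source and target as before:

    volumes:
      - type: bind
        source: .
        target: /myvol_bind
        read_only: true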
Building images
So far we have assumed that the images used by the services already
existed. However, if we were to change some configuration or source
code, we would need to rebuild the image with docker build ... and
then restart the application with Compose. Actually, Compose also
allows us to build images.
Below is the docker-compose-3.yml
file where we build the images for
the backend and the load balancer services:
version: "3.7"
services:
  be0:
    build:
      context: .
      dockerfile: Dockerfile.alpine-5
    image: "mszmurlo/docktest:0.1.5"
    networks:
      backend:

  be1:
    image: "mszmurlo/docktest:0.1.5"
    networks:
      backend:

  lb:
    build:
      context: .
      dockerfile: Dockerfile.lb
    image: "mszmurlo/loadbalancer"
    networks:
      backend:
    ports:
      - "2000:80"

networks:
  backend:
In the be0 service definition we introduce a new subsection, build,
which tells Compose how to build the image. context sets the build
context (where to find the files needed for the build) and dockerfile
tells which Dockerfile to use; if it is not set, the default
Dockerfile is used. If image is specified at the same time as build,
Compose will name the new image as specified in image. As the image
built for be0 is reused for be1, it is not necessary to rebuild it
there. The same principle applies to the lb service.
Finally, we build our application with docker-compose -f docker-compose-3.yml build.
There are many more parameters to build an image; again, see the documentation.
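Two handy variations, both standard docker-compose options, are building a single service and rebuilding as part of up:

docker-compose -f docker-compose-3.yml build lb        # rebuild only the load balancer image
docker-compose -f docker-compose-3.yml up -d --build   # rebuild images if needed, then (re)start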
Getting immortal
In the context of web services, load balancers are used for two main purposes:
- They balance the traffic among all the servers in the cluster, so it’s easy to add resources during traffic peaks by adding servers (or containers).
- They keep the service available even if one (or several) servers crash, by distributing the traffic over the remaining servers (or containers).
To see this in action, start the application with docker-compose -f
docker-compose-3.yml up -d, in another terminal start monitoring with
docker stats, then kill one backend with curl
http://localhost:2000/docktest/kill. You will see one of the
containers (probably be0, but that doesn’t matter) disappear from the
monitoring. Now send one curl http://localhost:2000/docktest/: you’ll
get a reply. Send another one and you’ll also get a reply, but only
after a few seconds, the time it takes Nginx to detect the failure and
send the request to another server. From now on, all the requests you
send will end up on the remaining container.
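The same experiment, spelled out as commands (docker stats should run in a second terminal):

docker-compose -f docker-compose-3.yml up -d
docker stats                                 # watch the containers, in another terminal
curl http://localhost:2000/docktest/kill     # make one backend exit
curl http://localhost:2000/docktest/         # still answered, by the surviving backend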
What Docker adds to this picture is the ability to restart a container
that has stopped, under certain conditions. To make a container
automatically “restartable”, add restart: "unless-stopped" to each
service in docker-compose-4.yml:
version: "3.7"
services:
  be0:
    build:
      context: .
      dockerfile: Dockerfile.alpine-5
    image: "mszmurlo/docktest:0.1.5"
    networks:
      backend:
    restart: "unless-stopped"

  be1:
    image: "mszmurlo/docktest:0.1.5"
    networks:
      backend:
    restart: "unless-stopped"

  lb:
    build:
      context: .
      dockerfile: Dockerfile.lb
    image: "mszmurlo/loadbalancer"
    networks:
      backend:
    ports:
      - "2000:80"
    restart: "unless-stopped"

networks:
  backend:
and retry the previous experiment. You’ll see a container disappear
and reappear after a couple of seconds. As the new container has the
same name as the one that got killed, the load balancer has no trouble
continuing to send traffic to it. Notice that we also added a
restart directive to the load balancer: if for some reason it
crashes, it will be restarted by Docker.
The possible values for restart are:

- "no": no restart; this is the default.
- "always": the container always gets restarted.
- "on-failure": restarts a container when its exit code indicates a failure.
- "unless-stopped": always restarts a container, except when the container has been explicitly stopped.
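To confirm that the restart actually happens, you can watch the container list and ask Docker how many times it restarted a container; a sketch (the watch utility is standard on Linux):

watch docker-compose -f docker-compose-4.yml ps                # the killed backend reappears after a few seconds
docker inspect --format '{{ .RestartCount }}' docktest_be0_1   # how many times Docker restarted it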
Conclusion
As we have seen above, Compose makes our lives easier when managing the several containers that form an application. There are many more things to explore in the documentation. Sure, you haven’t become a Compose ninja yet, but I hope you can now start playing with it.