This post is part of the Jetbatsa series and explores how to build multi-container applications with Docker Compose. As an example, we’ll build a small hello-world backend written in Node.js/Express hosted behind an Nginx load balancer.

Jetbatsa stands for Just Enough To Be Able To Speak About. This is the code name for posts that are checklists or quick notes for myself while I explore some topic, and that I recently started to share. I’m definitely not a guru of any of the technologies discussed here.

Building a multi-container application with “Compose”

Modern applications, and especially web applications, are usually built from multiple elements: for example, a load balancer (possibly also acting as a static HTTP server serving the front end), an API backend, and one or more databases. Each of these services can be run as a container. To start the whole system, we can of course run docker run ... multiple times and in the right order, but that gets annoying and error-prone very quickly. Imagine a platform that requires tens of different containers: it would be nearly impossible to manage by hand. Compose solves exactly this issue, mainly for a platform running on a single server.

Taken from the documentation: Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

Sounds simple, right? So let’s build a simple multi-container application.

It will be a distributed version of the docktest application that we have been using in Jetbatsa on Docker. We will run several docktest instances named be0, be1, etc. (for backend 0, backend 1, …). An Nginx-based load balancer will distribute the traffic among all those servers. The load balancer will internally listen on port 80, but that port will be exposed to the host as port 2000.

First let’s see what it would look like without Compose.

A “Docker only” version

We already have the docktest image from a previous post. We need to create a load balancer image that will contain the configuration file to load balance among the be_x servers. Finally, we’ll need to make all those containers communicate over a network.

The load balancer image will be based on nginx:1.20-alpine (the one based on Debian, nginx:1.20, is about seven times bigger). For our tests, we’ll be running two docktest backends, be0 and be1. The Nginx configuration file, nginx.conf, will look like this:

events {}
http {
    upstream bex {
        server be0:3000;
        server be1:3000;
    }

    server {
        listen 80;
        location /docktest/ {
            proxy_pass http://bex/;
        }
    }
}

With this configuration, requests on /docktest/ will be load balanced among the two servers listening on port 3000, while all other requests will be served from the standard html directory, namely /etc/nginx/html. To make sure this works, we also include a short index.html:

<center>Hello from docker load balancer</center>

Now that we have the configuration file and a static resource, the Dockerfile will simply copy these two files into the image. Let’s call it Dockerfile.lb. It looks like this:

FROM nginx:1.20-alpine

# Install the configuration and resource files
COPY nginx.conf /etc/nginx/
COPY index.html /etc/nginx/html/

To build the image: docker build . -t mszmurlo/loadbalancer -f Dockerfile.lb.

Now that we have all the images in place, let’s check that our platform works. We’ll first define the communication network, docktestnet, then start the backend servers (be0, be1) and the load balancer (lb), and finally we’ll follow the logs of lb:

docker network create --driver bridge docktestnet
docker run --rm -d --name be0 --network docktestnet mszmurlo/docktest:0.1.4
docker run --rm -d --name be1 --network docktestnet mszmurlo/docktest:0.1.4
docker run --rm -d -p 2000:80 --name lb --network docktestnet mszmurlo/loadbalancer
docker container logs -f lb

Notice that with this configuration, the backend servers are not visible from the host: they can only be reached through the load balancer. This is good from a security point of view.

Accessing http://localhost:2000/ works fine. Requesting http://localhost:2000/docktest/ multiple times returns two alternating replies, one with the server ID of be0 and one with the server ID of be1, which demonstrates that load balancing also works fine. Check the Nginx load balancing documentation for options other than the default round robin.
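As a sketch of those options, assuming the same upstream block as above, the default round-robin strategy can be replaced or tuned with a couple of extra directives:

```nginx
# Hypothetical variants of the upstream block above; pick one strategy.
upstream bex {
    least_conn;                 # route each request to the backend with the fewest active connections
    server be0:3000 weight=2;   # with weights, be0 receives about twice as many requests as be1
    server be1:3000;
}
```

The weight parameter also works with plain round robin; least_conn simply takes the weights into account when comparing connection counts.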

A basic “Compose” version

Compose doesn’t do any black magic: we still need the Dockerfiles to create the images, we still need all the configuration files, and we still need to know how to put all the bricks together. What Compose provides is a single place to define how the system is built and managed: typically built, started, stopped and inspected. This place is the Compose configuration file, called docker-compose.yml by default.

A Compose file is a YAML file with several sections, most of which are optional (see the Compose file specification).

We’ll be defining several versions of our docker-compose.yml file, each suffixed with a version number. Below is the docker-compose-0.yml file for our application:

version: "3.7"
services:
  be0:
    image: "mszmurlo/docktest:0.1.4"

  be1:
    image: "mszmurlo/docktest:0.1.4"

  lb:
    image: "mszmurlo/loadbalancer"
    ports:
      - "2000:80"

where:

  • version defines the version of the Compose file format to be used. The current version of the format is 3.9, but my installed version of Compose only supports up to 3.7.

  • services defines the list of services (basically, the containers) to be run. Here we have three services: the two backend servers be0 and be1, based on the image mszmurlo/docktest:0.1.4, and the load balancer service, based on mszmurlo/loadbalancer. For the load balancer, we map host port 2000 to container port 80, just as we would when starting the container manually with docker run ... -p 2000:80 ...

Once this is defined, we can start the application with docker-compose -f docker-compose-0.yml up. The output is as follows:

Starting docktest_be1_1 ... done
Starting docktest_lb_1  ... done
Starting docktest_be0_1 ... done
Attaching to docktest_be0_1, docktest_lb_1, docktest_be1_1

be0_1  | Server sid='6ac9c807-dafb-4df3-86d9-86291e9545ab' listening at http://:::3000
lb_1   | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
lb_1   | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
lb_1   | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
lb_1   | 10-listen-on-ipv6-by-default.sh: info: IPv6 listen already enabled
lb_1   | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
lb_1   | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
lb_1   | /docker-entrypoint.sh: Configuration complete; ready for start up
be1_1  | Server sid='90ab6ca5-86f8-4358-96fc-dfe8640a328f' listening at http://:::3000

The first part of the log above shows the startup of the containers. Compose has given each container an auto-generated name, e.g. docktest_be1_1. These names and the names defined in the services section both resolve to the same container (try docker exec -it docktest_be0_1 /bin/sh, then ping be1 and ping docktest_be1_1). The difference is that the name docktest_be1_1 is also visible from the host, while be1 can only be resolved from within the containers that form the application.
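As an aside, and as a hypothetical variant not used in the rest of this post, Compose lets you pin a container’s name with the container_name directive, at the cost of not being able to run several instances of that service:

```yaml
services:
  be0:
    image: "mszmurlo/docktest:0.1.4"
    container_name: be0   # the container is named plain "be0", on the host as well
```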

The second part shows the logs of each container as it starts. Each line is prefixed with the name of the container and has a dedicated color. Handy!

We can now try out our application:

for i in `seq 0 9`; do curl http://localhost:2000/docktest/; echo; done
  {"sid":"34cf8eaf-37bb-463d-bcb3-cbf2a6c50ad2","resp":"Hello world"}
  {"sid":"7345635f-2d6e-4a7f-8a18-b3783e128cd7","resp":"Hello world"}
  {"sid":"34cf8eaf-37bb-463d-bcb3-cbf2a6c50ad2","resp":"Hello world"}
  {"sid":"7345635f-2d6e-4a7f-8a18-b3783e128cd7","resp":"Hello world"}
  {"sid":"34cf8eaf-37bb-463d-bcb3-cbf2a6c50ad2","resp":"Hello world"}
  {"sid":"7345635f-2d6e-4a7f-8a18-b3783e128cd7","resp":"Hello world"}
  {"sid":"34cf8eaf-37bb-463d-bcb3-cbf2a6c50ad2","resp":"Hello world"}
  {"sid":"7345635f-2d6e-4a7f-8a18-b3783e128cd7","resp":"Hello world"}
  {"sid":"34cf8eaf-37bb-463d-bcb3-cbf2a6c50ad2","resp":"Hello world"}
  {"sid":"7345635f-2d6e-4a7f-8a18-b3783e128cd7","resp":"Hello world"}

Above we see that every second line comes from the same container: our load balancing works!

Some basic Compose commands

Besides the documentation, the list of available commands can be obtained from the command line with docker-compose -h, and the help for command xxx with docker-compose help xxx. Here is a quick list of the most useful commands:

  • docker-compose config: validates the configuration file

  • docker-compose ps: lists the running containers. The -a option lists all containers, both running and stopped

  • docker-compose up: builds (we’ll see that later), creates or re-creates, starts, and attaches to the containers of the application. The -d option runs the containers in the background

  • docker-compose down: stops and removes the containers

  • docker-compose start: starts a previously stopped set of containers

  • docker-compose stop: stops the containers without removing them, so that the application can be restarted later with the start command

  • docker-compose kill: kills running containers. They can be restarted later with the start command, but since they were killed, their state, especially that of volumes, is not guaranteed

  • docker-compose logs: shows the logs of the running containers

Networking

How can the above work without a network? Actually, Compose defines a default network named <project-name>_default, here docktest_default. With a few containers, there is no problem using it. But if you had to manage tens of containers for an application, it could become hard to figure out which data flows which way and which containers can see each other, so it’s always a good idea to define networks explicitly.

Networks are created in the docker-compose.yml file, just like services. They are declared at top level with the networks keyword, then listed in the definition section of each service that uses them. Here is the modified Compose file for our application, docker-compose-1.yml:

version: "3.7"
services:
  be0:
    image: "mszmurlo/docktest:0.1.4"
    networks:
      backend:

  be1:
    image: "mszmurlo/docktest:0.1.4"
    networks:
      backend:

  lb:
    image: "mszmurlo/loadbalancer"
    networks:
      backend:
    ports:
      - "2000:80"

networks:
  backend:

As with the bare Docker configuration, we don’t expose any ports from the be_x containers, so on the backend network they can only be reached through the load balancer. See the networking documentation page for much more information about network configuration.
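To illustrate why explicit networks help, here is a hypothetical layout with two networks, where only the load balancer sits on both; the backends would then be invisible to any service attached only to frontend:

```yaml
# Hypothetical two-network variant, not used in the rest of this post
services:
  lb:
    image: "mszmurlo/loadbalancer"
    networks:
      backend:
      frontend:
    ports:
      - "2000:80"

  be0:
    image: "mszmurlo/docktest:0.1.4"
    networks:
      backend:      # not attached to frontend

networks:
  backend:
  frontend:
```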

Volumes

Attaching a volume to a service works the same way as with networks: in the service’s definition section, add a volumes keyword and list the volumes to be mounted below it. One can mount host paths (bind mounts) and named volumes. Named volumes must be defined at top level.

As an example, let’s modify the definition of be0 and attach a named volume and a bind mount, mounted at /myvol_volume and /myvol_bind respectively:

version: "3.7"
services:
  be0:
    image: "mszmurlo/docktest:0.1.4"
    networks:
      backend:
    volumes:
      - type: volume
        source: my_volume
        target: /myvol_volume
      - type: bind
        source: .
        target: /myvol_bind

# other services and network definitions

volumes:
  my_volume:

We can check that the volumes are mounted properly in the container with docker exec -it docktest_be0_1 ls -l /.

A short syntax is also possible when no additional configuration is required:

    volumes:
      - "my_volume:/myvol_volume"
      - ".:/myvol_bind"

The long syntax allows additional configuration such as the mount type, whether the volume is mounted read-only, volume options such as its size, etc. See the documentation.
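As a sketch of those options, here is a long-syntax fragment with a read-only named volume and a size-limited tmpfs mount (the /scratch target is made up for the example):

```yaml
    volumes:
      - type: volume
        source: my_volume
        target: /myvol_volume
        read_only: true      # the service can read but not write this mount
      - type: tmpfs
        target: /scratch     # in-memory scratch space, lost when the container stops
        tmpfs:
          size: 10000000     # size limit in bytes (~10 MB); requires file format 3.6+
```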

Building images

So far we have assumed that the images used by our services already existed. However, if we were to change some configuration or source code, we would need to rebuild the image with docker build ... and then restart the application with Compose. Fortunately, Compose can also build images for us.

Below is the docker-compose-3.yml file where we build the images for the backend and the load balancer services:

version: "3.7"
services:
  be0:
    build:
      context: .
      dockerfile: Dockerfile.alpine-5
    image: "mszmurlo/docktest:0.1.5"
    networks:
      backend:

  be1:
    image: "mszmurlo/docktest:0.1.5"
    networks:
      backend:

  lb:
    build:
      context: .
      dockerfile: Dockerfile.lb
    image: "mszmurlo/loadbalancer"
    networks:
      backend:
    ports:
      - "2000:80"

networks:
  backend:

In the be0 service definition we introduce a new subsection, build, which tells Compose how to build the image. context tells Compose where to find the build context (here, the current directory), and dockerfile which Dockerfile to use; if not set, the default Dockerfile is used. If image is specified alongside build, Compose names the newly built image as specified by image. Since the image built for be0 is reused by be1, there is no need to build it twice. The same principle applies to the lb service.
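As a further sketch, the build subsection also accepts build arguments; here, a hypothetical GIT_COMMIT argument that the Dockerfile would have to declare with an ARG instruction:

```yaml
  be0:
    build:
      context: .
      dockerfile: Dockerfile.alpine-5
      args:
        GIT_COMMIT: "unknown"   # visible in the Dockerfile after a matching ARG GIT_COMMIT line
    image: "mszmurlo/docktest:0.1.5"
```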

Finally, we build our images with docker-compose -f docker-compose-3.yml build.

There are many more parameters to build an image; again, see the documentation.

Getting immortal

In the context of web services, load balancers are used for two main purposes:

  1. Well, they balance the traffic among all the servers in the cluster, which makes it easy to add resources during traffic peaks by adding more servers (or containers).

  2. They also keep the service available even if one (or several) servers crash, by redistributing the traffic to the remaining servers (or containers).

To convince yourself, start the application with docker-compose -f docker-compose-3.yml up -d, start a supervision in another terminal with docker stats, then kill one backend with curl http://localhost:2000/docktest/kill. You will see one of the containers (probably be0, but that doesn’t matter) disappear from the supervision. Now send one curl http://localhost:2000/docktest/ and you’ll get a reply; send another one and you’ll also get a reply, but only after a few seconds, the time it takes Nginx to detect the failure and resend the request to another server. From then on, all the requests you send will end up on the remaining container.

What Docker adds to this picture is the ability to automatically restart a stopped container under certain conditions. To make a container automatically restartable, add restart: "unless-stopped" to each service, as in docker-compose-4.yml:

version: "3.7"
services:
  be0:
    build:
      context: .
      dockerfile: Dockerfile.alpine-5
    image: "mszmurlo/docktest:0.1.5"
    networks:
      backend:
    restart:  "unless-stopped"

  be1:
    image: "mszmurlo/docktest:0.1.5"
    networks:
      backend:
    restart: "unless-stopped"

  lb:
    build:
      context: .
      dockerfile: Dockerfile.lb    
    image: "mszmurlo/loadbalancer"
    networks:
      backend:
    ports:
      - "2000:80"
    restart: "unless-stopped"

networks:
  backend:

and retry the previous experiment. You’ll see a container disappear and reappear after a couple of seconds. As the new container has the same name as the one that got killed, the load balancer has no trouble continuing to send traffic to it. Notice that we also added a restart directive to the load balancer: if for some reason it crashes, Docker will restart it.

The possible values for restart are:

  • "no": no restart; this is the default.

  • "always": the container always gets restarted.

  • "on-failure": restarts the container when its exit code indicates a failure.

  • "unless-stopped": always restarts the container, except when it has been explicitly stopped.

Conclusion

As we have seen, Compose makes our lives easier when manipulating the several containers that form an application. There are many more things to explore in the documentation. Sure, you have not become a Compose ninja, but I believe this is just enough to be able to speak about it.

References