Spinning up a VM has become a breeze nowadays. You sign up with a cloud provider (and there are so many of them: Digital Ocean, upCloud, Linode, not to mention the big guys like Google, AWS or Azure) and for a few dollars per month you get a basic Linux VM, most of the time more than sufficient to host some side projects or demos. Such VMs usually come with a bare root account. This post is my TODO list to get, in five minutes, a VM able to host some web apps.

Create a user and make him happy

From now on, I’ll suppose you have a server with root access. This server will be named my.domain.com throughout this post.

Create a user

Working as root is BAD! The first thing to do is to create a user able to use sudo:

# Create a user: set a password and validate everything else
adduser auser

# Add that user to the sudo group, the happy ones who can do sudo
usermod -a -G sudo auser
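
You can quickly check that the new user can indeed use sudo:

# Switch to the new user; sudo should report that we can become root
su - auser
sudo whoami   # expected output: root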

Add some basic utilities

Whatever we install, we should first sudo apt update and sudo apt upgrade to start with a good basis.
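
That is:

sudo apt update
sudo apt upgrade -y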

There are a couple of command line tools that I always want to have at my fingertips:

  • zile: I love emacs and I have been using emacs-like editors since 1989. Emacs, however, is a fat guy that will use lots of resources while, obviously, we don’t need all of its power on a system such as the one we are building. Zile is a good terminal oriented alternative: it is smaller than vi or nano and still offers lots of the functionality found in emacs.

  • tree: A gadget to show a directory structure in an ASCII-graphics way. Handy.

  • htop: Better process visualization and monitoring than top.

  • git: The version control application. I’ll come back to this below.

  • jq: jq is a command line JSON processor: jq can transform JSON in various ways, by selecting, iterating, reducing and otherwise mangling JSON documents. (From jq man page).

  • tmux is a terminal multiplexer. When I “discovered” it a few years ago I just wondered how it was possible that I had lived without such a tool before! Basically, when started on the remote machine, tmux provides a terminal (in which you can execute any terminal application) from which you can detach and which will continue to run. You can later re-attach to that terminal. tmux also allows splitting a text window into several sub-windows, it can manage several sessions at the same time, etc. Powerful!

    Just try the following:

    ## on your dev workstation
    $ ssh auser@my.domain.com
      
    ## on the remote server
    $ tmux
      
    ## In tmux shell
    $ n=1; while true; do echo $n; n=`expr $n + 1`; sleep 1; done
    1
    2
    Ctrl-B d # Control-B then d to detach from tmux
    Ctrl-D  # Control-D   to logout from the remote server
      
    ## Back on the local machine
    $ ssh auser@my.domain.com
      
    ## On the remote server
    $ tmux attach
    67
    68 # the program continued to execute while we were disconnected
      
    Ctrl-B d # Control-B then d to detach from tmux
    Ctrl-D  # Control-D   to logout from the remote server
    

    From now on, forget about ssh or VPN timeouts due to idle connections!

So, at the end of the day, let’s just install all those tools in one shot:

apt install -y zile tree htop git jq tmux

Another magic tool I’ve discovered not so long ago is sshfs, which allows manipulating files on a server that runs the ssh daemon. Install sshfs on your development machine with sudo apt install sshfs and try the following (on the dev machine):

mkdir ~/mnt
sshfs auser@my.domain.com:/home/auser ~/mnt
ls -l ~/mnt

I love that one!
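
When you are done, unmount the remote directory (on some systems a plain umount needs root, in which case use fusermount):

fusermount -u ~/mnt   # or: umount ~/mnt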

Provide our user with a nice terminal environment

The terminal environment is really a matter of taste and habit. I’m using bash and my .bashrc is really simple, no bells or whistles:

# ----- Aliases
alias ll="\ls -lhp" # --color=auto
alias rm="rm -i" 
alias cp="cp -i" 
alias mv="mv -i" 
alias df="df -h" 
alias du="du -h" 
alias grep="grep --color=auto"
alias fgrep="fgrep --color=auto"
alias egrep="egrep --color=auto"

# ----- Control bash behaviour. See bash(1)
HISTCONTROL=ignoredups 
HISTSIZE=10000
HISTFILESIZE=20000
shopt -s histappend
shopt -s checkwinsize


# ----- Prompt on 2 lines:
# working directory (git status if in git repo)
# username@hostname>
export PS1='\w$(get_git_status)\n\u@\h> '

# ----- Path
# Sometimes, it's better to install stuff locally and not systemwide.
# ~/usr/bin is the place for this
export PATH=$HOME/usr/bin:$PATH

Add a usr directory tree for personal installations:

mkdir -p ~/usr/bin

Use git to manage configuration files and scripts

At the system level, git is very useful for managing the history of configuration files and system scripts. I like to have a notification in the prompt if I’m in a git repository and, if so, on which branch I am and what its status is. The basic idea is to execute a git status query every time the prompt is displayed and to add the result to the prompt text. For more details see a previous post of mine.

  1. Add the following function to .bashrc:

    # ----- git status if in git repository
    function get_git_status () {
    LANGUAGE=en git status -b --porcelain 2>&1 | awk '
      BEGIN {
        branch_name = "" 
    not_in_git = 0			# assume we are in a git repo
        status = ""
      }
       
      {
        if($1=="fatal:") {   # in git repo
     	  not_in_git = 1
    	  exit
        }
        if($1=="##") {       # format is : ## <branch name>
    	  branch_name = $2
    	  next
        }
       
    if(length($1) == 2)            # Get the first char of the 'XY' status
          car = substr($1, 1, 1)
        else
          car = substr($1, 0, 1)
        if(car=="?" || car=="M" || car=="A" || car =="D" || car=="R" || 
           car=="C" || car=="U") {
          status = status car
          next
        }
      }
       
      END {
        s = ""
        if(not_in_git == 0) {
    	  if(length(status) > 0)
      	    s = " (\033[31m"branch_name" "status"\033[39m)"
    	  else
      	    s = " (\033[32m"branch_name"\033[39m)"
        }
        if(s) 
          print s
      }
    '
    }
    
  2. Create ~/.gitconfig, the git configuration file

    [user]
        name = Your Name
        email = your.name@domain.com
       
    [alias]
        co    = checkout
        cob   = checkout -b
        lg    = log --oneline --graph --decorate
        lgl   = log --oneline --graph -n 15  --decorate
        lge   = log --oneline --graph --name-status --decorate
        lgel  = log --oneline --graph --name-status -n 10 --decorate
        l     = log --graph --name-status -n 6 --decorate
        ciam  = commit -a -m
        cia   = commit -a
        cim   = commit -m
        ci    = commit
        br    = branch --color -v
        st    = status
        s     = status -bs
    [color]
      ui = auto
    [core]
      editor = zile
    

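With this in place, any directory of scripts or configuration files can be put under version control. For example, a minimal sketch to track the personal scripts living in ~/usr:

cd ~/usr
git init
git add .
git commit -m "Initial version of my scripts"
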
Make it easy and secure to connect from a client machine

Typing in passwords every time you want to connect to the server is kind of painful. The solution is to allow connections using your public key. To achieve this, there are two steps, plus a final check that everything works.

  1. Generate a ssh key pair on the client machine

    You may first want to check whether you already have such a key with ls -l ~/.ssh/id_*.pub. If you do, you can use this file or generate a new one.

    To generate a 4096-bit key pair:

    mkdir -p ~/.ssh
    ssh-keygen -t rsa -b 4096 -C "your.email@maildomain.com"
    

    You will be asked for a passphrase. The most secure option is to provide one, but this may interrupt some automated processes where ssh is involved. I tend to leave it empty.

    You will also be prompted for the location where to save the keys. Default is /home/your_login/.ssh/id_rsa which is fine.

    At the end of the process, you will end up with two files in ~/.ssh directory:

    • id_rsa is the private key which has to remain on the local machine

    • id_rsa.pub is the public key which we will transfer to the remote server

  2. Transfer the public key to the remote server.

    1. With sshfs:

      sshfs auser@my.domain.com:/home/auser ~/mnt
      mkdir -p ~/mnt/.ssh
      cat .ssh/id_rsa.pub >> ~/mnt/.ssh/authorized_keys
      umount ~/mnt
      
    2. If I didn’t manage to convince you about sshfs:

      ssh auser@<server @IP> mkdir -p .ssh
      cat .ssh/id_rsa.pub | \
         ssh auser@<server @IP> 'cat >> .ssh/authorized_keys'
      
  3. And, finally, you can try to connect with ssh auser@my.domain.com and you should login without being prompted for a password.

If you need to connect to your server from many client machines, repeat the same procedure and append new public keys to .ssh/authorized_keys.
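
On most Linux distributions, ssh-copy-id automates the whole key transfer in one command (assuming the key is in the default location on the client):

ssh-copy-id auser@my.domain.com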

Security

Access protection with the firewall

The server will usually be used only for hosting some web services so remote access is only on ports ssh:22 and https:443. My distribution of choice is Ubuntu and on Ubuntu the firewall is managed with ufw:

# Allow incoming requests on ports 22 and 443
ufw allow ssh
ufw allow https
 
# Start firewall
ufw enable

We can make sure all went OK:

ufw status
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
443/tcp (v6)               ALLOW       Anywhere (v6)
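
Note: the Nginx section below obtains Let's Encrypt certificates and redirects HTTP to HTTPS; if you follow it, HTTP must also be allowed through the firewall:

ufw allow http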

A few ssh tweaks

The default ssh configurations are usually good enough but depending on the usage there are some parameters in /etc/ssh/sshd_config that can be changed:

  • PermitRootLogin no: once we have a user who can execute sudo, there is no reason to connect as root

  • PermitEmptyPasswords no: obviously, it is a bad idea to keep this at yes

  • PubkeyAuthentication yes: allows authentication with public key

  • X11Forwarding no: on this kind of machine there is little chance to do X11 forwarding…
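
After editing /etc/ssh/sshd_config, restart the daemon so the changes take effect (on Ubuntu the service is named ssh; it may be sshd on other distributions):

sudo systemctl restart ssh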

Let’s install some nice applications

Nginx

I’m using Nginx as a static website server as well as a reverse proxy to some services running locally on the server. I also only use HTTPS for obvious privacy and security reasons so I need to install Let’s Encrypt’s suite to get and auto-renew certificates.

Install Nginx along with Let’s Encrypt

sudo apt install -y nginx 
sudo apt install -y certbot python3-certbot-nginx

Change the default index.html file

Nginx’s home page is located at /var/www/html: create a simple index.html to avoid showing the default file.
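
For example, a minimal placeholder page (the content is entirely up to you):

echo '<!DOCTYPE html><html><body><h1>my.domain.com</h1></body></html>' | sudo tee /var/www/html/index.html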

Configure a basic HTTP server

Nginx’s configuration files are located in /etc/nginx. The subdirectory sites-available/ hosts the configurations for the sites served by the server. default is the default site. Change its content to:

server {
  listen 80 default_server;
  listen [::]:80 default_server;
  root /var/www/html;
  server_name my.domain.com;
}

Then check the configuration with

sudo service nginx configtest

and if everything is OK, reload the configuration with

sudo service nginx reload

Alternatively, this can be done with sudo nginx -t and sudo nginx -s reload.

Obtain and install the certificates for your server

sudo certbot --nginx -d my.domain.com # -d someother.domain.com 

certbot will ask some questions; reply according to your wishes. The most important question is whether to redirect HTTP traffic to HTTPS (removing direct HTTP access), to which you should reply “yes”. certbot will then change the configuration file, which will look like this:

server {
    root /var/www/html;
    server_name my.domain.com;

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/my.domain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/my.domain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = my.domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80 default_server;
    listen [::]:80 default_server;
    server_name my.domain.com;
    return 404; # managed by Certbot
}

Renew the certificate automatically

Let’s Encrypt certificates expire after 90 days. To be sure the site keeps running without problems, it’s a very good idea to renew the certificates automatically. To do so, add the cron entry 0 1 * * 0 /usr/bin/certbot renew --quiet (every Sunday at 1AM) to root’s crontab with sudo crontab -e:
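
# Open root's crontab
sudo crontab -e

# Add this line: attempt a renewal every Sunday at 1AM
0 1 * * 0 /usr/bin/certbot renew --quiet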

Docker

Since I started using Docker, it has held its promise: “what runs in Docker on my laptop will run unchanged in production”. We can use multiple frameworks, multiple versions of the same framework, various languages: installed on the same machine they might be incompatible, but once our application is packaged in an image, it is completely isolated from the rest of the world.

What I usually do is to create an image of the app and run it on a specific port as a container. Then I use Nginx as a reverse proxy. For example:

  • App_1 runs as a container and listens on port 3000
  • App_2 runs as a container and listens on port 3010
  • App_3 runs as a container and listens on port 3020

Then, Nginx is configured to distribute the traffic in the following manner:

  • myserver.mydomain.com/app1 maps to localhost:3000/
  • myserver.mydomain.com/app2 maps to localhost:3010/
  • myserver.mydomain.com/app3 maps to localhost:3020/

Easy, no?

So let’s first install Docker, execute some containers and configure Nginx.

Install Docker

This section is almost a copy and paste from the official installation page.

  1. If docker is already installed, depending on its version, you may want to re-install a newer version. So let’s first uninstall it:

    sudo apt remove docker docker-engine docker.io containerd runc
    
  2. Install the packages needed to be able to install from Docker’s repository (most probably they are already installed):

    sudo apt install ca-certificates curl gnupg lsb-release
    
  3. Add Docker’s public key:

     curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
     sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
    
  4. Add Docker’s repository to our package manager

    echo \
    "deb [ \
     arch=$(dpkg --print-architecture) \
     signed-by=/usr/share/keyrings/docker-archive-keyring.gpg\
     ] \
    https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
    sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    
  5. Install Docker

    sudo apt update
    sudo apt install docker-ce docker-ce-cli containerd.io
    
  6. Make Docker manageable by a non-root user. If you try something like docker ps you’ll get a permission denied error message because the Docker daemon binds to a Unix socket owned by root rather than to a TCP port. Thus, managing Docker with the docker command requires using sudo.

    To allow auser to manage Docker without sudo we need to add him to the docker group. Notice, however, that this has security implications.

    # Add auser to the docker group
    sudo usermod -aG docker auser 
       
    # activate the change
    newgrp docker 
    
  7. Check that everything has been installed properly. You should get an output that looks like this:

    docker run --rm hello-world
      Unable to find image 'hello-world:latest' locally
      latest: Pulling from library/hello-world
      2db29710123e: Pull complete 
      Digest: sha256:507ecde44b8eb741278274653120c2bf793b174c06ff4eaa672b713b3263477b
      Status: Downloaded newer image for hello-world:latest
         
      Hello from Docker!
      This message shows that your installation appears to be working correctly.
      ...
    

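You may also want to make sure the Docker daemon starts at boot; on Ubuntu this is normally already the case right after installation:

sudo systemctl enable docker
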
Create a “hello world” HTTP application

I’m not going to spend a lot of time on this part as it is only for demo purposes. The app is a NodeJS app. It defines a “server ID”, sid, as a UUID, whose purpose is to show which application (server) responds when we run multiple docker containers. It has four endpoints:

  • /: displays the HTTP headers on an HTML page.
  • /hello: displays the JSON object

    {
      sid: "ab7eca43-f375-4666-8cfb-f4093a4e90ce",
      resp: "Hello world"
    }
    
  • /hello/name: displays the JSON object

    {
      sid: "ab7eca43-f375-4666-8cfb-f4093a4e90ce",
      resp: "Hello name"
    }
    
  • /kill: kills the server: it exits with code -100 to simulate a crash.

  1. Init the NodeJS application:

    mkdir hello
    cd hello
    npm init
    npm install express --save
       
    cat > .dockerignore << EOF
    node_modules
    *.log
    EOF
    

    NB: .dockerignore contains the list of files, possibly with wildcards, that we don’t want to be COPY-ed into the image when it is created later by docker build.

  2. Create the index.js file:

    // Libraries and global variable section
    var express = require('express')
    var crypto = require('crypto')
    var app = express()
    var sid = crypto.randomUUID()
       
    // Server start section. The server will be listening on port 3000
    var port = 3000
    var server = app.listen(port, function () {
        var host = server.address().address 
        console.log(`Server sid='${sid}' listening at http://%s:%s`, host, port)
    })
       
    // Routes section
    app.get('/', function (req, res) {
        const h1 = `<p>Hello from server ID:'${sid}'</p>`
        const h2 = `<p>Current date is '${Date.now()}'</p>`
        const r = `<p>Headers: <pre>${JSON.stringify(req.headers, null, 2)}</pre></p>`
        res.send(`${h1}${h2}${r}`)
    })
       
    app.get('/hello', function (req, res) {
        res.json({"sid": sid, "resp": "Hello world"})
    })
       
    app.get('/hello/:who', function (req, res) {
        res.json({"sid": sid, "resp": `Hello '${req.params.who}'`})
    })
       
    app.get('/kill', function (req, res) {
        process.exit(-100)
        res.send("Not supposed to reply...")
    })
    

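Before building the image, you can give the app a quick try on the dev machine, assuming Node.js is installed there:

node index.js &
curl http://localhost:3000/hello
# {"sid":"<some UUID>","resp":"Hello world"}
kill %1    # stop the background server
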
Create a Docker image for the application

  1. Create a Dockerfile based on Alpine Linux to minimize the size of the image

    # Build from the latest LTS version
    FROM node:16.13.2-alpine3.15
       
    # Use /app as the working directory
    WORKDIR /app
       
    # Install the application dependencies and the app itself
    # the .dockerignore file prevents copying node_modules
    # directory and any log files
    COPY . .
    RUN npm install --production
       
    # Expose the port the app listens on to the outside world
    EXPOSE 3000
       
    # Define the command to start the server
    CMD ["node", "index.js"]
    
  2. Create the image and try it

    docker build . -t mszmurlo/hello:0.1
    docker run -d --rm -p 3000:3000 mszmurlo/hello:0.1
    

    Then access the URL http://localhost:3000 from a browser and you should get something like:

    Hello from server ID:'7c4a0108-b1cb-4d71-8887-1e3c84780325'
    Current date is '1644652873908'
    Headers:
    {
      "host": "localhost:3000",
      "user-agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:96.0) Gecko/20100101 Firefox/96.0",
      "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8",
      "accept-language": "fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3",
      "accept-encoding": "gzip, deflate",
      "connection": "keep-alive",
      "cookie": "cookies omitted"
      "upgrade-insecure-requests": "1",
      "sec-fetch-dest": "document",
      "sec-fetch-mode": "navigate",
      "sec-fetch-site": "cross-site",
      "if-none-match": "W/\"70c-mjX1D11euwUIqTTIzv4cF7zweEQ\"",
      "cache-control": "max-age=0"
    }
    

Test the container’s resilience

A nice feature of Docker is that it monitors the exit code of the app running in the container and it can restart it in case of certain events or failures. The following restart policies can be used:

  • no: this is the default. Do not restart, whatever has happened

  • always: always restart whatever has happened except if the container had been stopped explicitly.

  • unless-stopped: restart the container except if the container was in stopped state before the Docker daemon was stopped

  • on-failure: if the container exited with a non-zero exit code or if the Docker daemon restarts, then restart the container

Restart the container with the --restart always option:

docker run -d --restart always -p 3000:3000 mszmurlo/hello:0.1

Then, from another terminal, make some tests:

curl http://localhost:3000/hello
  {"sid":"427f9af5-dba5-42e1-925d-83b9cf494f57","resp":"Hello world"}
curl http://localhost:3000/kill
  curl: (52) Empty reply from server
curl http://localhost:3000/hello
  {"sid":"d08a5024-e557-4492-b279-b8d05feeded5","resp":"Hello world"}

Notice the sid has changed without any manual restart action.
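
On the server, docker ps confirms the restart: the container is still there, but its uptime was reset after the simulated crash:

docker ps --format '{{.Names}}: {{.Status}}'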

Push the image to an image repository

The first repository that comes to mind when using Docker is Docker Hub.

  1. Head to https://hub.docker.com/ and create an account. Hosting images on Docker Hub is free.

  2. Login to the hub: on the command line, type docker login. You’ll be asked for the username and password you defined during account creation

  3. Make sure the image is tagged with your Docker Hub username and a repository name. Ours is already named mszmurlo/hello:0.1; if yours is not, re-tag it with something like docker tag hello:0.1 mszmurlo/hello:0.1

  4. Push the image to the hub: docker push mszmurlo/hello:0.1. Once done, you may want to head to the repositories list on the hub to see that the image has been uploaded properly.

Test the image on the VM

  1. Login on the VM

  2. Run a container based on this image: docker run -d --rm -p 3080:3000 mszmurlo/hello:0.1

  3. Of course, as the firewall only allows ports 22 and 443, the application mapped on port 3080 is not reachable from outside yet. However, we can test it locally: curl localhost:3080.

Conclusion

At this point in time we have a working virtual machine with some nice applications installed which is able to serve static websites and to run Docker containers.

In the next section, we’ll see how to expose multiple dockerized applications on our server.

Exposing multiple dockerized applications to the internet

Exposing one app through Nginx

In this section we will configure Nginx to proxy requests from the internet to our application.

  1. Start one container.

    docker run --restart always -d -p 3010:3000 mszmurlo/hello:0.1
    
  2. Configure Nginx to reverse proxy the URL https://my.domain.com/hello to http://localhost:3010. Add the following snippet to the HTTPS server section in /etc/nginx/sites-available/default:

    location /hello/ {
      proxy_pass http://localhost:3010/;
      proxy_set_header        Host $host;
      proxy_set_header        X-Real-IP $remote_addr;
      proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header        X-Forwarded-Proto $scheme;
    }
    

    Reload Nginx (sudo nginx -s reload) and, if everything is OK, a curl https://my.domain.com/hello/hello should return something like: {"sid" : "74742d59-340f-4e9a-b2f2-b375b9d56c0c", "resp" : "Hello world"}.

  3. As the container was started with --restart always, it should restart in case of any failure. To check this, let’s run the following command on the development machine:

    while true; do 
      curl https://my.domain.com/hello/hello
      echo ""
      sleep 1; 
    done
    

    This will print JSON lines like {"sid":"74742d59-340f-4e9a-b2f2-b375b9d56c0c","resp":"Hello world"} forever. In another terminal or in a browser, hit the https://my.domain.com/hello/kill endpoint. The first terminal should show 502 Bad Gateway error lines for a while before restarting to show JSON lines again with another sid value.

Exposing multiple apps through Nginx

Exposing multiple dockerized apps is simply a matter of defining several location sections in Nginx’s configuration file and having each container listening on a dedicated port.

  1. Stop all previously started containers with docker stop $(docker container ls -q) then start three docker containers:

    docker run --restart always -d -p 3010:3000 mszmurlo/hello:0.1
    docker run --restart always -d -p 3020:3000 mszmurlo/hello:0.1
    docker run --restart always -d -p 3030:3000 mszmurlo/hello:0.1
    
  2. Update the Nginx config file with three endpoints, /app1/, /app2/ and /app3/, mapped to the containers listening on ports 3010, 3020 and 3030 respectively. Replace the previously defined location section with the following stripped-down location sections, then reload Nginx (sudo nginx -s reload):

    location /app1/ {
         proxy_pass http://localhost:3010/;
    }
    location /app2/ {
         proxy_pass http://localhost:3020/;
    }
    location /app3/ {
         proxy_pass http://localhost:3030/;
    }
    
  3. Call each of the applications in sequence to test that they are all running properly:

    n=1; 
    while true; do 
      echo "/app${n} ->`curl -s https://my.domain.com/app${n}/hello/${n}`" 
      n=$(( $n % 3 + 1 )); 
    done
    

    You may also want to invoke the /kill endpoint on one of the applications to test if it restarts properly.

Conclusion

Where do we stand so far? We have a secured VM with a shell environment and some CLI applications. We are able to host static websites but, more importantly, we are able to host dockerized applications. Sweet.

This post was quite long but the actual setup is quite fast: copy and paste the sections of interest and you should be ready in a few minutes.

References

Where to get a cheap VM

  • Digital Ocean : 1CPU/1GB/25GB-SSD/1TB-Transfer : 5USD
  • Vultr : 1CPU/1GB/25GB-SSD/1TB-Transfer : 5USD. Has even cheaper plans with smaller machines
  • upCloud : 1CPU/1GB/25GB-MaxIOPS/1TB-Transfer : 5USD
  • Linode : 1CPU/1GB/25GB-SSD/1TB-Transfer : 5USD
  • Kamatera: 1CPU/1GB/20GB-SSD/5TB-Transfer : 4USD

Pricing for the “big providers” is less obvious even if each of them claims to be the clearest and the cheapest. Guys, if it’s unclear, it’s more difficult to trust!

  • Google : 1G/3.75GB/?/? : 7.3USD. 300USD offered for testing
  • AWS : 2CPU/1GB/?/? : 6USD. 750 free hours of a t2.micro instance during one year
  • Azure : 1CPU/1GB/4GB/? : 7.6USD. Has also free forever plans for some services.
  • Oracle : Oracle offers a quite interesting free forever tier with 2 AMD VMs, up to 4 ARM instances, 200GB of block storage, 10GB of object storage, 2 autonomous databases with 20GB each, as well as 300USD in free credits. Probably the most generous offer. But the overall pricing is quite obscure.
  • IBM: 200USD free credit for 30 days and a free tier, but it’s really unclear what is included.

Other