Spinning up a VM nowadays has become a breeze. You sign up with a cloud
provider (and there are plenty of them: Digital Ocean, upCloud,
Linode, not to mention the big guys like Google, AWS or Azure) and
for a few dollars per month you get a basic Linux VM, most of the time
more than sufficient to host some side projects or demos. Such VMs
usually come with a bare root account. This post is my TODO list to
get, in five minutes, a VM able to host some web apps.
Create a user and make him happy
From now on, I'll suppose you have a server with root access. This
server will be named my.domain.com throughout this post.
Create a user
Working with the root user is BAD! The first thing to do is to create a
user able to use the sudo command:
# Create a user: set a password and validate everything else
adduser auser

# Add that user to sudoers, the happy ones who can do sudo
usermod -a -G sudo auser
Add some basic utilities
Whatever we install, we should first run sudo apt update and
sudo apt upgrade to start from a sound base.
There are a couple of command-line tools that I always want to have at my fingertips:
- zile: I love emacs and I have been using emacs-like editors since 1989. Emacs, however, is a fat guy that will use lots of resources while, obviously, we don't need all of its power on a system such as the one we are building. Zile is a good terminal-oriented alternative: it is smaller than vi or nano and still offers lots of the functionality found in emacs. An alternative is jed, a bit bigger but with macros and scripting capabilities through an extension language named slang.
- tree: A gadget to show a directory structure in an ASCII-graphics way. Handy.
- htop: Better process visualization and monitoring than top.
- git: The version control application. At system level, git is very useful for managing the history of configuration files and system scripts. I like to have a notification in the prompt if I'm in a git repository and, if so, on which branch I am and what its status is. This had been discussed in the post Git status as terminal prompt. See the section below.
- jq: jq is a command-line JSON processor: jq can transform JSON in various ways, by selecting, iterating, reducing and otherwise mangling JSON documents. (From the jq man page.) A quick one-liner appears just after this tool list.
- lftp: a command-line FTP client with handy features like completion and many unix-like commands rather than bare-bone FTP commands.
- tmux: a terminal multiplexer. When I "discovered" it a few years ago I just wondered how it was possible that I had lived without using such a tool before! Basically, when started on the remote machine, tmux provides a terminal (in which you can execute any terminal application) from which you can detach and which will continue to run. You later re-attach to that terminal. tmux can split a text window into several sub-windows, manage several sessions at the same time, etc. Powerful! Just try the following:
## on your dev workstation
$ ssh auser@my.domain.com

## on the remote server
$ tmux

## In the tmux shell
$ n=1; while true; do echo $n; n=`expr $n + 1`; sleep 1; done
1
2
Ctrl-B d    # Control-B then d to detach from tmux
Ctrl-D      # Control-D to logout from the remote server

## Back on the local machine
$ ssh auser@my.domain.com

## On the remote server
$ tmux attach
67
68          # the program continued to execute while we were disconnected

Ctrl-B d    # Control-B then d to detach from tmux
Ctrl-D      # Control-D to logout from the remote server
From now on, forget about ssh or VPN timeouts due to idle connections!
- sshfs: sshfs allows you to manipulate files on a server that runs the ssh daemon. Install sshfs on your development machine with sudo apt install sshfs and try the following (on the dev machine):
mkdir ~/mnt
sshfs auser@my.domain.com:/home/auser ~/mnt
ls -l ~/mnt
I love that one!
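As promised above, here is a quick taste of jq; the JSON document is made up for the example:

# Extract one field from a JSON document and print its elements one per line
echo '{"name": "auser", "groups": ["sudo", "docker"]}' | jq '.groups[]'
# "sudo"
# "docker"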
So, at the end of the day, let’s just install all those tools in one shot:
apt install -y zile tree htop git jq tmux lftp sshfs
Note that in a server environment you probably won't need tmux, lftp
or sshfs, which are more client-oriented. Yet, if you are used to
them, it's convenient to have them installed, just in case.
Provide our user with a nice terminal environment
The terminal environment is really a matter of taste and habit. I'm using
bash and my .bashrc is really simple, no bells or whistles:
# ----- Aliases
alias ll="\ls -lhp"   # --color=auto
alias rm="rm -i"
alias cp="cp -i"
alias mv="mv -i"
alias df="df -h"
alias du="du -h"
alias grep="grep --color=auto"
alias fgrep="fgrep --color=auto"
alias egrep="egrep --color=auto"

# ----- Control bash behaviour. See bash(1)
HISTCONTROL=ignoredups
HISTSIZE=10000
HISTFILESIZE=20000
shopt -s histappend
shopt -s checkwinsize


# ----- Prompt on 2 lines:
#   working directory (git status if in git repo)
#   username@hostname>
export PS1='\w$(get_git_status)\n\u@\h> '

# ----- Path
# Sometimes, it's better to install stuff locally and not systemwide.
# ~/usr/bin is the place for this
export PATH=$HOME/usr/bin:$PATH
The PS1 definition uses get_git_status, a function that shows the
status of a git repository when your current directory is inside one. See the
post Git status as terminal prompt for more information.
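The linked post has the full version; a minimal sketch of such a function, assuming you only want the branch name and a dirty marker, could look like this:

# Minimal sketch of a get_git_status-like function: prints " (branch*)"
# when inside a git repository, "*" marking uncommitted changes.
get_git_status() {
  local branch dirty=""
  branch=$(git symbolic-ref --short HEAD 2>/dev/null) || return
  [ -n "$(git status --porcelain 2>/dev/null)" ] && dirty="*"
  echo " (${branch}${dirty})"
}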
Finally, add a usr
directory tree for personal installations:
mkdir -p ~/usr/bin
Use git to manage configuration files and scripts
As said above, git is very useful for managing the history of
configuration files and system scripts. Let's create a configuration
file, ~/.gitconfig, with some shortcuts (a short usage example follows the file):
[user]
name = Your Name
email = your.name@domain.com
[alias]
co = checkout
cob = checkout -b
lg = log --oneline --graph --decorate
lgl = log --oneline --graph -n 15 --decorate
lge = log --oneline --graph --name-status --decorate
lgel = log --oneline --graph --name-status -n 10 --decorate
l = log --graph --name-status -n 6 --decorate
ciam = commit -a -m
cia = commit -a
cim = commit -m
ci = commit
br = branch --color -v
st = status
s = status -bs
[color]
ui = auto
[core]
editor = zile
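As an illustration, and to exercise the aliases above, you could put the configuration files we just wrote under version control; this is only a suggestion, the rest of the post does not depend on it:

# Track the shell and git configuration files in a git repository
cd ~
git init
git add .bashrc .gitconfig
git ciam "Initial version of my configuration files"   # 'ciam' = commit -a -m
git s                                                   # 's'    = status -bs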
Make it easy and secure to connect from a client machine
Typing in a password every time you want to connect to the server is kind of painful. The solution is to allow connections using your public key. To achieve this, there are two steps.
Generate a ssh key pair on the client machine
You may first want to check whether you already have such a key with
ll ~/.ssh/id_*.pub. If this is the case, you can use this file or generate a new one. To generate a 4096-bit key pair:
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 4096 -C "your.email@maildomain.com"
You will be asked for a passphrase. The most secure is to provide one, but this may interrupt some automatic processes where ssh is involved. I tend to leave it empty. You will also be prompted for the location where to save the keys. The default, /home/your_login/.ssh/id_rsa, is fine. At the end of the process, you will end up with two files in the ~/.ssh directory:
- id_rsa is the private key, which has to remain on the local machine
- id_rsa.pub is the public key, which we will transfer to the remote server
Transfer the public key to the remote server
With sshfs:
sshfs auser@my.domain.com:/home/auser ~/mnt
mkdir -p ~/mnt/.ssh
cat .ssh/id_rsa.pub >> ~/mnt/.ssh/authorized_keys
umount ~/mnt
If I didn't manage to convince you about sshfs:
ssh auser@<server @IP> mkdir -p .ssh
cat .ssh/id_rsa.pub | ssh auser@<server @IP> 'cat >> .ssh/authorized_keys'
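Yet another option, if it is available on your client machine, is ssh-copy-id, which wraps the same append operation in a single command:

ssh-copy-id -i ~/.ssh/id_rsa.pub auser@my.domain.com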
And, finally, you can try to connect with ssh auser@my.domain.com:
you should be logged in without being prompted for a password.
If you need to connect to your server from many client machines,
repeat the same procedure and append new public keys to
.ssh/authorized_keys
.
Security
Access protection with the firewall
The server will usually be used only for hosting some web services,
so remote access is needed only on ports 22 (ssh) and 443 (https). My
distribution of choice is Ubuntu, and on Ubuntu the firewall is managed
with ufw:
# Allow incoming requests on ports 22 and 443
ufw allow ssh
ufw allow https

# Start the firewall
ufw enable
We can make sure all went OK:
ufw status
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
443/tcp (v6)               ALLOW       Anywhere (v6)
A few ssh tweaks
The default ssh configuration is usually good enough but, depending on
the usage, there are some parameters in /etc/ssh/sshd_config that can
be changed:
- PermitRootLogin no: once we have a user who can execute sudo, there is no reason to connect as root
- PermitEmptyPasswords no: obviously, it is a bad idea to keep this at yes
- PubkeyAuthentication yes: allows authentication with public keys
- X11Forwarding no: on this kind of machine there is little chance to do X11 forwarding…
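Once /etc/ssh/sshd_config has been edited, reload the ssh daemon so the changes take effect; keep your current session open and test a new connection before logging out, just in case:

sudo service ssh reload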
Let’s install some nice applications
Nginx
I'm using Nginx as a static website server as well as a reverse proxy to some services running locally on the server. I also only use HTTPS for obvious privacy and security reasons, so I need to install the Let's Encrypt tooling (certbot) to obtain and auto-renew certificates.
Install Nginx along with Let’s Encrypt
sudo apt install -y nginx
sudo apt install -y certbot python3-certbot-nginx
Change the default index.html
file
Nginx's home page is located at /var/www/html: create a simple
index.html to avoid showing the default file.
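Anything will do; for instance, a minimal placeholder page (purely illustrative) can be written straight from the shell:

sudo tee /var/www/html/index.html > /dev/null << 'EOF'
<!DOCTYPE html>
<html>
  <head><title>my.domain.com</title></head>
  <body><h1>Nothing to see here (yet)</h1></body>
</html>
EOF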
Configure a basic HTTP server
Nginx's configuration files are located in /etc/nginx. The
subdirectory sites-available/ hosts the configurations for the sites
served by the server; default is the default site. Change its
content to:
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
server_name my.domain.com;
}
Then check the configuration with sudo service nginx configtest and,
if everything is OK, reload the configuration with
sudo service nginx reload.
Alternatively, this can be done with sudo nginx -t and sudo nginx -s reload.
Obtain and install the certificates for your server
sudo certbot --nginx -d my.domain.com # -d someother.domain.com
certbot will ask some questions; reply according to your
preferences. The most important question is about redirecting all HTTP
traffic to HTTPS (effectively removing plain HTTP access), to which you
should reply "yes". certbot will change the configuration
file, which will look like this:
server {
root /var/www/html;
server_name my.domain.com;
listen [::]:443 ssl ipv6only=on; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/my.domain.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/my.domain.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = my.domain.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80 default_server;
listen [::]:80 default_server;
server_name my.domain.com;
return 404; # managed by Certbot
}
Renew the certificate automatically
Let's Encrypt certificates expire after 90 days. To be sure the site
keeps running without problems, it's a very good idea to renew the
certificates automatically. To do so, edit the crontab with sudo crontab -e
and add the following entry (every Sunday at 1 AM):
0 1 * * 0 /usr/bin/certbot renew --quiet
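You can verify that the renewal procedure will work, without actually renewing anything, by using certbot's dry-run mode:

sudo certbot renew --dry-run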
Docker
Since I started using Docker, it has kept its promise: "what runs in Docker on my laptop will run unchanged in production". We can use multiple frameworks, multiple versions of the same framework, various languages: installed on the same machine they might be incompatible, but once our application is packaged in an image, it is completely isolated from the rest of the world.
What I usually do is create an image of the app and run it on a specific port as a container. Then I use Nginx as a reverse proxy. For example:
- App_1 runs as a container and listens on port 3000
- App_2 runs as a container and listens on port 3010
- App_3 runs as a container and listens on port 3020
Then, Nginx is configured to distribute the traffic in the following manner:
- myserver.mydomain.com/app1 maps to localhost:3000/
- myserver.mydomain.com/app2 maps to localhost:3010/
- myserver.mydomain.com/app3 maps to localhost:3020/
Easy, no?
So let’s first install Docker, execute some containers and configure Nginx.
Install Docker
This section is almost a copy and paste from the official installation page.
If Docker is already installed, depending on its version, you may want to replace it with a newer one. So let's first uninstall it:
sudo apt remove docker docker-engine docker.io containerd runc
Install the packages needed to install from Docker's repository (most probably they are already there):
sudo apt install ca-certificates curl gnupg lsb-release
Add Docker’s public key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
Add Docker’s repository to our package manager
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
  https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Install Docker
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
Make Docker manageable by a non-root user. If you try something like docker ps you'll get a permission-denied error message because the Docker daemon binds to a Unix socket owned by root rather than to a TCP port. Thus, managing Docker with the docker command requires using sudo. To allow the auser user to manage Docker without sudo, we need to add him to the docker group. Notice, however, that this has security implications.
# Add the auser user to the docker group
sudo usermod -aG docker auser

# Activate the change
newgrp docker
Check that everything has been installed properly. You should get an output that looks like this:
docker run --rm hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
2db29710123e: Pull complete
Digest: sha256:507ecde44b8eb741278274653120c2bf793b174c06ff4eaa672b713b3263477b
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
...
Create a “hello world” HTTP application
I'm not going to spend a lot of time on this part as it is only for
demo purposes. The app is a NodeJS app. It defines a "server ID",
sid, as a UUID, whose purpose is to show which application (server)
responds when we run multiple Docker containers. It has
four endpoints:
- /: displays the HTTP headers on an HTML page.
- /hello: displays the JSON object
{
  sid: "ab7eca43-f375-4666-8cfb-f4093a4e90ce",
  resp: "Hello world"
}
- /hello/name: displays the JSON object
{
  sid: "ab7eca43-f375-4666-8cfb-f4093a4e90ce",
  resp: "Hello name"
}
- /kill: kills the server: exits with code -100 to simulate a crash.
Init the NodeJS application:
mkdir hello
cd hello
npm init
npm install express --save

cat > .dockerignore << EOF
node_modules
*.log
EOF
NB: .dockerignore contains the list of files, possibly with wildcards, that we don't want to have COPY-ed into the image when it is created later by docker build. Create the index.js file:
// Libraries and global variable section
var express = require('express')
var crypto = require('crypto')
var app = express()
var sid = crypto.randomUUID()

// Server start section. The server will be listening on port 3000
var port = 3000
var server = app.listen(port, function () {
  var host = server.address().address
  console.log(`Server sid='${sid}' listening at http://%s:%s`, host, port)
})

// Routes section
app.get('/', function (req, res) {
  h1 = `<p>Hello from server ID:'${sid}'</p>`
  h2 = `<p>Current date is '${Date.now()}'</p>`
  r = `<p>Headers: <pre>${JSON.stringify(req.headers, null, 2)}</pre></p>`
  res.send(`${h1}${h2}${r}`)
})

app.get('/hello', function (req, res) {
  res.json({"sid": sid, "resp": "Hello world"})
})

app.get('/hello/:who', function (req, res) {
  res.json({"sid": sid, "resp": `Hello '${req.params.who}'`})
})

app.get('/kill', function (req, res) {
  process.exit(-100)
  res.send("Not supposed to reply...")
})
Create a Docker image for the application
Create a Dockerfile based on Alpine Linux to minimize the size of the image:
# Build from the latest LTS version
FROM node:16.13.2-alpine3.15

# Use /app as the working directory
WORKDIR /app

# Install the application's dependencies and the app itself.
# The .dockerignore file prevents copying the node_modules
# directory and any log files
COPY . .
RUN npm install --production

# Expose the port the app listens on to the outside world
EXPOSE 3000

# Define the command to start the server
CMD ["node", "index.js"]
Create the image and try it
docker build . -t mszmurlo/hello:0.1
docker run -d --rm -p 3000:3000 mszmurlo/hello:0.1
Then access the URL http://localhost:3000 from a browser and you should get something like:
Hello from server ID:'7c4a0108-b1cb-4d71-8887-1e3c84780325'
Current date is '1644652873908'
Headers:
{
  "host": "localhost:3000",
  "user-agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:96.0) Gecko/20100101 Firefox/96.0",
  "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8",
  "accept-language": "fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3",
  "accept-encoding": "gzip, deflate",
  "connection": "keep-alive",
  "cookie": "cookies omitted",
  "upgrade-insecure-requests": "1",
  "sec-fetch-dest": "document",
  "sec-fetch-mode": "navigate",
  "sec-fetch-site": "cross-site",
  "if-none-match": "W/\"70c-mjX1D11euwUIqTTIzv4cF7zweEQ\"",
  "cache-control": "max-age=0"
}
Test the container's resilience
A nice feature of Docker is that it monitors the exit code of the app running in the container and can restart it in case of certain events or failures. The following restart policies can be used:
- no: this is the default. Do not restart, whatever has happened
- always: always restart, whatever has happened, except if the container had been stopped explicitly
- unless-stopped: restart the container except if it was in stopped state before the Docker daemon was stopped
- on-failure: if the container exited with a non-zero exit code or if the Docker service daemon restarts, then restart the container
Restart the container with the --restart always option:
docker run -d --restart always -p 3000:3000 mszmurlo/hello:0.1
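Before testing, you can check that the container is indeed up, for instance:

docker ps --format '{{.ID}}  {{.Image}}  {{.Status}}'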
Then, from another terminal, make some tests:
curl http://localhost:3000/hello
  {"sid":"427f9af5-dba5-42e1-925d-83b9cf494f57","resp":"Hello world"}
curl http://localhost:3000/kill
  curl: (52) Empty reply from server
curl http://localhost:3000/hello
  {"sid":"d08a5024-e557-4492-b279-b8d05feeded5","resp":"Hello world"}
Notice that the sid has changed without any manual restart action.
Push the image to an image repository
The first repository that comes to mind while using Docker is Docker Hub.
Head to https://hub.docker.com/ and create an account. Hosting images on Docker Hub is free.
- Login to the hub: on the command line, type docker login. You'll be asked for the username and password you defined during account creation.
- Tag the image that you have created with your Docker Hub username (ours already carries it): docker tag mszmurlo/hello:0.1 mszmurlo/hello:0.1
- Push the image to the hub: docker push mszmurlo/hello:0.1. Once done, you may want to head to the repositories list on the hub to see that the image has been uploaded properly.
Test the image on the VM
- Log in on the VM
- Run a container based on this image: docker run -d --rm -p 3080:3000 mszmurlo/hello:0.1
- Of course, as the image only exposes the application on port 3000 and as we map that port to port 3080, it is not possible to reach it right now from outside. However, we can test it locally: curl localhost:3080.
Conclusion
At this point we have a working virtual machine with some nice applications installed, which is able to serve static websites and to run Docker containers.
In the next section, we'll see how to expose multiple dockerized applications on our server.
Exposing multiple dockerized applications to the internet
Exposing one app through Nginx
In this section we will configure Nginx to proxy requests from the internet to our application.
Start one container:
docker run --restart always -d -p 3010:3000 mszmurlo/hello:0.1
Configure Nginx to reverse-proxy the URL https://my.domain.com/hello to http://localhost:3010. Add the following snippet to the HTTPS server section in /etc/nginx/sites-available/default:
location /hello/ {
    proxy_pass         http://localhost:3010/;
    proxy_set_header   Host $host;
    proxy_set_header   X-Real-IP $remote_addr;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Proto $scheme;
}
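As before, check the configuration and reload Nginx so the new location is taken into account:

sudo nginx -t && sudo nginx -s reload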
If everything is OK, a curl https://my.domain.com/hello/hello should return something like {"sid" : "74742d59-340f-4e9a-b2f2-b375b9d56c0c", "resp" : "Hello world"}.
As the container had been started with --restart always, it should restart in case of any failure. To check this, let's run the following command on the development machine:
while true; do
  curl https://my.domain.com/hello/hello
  echo ""
  sleep 1;
done
This will print JSON lines like {"sid":"74742d59-340f-4e9a-b2f2-b375b9d56c0c","resp":"Hello world"} forever. In another terminal or in a browser, hit the https://my.domain.com/hello/kill endpoint. The first terminal should show 502 Bad Gateway error lines for a while before starting to show JSON lines again, with another sid value.
Exposing multiple apps through Nginx
Exposing multiple dockerized apps is simply a matter of defining
several location sections in Nginx's configuration file and having
each container listen on a dedicated port.
Stop all previously started containers with
docker stop $(docker container ls -q), then start three Docker containers:
docker run --restart always -d -p 3010:3000 mszmurlo/hello:0.1
docker run --restart always -d -p 3020:3000 mszmurlo/hello:0.1
docker run --restart always -d -p 3030:3000 mszmurlo/hello:0.1
Update the Nginx config file with three endpoints, /app1/, /app2/ and
/app3/, mapped to the containers listening on ports 3010, 3020 and
3030. Replace the previously defined location section with the
following snippet of stripped-down location sections:
location /app1/ {
    proxy_pass http://localhost:3010/;
}

location /app2/ {
    proxy_pass http://localhost:3020/;
}

location /app3/ {
    proxy_pass http://localhost:3030/;
}
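Again, check and reload the Nginx configuration before testing:

sudo nginx -t && sudo nginx -s reload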
Call each of the applications in sequence to test whether they are all running properly:
n=1;
while true; do
  echo "/app${n} -> `curl -s https://my.domain.com/app${n}/hello/${n}`"
  n=$(( $n % 3 + 1 ));
done
You may also want to invoke the
/kill
endpoint on one of the applications to test if it restarts properly.
Conclusion
Where have we got so far? We have a secured VM with a shell environment and some CLI applications. We are able to host static web sites but, more importantly, we are able to host dockerized applications. Sweet.
This post was quite long but the actual setup is quite fast: copy and paste the sections of interest and you should be ready in a few minutes.
References
Where to get a cheap VM
- Digital Ocean : 1CPU/1GB/25GB-SSD/1TB-Transfer : 5USD
- Vultr : 1CPU/1GB/25GB-SSD/1TB-Transfer : 5USD. Has even cheaper plans with smaller machines
- upCloud : 1CPU/1GB/25GB-MaxIOPS/1TB-Transfer : 5USD
- Linode : 1CPU/1GB/25GB-SSD/1TB-Transfer : 5USD
- Kamatera: 1CPU/1GB/20GB-SSD/5TB-Transfer : 4USD
Pricing for the "big providers" is less obvious, even if each of them claims to be the clearest and the cheapest. Guys, if it's unclear, it's more difficult to trust!
- Google : 1G/3.75GB/?/? : 7.3USD. 300USD offered for testing
- AWS : 2CPU/1GB/?/? : 6USD. 750 free hours of a t2.micro instance during one year
- Azure : 1CPU/1GB/4GB/? : 7.6USD. Has also free forever plans for some services.
- Oracle : Oracle offers a quite interesting free-forever tier with 2 AMD VMs, up to 4 ARM instances, 200GB of block storage, 10GB of object storage, and 2 autonomous databases with 20GB each, as well as 300USD in free credits. Probably the most generous offer. But the overall pricing is quite obscure.
- IBM: 200USD of free credit for 30 days and a free tier, but it's really unclear what is included.
Other
- A nice list of free for dev services.