Working at BigTech™ can sometimes skew your view of the world (a surprise to very few of you). One example is seeing how Kubernetes can cope with the massive deployment scale of hundreds or thousands of servers and thinking to yourself that this is effectively required knowledge (and if not now, it soon will be), which leads to adopting these technologies for projects that will never see millions of daily active users. I fully leant into k8s for my home lab. At one point I had 6 netboot Raspberry Pis running a Kubernetes cluster with etcd/ceph, using FluxCD and GitOps to manage... what amounts to a couple of self hosted apps. It was a nightmare to maintain. On one hand, I could plug a new Raspberry Pi into the network, add the MAC address to the list of provisioned servers, and within 10 minutes it would be assimilated into the cluster. On the other hand, just idling caused 30% CPU load, and the SSDs had an estimated lifetime of about 4 years due to the excessive writes that etcd & ceph would make. All of this complexity was not worth it for a website that had 2-3 users (again, this will come as a surprise to very few of you).

This was obviously a ridiculous thing to do, but going through this process made me take stock of what I was looking for in a homelab, and also of how I build and ship everything. Kubernetes was costing me time and money for such little payoff, but the more I thought about it, so was other tech, like Docker. It was time to simplify.
So, the last few projects I've built have been handled differently. I've simply been copying the built artefacts onto a cheap server: no containers, no GitOps, no high availability, just ssh and scp. This is a simplification over Docker for several reasons:
- I don't need to use some kind of registry like Docker Hub or GitHub's. The registries are cool and all, but they're also time consuming to set up, and they create a dependency on a service.
- Process, hardware, and network isolation sure sound useful, but the majority of the time I'm running these services in isolation from each other by other means: physically separated hardware, VMs, or dare I say it, sometimes containers in containers.
- Speaking of isolation, it's always useful until it's not. I've found that if I want to go slightly off-piste, the simple becomes exceedingly difficult. Those who've tried to forward GPUs or USB devices know how painful tools like Docker can suddenly be.
- Having Docker set up on every machine I use is still a pain today. Plus it seems as though the company is increasingly profit seeking - which isn't necessarily innately bad, they have to make money somehow - but the second order effects are net-frustration. I've found "Docker Desktop" increasingly cumbersome and confusing to use as they add more to their product offering, and I don't want a "Docker Core Subscription", nor do I care about privacy policy changes. I just want a reliable way to boot a server.
- Docker is useful for rebooting services and running multiple services, and Docker Compose is the killer feature for me when running small apps. However, I've found using the built-in systemd can be quite painless and the syntax is simple. For me, systemd scripts feel like less work than docker-compose.yml.
- Docker only solves half of the problem for commit-to-production services. I use watchtower to try and keep containers up to date, which is a bit of a faff, but then so is sshing into a server to run docker compose up --force-recreate --build -d every time I want to deploy. I'd prefer a more seamless setup that's fire-and-forget.
I could switch to alternatives, such as LXC or Podman, but they only solve some of these problems. So instead I've just been buying more VPSs (virtual private servers) and using a combination of ssh, rsync and systemd scripts - all built into most Linux distros. Containers are nice, but after a decade of using them I'm not fully convinced they solve problems at the small scale. My belief is that Docker, like k8s, is "trickle-down web scale" (which, much like trickle-down economics, pretends that consolidation at the top benefits the whole but really just makes things harder at the bottom). This old-new way of doing things has truly been delightful. I wanted to share my process, and where I automate bits of it, to get 80% of the benefits I saw from Docker with about 20% of the effort.
This "simple" workflow requires some manual steps because I'm trying to ward off the complexity demons and avoiding stepping into learning new things (though maybe one day I'll finally learn how to use nixos and this whole post will be redundant for me). Given these manual steps, I thought I wrote a playbook on how to set up a new server, and I figured it might be worthwhile sharing, so here it is (with some words around it because this is a blog after all):
Provisioning like it's 2005
Instead of using AWS's cloud services (or one of the other competing clouds), and instead of wedding myself to a framework like Cloudflare Workers or Next.js, my new projects all start out by provisioning a VPS. Before the days of AWS I used to provision a lot of VPSs (vee-pee-ess-es? vee-pee-ess-eye?), and even back then they were incredibly cheap to buy, but now they're so steeply commoditised you can pick up a VPS for less than €5/$5/£4/mo. I use Hetzner (not an endorsement) but you could try Hostwinds or Bluehost or InMotion or Hostinger or any other provider, and it'll likely be on the order of 1/10th the price of an EC2 instance. When you pick a provider, just ensure you can export your server as an image that you can then take to another provider if you like. The nice thing about administering a VPS is that there's little to no vendor lock-in.
I use Ubuntu Server as my distro of choice. While Ubuntu is reasonably secure out of the box, there are a few things I do to lock it down a little further. The downside to a VPS is it does take some administering, but it's not much. Here's a run-down of the commands I'll run to set up a VPS:
First step, updating and installing packages
This one should be obvious but it's here to remind me that an updated server is a more secure server.
apt update
apt upgrade
apt install neovim # I use this to edit files on the server. You might be fine with nano which is built in.
cat <<EOF >> ~/.bashrc
export SYSTEMD_EDITOR=nvim
EOF
. ~/.bashrc
Hardening the server from basic attacks:
# Install fail2ban to block script kiddies
cd
wget https://github.com/fail2ban/fail2ban/releases/download/1.1.0/fail2ban_1.1.0-1.upstream1_all.deb
dpkg -i fail2ban_1.1.0-1.upstream1_all.deb
cat <<EOF > /etc/fail2ban/jail.d/debian-defaults.conf
[DEFAULT]
banaction = nftables
banaction_allports = nftables[type=allports]
backend = systemd
[sshd]
enabled = true
EOF
systemctl enable fail2ban --now
Fail2Ban isn't strictly necessary but I've used it for a long time and it seems to work well for me. Be sure to check on /var/log/fail2ban.log to confirm it's still doing what it's supposed to be and banning attempts to ssh.
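A quick way to check is with the client that ships with the package (the jail name matches the [sshd] section above):
fail2ban-client status sshd # shows currently and previously banned IPs
tail -n 20 /var/log/fail2ban.log # recent ban/unban activity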
Next up, a firewall:
# Get firewall going
systemctl enable ufw --now
ufw enable
ufw default deny incoming
ufw default allow outgoing
ufw allow 22
ufw allow 80
ufw allow 443
ufw route allow proto tcp from any to any port 80
ufw route allow proto tcp from any to any port 443
systemctl restart ufw
The ufw firewall is built into Ubuntu but just needs enabling. I find it much simpler than running iptables commands directly. You could switch port 22 to another port for ssh, but I don't really see the value in that. Just block everything but 22, 80 and 443. On another machine, try running nmap to confirm this is doing what it's meant to.
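For example, a scan like this from your local machine (substituting your own domain or IP) should report only 22, 80 and 443 as open:
nmap -Pn <your-domain-here>.com # should list only 22, 80 and 443 as open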
Next I'll run ssh-keygen on my local machine and copy the contents of the .pub file, which then get pasted into authorized_keys:
# Add ssh pubkey to auth keys
nvim ~/.ssh/authorized_keys
# Stop password auth on SSH
cat <<EOF >> /etc/ssh/sshd_config
PasswordAuthentication no
EOF
systemctl restart ssh.service
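Before closing this session it's worth checking that the config parses and that key-based login still works; a quick check might look like this (keep the original session open in case of lock-out):
sshd -t # exits silently if the sshd configuration is valid
# then, from your local machine, in a separate terminal:
ssh -o PasswordAuthentication=no root@<your-domain-here>.com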
With all of this done I can start assembling the basic services.
Setting up Caddy & node exporter
In order to run a web service, it's a good idea to run things through a reverse proxy. I like Caddy as it's dead simple to administer and has some great defaults. Caddy has its own package setup for Debian, which is how I install it. Here's how:
apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | tee /etc/apt/sources.list.d/caddy-stable.list
apt update
apt install caddy
nvim /etc/caddy/Caddyfile
systemctl enable --now caddy.service
Caddy needs to be configured. The Caddyfile will depend a lot on what you're doing with the server, but here's a basic one:
{
    email <your-email-here>
    debug
    servers {
        timeouts {
            read_body 1m
            read_header 1m
            write 1m
            idle 10m
        }
        metrics
    }
}

<your-domain-here>.com {
    basic_auth /metrics {
        <basic-user> <basic-password>
    }
    reverse_proxy :8100
}

node.<your-domain-here>.com {
    basic_auth {
        <basic-user> <basic-password>
    }
    reverse_proxy :9100
}
So with this set up, I'll ensure my server is going to run on port 8100 and has a /metrics endpoint exposed. The email directive sets the contact address used for the automatic HTTPS certificates (replace <your-email-here> of course), and I introduce some timeouts to avoid payload attacks.
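After editing the Caddyfile, it can be checked and applied without downtime; a quick sketch, assuming the stock caddy service from the package:
caddy validate --config /etc/caddy/Caddyfile # check the syntax before applying
systemctl reload caddy # graceful reload, no dropped connections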
Of note, <basic-user> will need to be replaced with a username, and <basic-password> will need to be replaced with the output of caddy hash-password. The username and password combo can be given to a service like Prometheus to read the /metrics endpoints.
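Generating that hash looks something like this (example password, obviously):
caddy hash-password --plaintext 'correct-horse-battery-staple' # prints a bcrypt hash, which goes in place of <basic-password>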
The service running on port :9100 isn't something homegrown; that's node_exporter - a service which will give us metrics about the server's health, such as CPU and memory. This can be plugged into Prometheus to give us telemetry about the server itself. Here's how node_exporter is set up (instructions on how to set up Prometheus/Grafana are out of scope for this guide, but easy enough to find on the web; I have them set up on a separate machine, which is why the /metrics endpoints are web exposed):
# Set up node_exporter for monitoring server with Prometheus
cd
wget https://github.com/prometheus/node_exporter/releases/download/v1.8.2/node_exporter-1.8.2.linux-arm64.tar.gz # grab the linux-amd64 build instead if your VPS is x86_64
tar zxvf node_exporter-1.8.2.linux-arm64.tar.gz
mkdir /opt/node_exporter/
mv node_exporter-1.8.2.linux-arm64/node_exporter /opt/node_exporter/node_exporter
chmod +x /opt/node_exporter/node_exporter
sudo useradd -m node_exporter
sudo usermod -a -G node_exporter node_exporter
chown node_exporter:node_exporter /opt/node_exporter/node_exporter
systemctl edit --force --full node_exporter.service
systemctl enable node_exporter.service --now
lsof -i :9100 # confirm it's running on port :9100
This is going to use systemd to keep node_exporter running, so we'll need to make the unit file for that (that's the systemctl edit --force... command):
[Unit]
Description=Node Exporter
After=network.target
[Service]
Type=simple
User=node_exporter
Group=node_exporter
Restart=on-failure
RestartSec=100ms
WorkingDirectory=/opt/node_exporter
ExecStart=/opt/node_exporter/node_exporter
StandardOutput=append:/var/log/node_exporter.log
StandardError=append:/var/log/node_exporter.error.log
[Install]
WantedBy=multi-user.target
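Once it's enabled, a quick sanity check from another machine (substituting your own domain and the basic auth credentials from the Caddyfile) should return a wall of metrics:
curl -u <basic-user>:<password> https://node.<your-domain-here>.com/metrics | head -n 20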
Running your own application
So now comes the bit where we upload our own application servers to this VPS, and host them on port :8100. This is going to depend a lot on how you build your applications; for example, a NodeJS server will have different requirements to a Golang one. For my purposes, I've recently been building out apps in Rust, and this means I can package everything up into a single binary. The process will likely be similar for Go, but you might need to copy directories of applications and install some extra dependencies for a runtime platform like Node or Ruby.
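For the Rust case, the local build step really is just a release build producing one file to upload (if your dev machine and VPS differ in architecture you'll need to cross-compile, which is out of scope here):
cargo build --release
ls -lh target/release/<svc> # a single self-contained binary, ready to copy to the server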
So the process looks a lot like how we set up node_exporter. I'll create a user for the service, and an /opt/<svc> directory for the code to live within. Replace all references to <svc> with the name of your service:
useradd -m <svc>
usermod -a -G <svc> <svc>
mkdir /opt/<svc>
chown <svc>:<svc> /opt/<svc>
touch /opt/<svc>/.env # My apps will read a .env file, so this is a common step for me
chmod 0600 /opt/<svc>/.env
chown <svc>:<svc> /opt/<svc>/.env # the service user needs to be able to read it
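For completeness, the .env file just holds whatever configuration the app reads at startup. The keys below are purely illustrative (mine read a port and a database location; yours will differ):
cat <<EOF > /opt/<svc>/.env
PORT=8100
DATABASE_URL=sqlite:///opt/<svc>/data.db
EOF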
With a user created for the service, I'll also generate an ssh key on the server that can be used to log in as that user:
mkdir -p /home/<svc>/.ssh
cd /home/<svc>/.ssh
ssh-keygen -f key
mv key.pub authorized_keys
chmod 0600 authorized_keys
chown -R <svc>:<svc> /home/<svc>/.ssh
cat key # copy the contents of the private key somewhere for safe keeping
rm key
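From your local machine, the saved private key can be sanity-checked with something like this (the key path is wherever you stashed it):
ssh -i ~/.ssh/<svc>-deploy-key <svc>@<domain>.com 'whoami' # should print <svc>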
The ssh-key will be used later when uploading deploy binaries. Next I'll get the systemd files ready:
# Build the service files
systemctl edit --force --full <svc>.service
systemctl enable --now <svc>.service
Again this will look very similar to the node_exporter service:
[Unit]
Description=<svc>
After=network.target
[Service]
Type=simple
User=<svc>
Group=<svc>
Restart=on-failure
RestartSec=100ms
WorkingDirectory=/opt/<svc>
ExecStart=/opt/<svc>/<svc>
StandardOutput=append:/var/log/<svc>.log
StandardError=append:/var/log/<svc>.error.log
[Install]
WantedBy=multi-user.target
Here's where I'll go a step further though. To simplify deployment of these services, I'll set up a watcher service which systemd can use to restart the main service whenever the binary changes. This gives me graceful reloading of the service by simply rsyncing the binary over:
systemctl edit --force --full <svc>-watcher.service
[Unit]
Description=<svc> restarter
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl restart <svc>.service
[Install]
WantedBy=multi-user.target
This command is a "oneshot" command. Running this will just run the ExecStart
command and exit. On its own it's quite useless but adding a <svc>-watcher.path
systemd file can run this service whenever the path changes:
systemctl edit --force --full <svc>-watcher.path
[Path]
PathModified=/opt/<svc>/<svc>
[Install]
WantedBy=multi-user.target
With these two files combined and enabled, they'll run whenever the binary file changes - e.g. from an upload via ftp, scp, or rsync.
systemctl enable --now <svc>-watcher.{service,path}
This is all the necessary scaffolding to upload the binary to the server. With the ssh key you created for the user, it should be as simple as compiling your binary and copying it over. So from your local machine you can run this:
rsync -zp target/release/<svc> <svc>@<domain>.com:/opt/<svc>/
Every time this is run, the application server should be reloaded, thanks to the file watcher, and you should see your service running, thanks to Caddy.
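To confirm a deploy actually landed, a couple of checks on the server do the trick (paths as per the unit files above):
systemctl status <svc>.service --no-pager # should show a start time matching the upload
tail -f /var/log/<svc>.log # the unit file appends stdout here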
Deploying
We can make this a little smoother by using GitHub Actions or equivalent to automate commit-to-production. Here's a GitHub Actions workflow I use for a couple of basic Rust projects I have:
on:
  push:
    branches:
      - main
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Install Rust
        uses: actions-rs/toolchain@v1
        with:
          profile: minimal
          toolchain: 1.80.1
          override: true
      - name: Check Out
        uses: actions/checkout@v4.1.7
      - name: Set up Cache
        uses: actions/cache@v4.0.2
        with:
          path: |
            ~/.cargo/bin
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            target
          key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: ${{ runner.os }}-cargo-
      - name: Cargo Release Build
        run: cargo build --release
      - uses: actions/upload-artifact@v3
        with:
          name: <svc>
          path: target/release/<svc>
          if-no-files-found: error
          compression-level: 0
          overwrite: true
      - name: Upload to server
        run: |
          mkdir -p ~/.ssh
          chmod 700 ~/.ssh
          echo "${{ secrets.ARTIFACT_SSH_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan -p 22 ${{ secrets.ARTIFACT_HOST }} >> ~/.ssh/known_hosts
          rsync -zp target/release/<svc> <svc>@${{ secrets.ARTIFACT_HOST }}:/opt/<svc>/
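The two secrets referenced above need adding to the repository; one way to do that, assuming the GitHub CLI, is:
gh secret set ARTIFACT_HOST --body '<domain>.com'
gh secret set ARTIFACT_SSH_KEY < path/to/the-saved-private-key # the key generated on the server earlier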
Example
An example of this whole setup can be found at github.com:keithamus/tickrs, the code behind https://tick.rs - a little server I run to save numbers to a database.
Conclusions
That's pretty much it. I've built a handful of services using this method which have been running for months without intervention, and have been updated multiple times over the course of their lives. The initial setup of a server takes about 30 minutes, but from then on an update is a git push away.
I'm sure this post will prompt people to tell me that I'm doing it horribly wrong, or tell me the reasons why Docker is actually far superior, and that's okay. If you think I could simplify this further, I'd love to know. There's still a place on the web for using these tools at a certain scale. I'm not sure what the tipping point is for me but I intend to use this for as long as I can, and I'll be sure to update this post when one of my projects suddenly needs more than this can deliver.