Self-hosting with Podman

I've been a self-hoster for a while. The adventure started with a regular mini-PC manufactured by HP: 32 GB of RAM, a 10th-gen Intel CPU, and a 1 TB HDD. While it was a great experience at the beginning, over time it became a challenge. My stack was built with Portainer and a bunch of docker-compose files. That led to specific issues: with Portainer, you don't own the Compose files; they live inside the tool, not on the filesystem or in a Git repo.
Update Feb 2025: Portainer now supports GitOps.
Additionally, at some point the people behind the product decided to change the licensing model and allow the Community Edition to be used on up to 5 nodes only. That limit didn't affect me, but it pushed me towards something more independent. So I started using Dockge, then added another service for Docker logs, another for version monitoring, and kept adding applications that are fun to use, for example Homebox or BookStack. It was fun until I realised the cost of energy and the maintenance effort needed to keep it all running at my home. Every internet or power issue took my setup down. Maybe it didn't happen very often, but when I wasn't home and the hardware was down, there was no chance to fix it remotely. And I had started relying on those services. That is why I simply decided to migrate to Hetzner and Podman at the same time, and to use remote NFS. However, let's start from the beginning.
Why Hetzner
As an AWS Community Builder, I receive $500 in AWS credits every renewal cycle. I was looking at an EC2 instance on ARM (yes, I decided to switch CPU architecture too) with 4 cores and 8 GB of RAM, which costs more than $500 per year. Since we're talking about a server running 24/7, with an additional VPC, EBS, and so on, the estimated cost came to $111.744 per month. For EC2 alone!
Hetzner is much more affordable. A standard AmpereOne VM with backups, a public IP, and 80 GB of SSD costs about $7 per month. Then I added a 1 TB Storage Box for an additional $3. Based on some complex math, my setup is about 10x cheaper than what AWS could provide.
Note: We're talking about a constantly running server, without the need for scalability or high availability.
Using Hetzner is pretty simple. If you know how to use Terraform and AWS, the German provider is even less complex. For example, spinning up one VM with a public IP address and backups enabled is just:
terraform {
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = "~> 1.49.1"
    }
  }
  required_version = "~> 1.9.8"
}

provider "hcloud" {
  token = var.hcloud_token
}

variable "hcloud_token" {
  sensitive = true
  type      = string
}

variable "local_ip" {
  type    = string
  default = "79.184.235.150"
  #default = "31.61.169.52"
}

resource "hcloud_primary_ip" "box_ip" {
  name          = "box-ip"
  datacenter    = "fsn1-dc14"
  type          = "ipv4"
  assignee_type = "server"
  auto_delete   = true
  labels = {
    "arch" : "arm64",
    "managed_by" : "terraform",
    "env" : "prod",
    "location" : "de"
  }
}

resource "hcloud_firewall" "box_fw" {
  name = "box-firewall"
  rule {
    direction  = "in"
    protocol   = "tcp"
    port       = "22"
    source_ips = [var.local_ip]
  }
  labels = {
    "arch" : "arm64",
    "managed_by" : "terraform",
    "env" : "prod",
    "location" : "de"
  }
}

resource "hcloud_server" "box" {
  name        = "box"
  image       = "centos-stream-9"
  server_type = "cax21"
  datacenter  = "fsn1-dc14"
  ssh_keys    = ["mbp@home"]
  backups     = true
  labels = {
    "arch" : "arm64",
    "managed_by" : "terraform",
    "env" : "prod",
    "location" : "de"
  }
  public_net {
    ipv4_enabled = true
    ipv4         = hcloud_primary_ip.box_ip.id
    ipv6_enabled = false
  }
  firewall_ids = [hcloud_firewall.box_fw.id]
}

output "box_public_ip" {
  value = hcloud_server.box.ipv4_address
}

output "ssh_box" {
  value = "ssh -i ~/.ssh/id_ed25519_local root@${hcloud_server.box.ipv4_address}"
}
Then we can execute our code with a simple:
terraform apply -var="local_ip=$(curl -s ifconfig.me)"
To list accessible images, use hcloud image list.
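The same CLI can also confirm the other values used in the Terraform above (server type, datacenter) before you apply anything. A quick sketch, assuming the hcloud CLI is installed and authenticated; the context name here is arbitrary:

# Authenticate once; the API token is stored in a named context
hcloud context create selfhost

# Discover valid values for the resources above
hcloud image list --type system   # OS images, e.g. centos-stream-9
hcloud server-type list           # e.g. cax21 (ARM64, 4 vCPUs, 8 GB RAM)
hcloud datacenter list            # e.g. fsn1-dc14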
So, now that we have our CentOS box, let's install some baseline packages on it.
Ansible
The standard tool for this will be Ansible: a simple, stable, and solid product, 100% open source. For setting up my server, I decided to write a custom role. The structure of my role's tasks folder is simple but, in my opinion, requires some explanation.
Tasks whose names start with 01 and 02 are core services.
Tasks whose names start with 03 are responsible for packages.
Tasks whose names start with 1 are application services.
tasks
├── 010_shost.yml
├── 020_ssh.yml
├── 030_packages.yml
├── 035_tailscale.yml
├── 036_storagebox.yml
├── 100_containers.yml
├── 101_linkwarden.yml
├── 102_miniflux.yml
├── 103_umami.yml
├── 104_internal.yml
├── 105_immich.yml
├── 106_jellyfin.yml
└── main.yml
main.yml is the role's control point; I keep tags and some custom logic here:
# tasks file for roles/hetzner
- name: Ensure that shost exists, it's the main user, and root cannot access the server
  tags:
    - baseline
  ansible.builtin.import_tasks:
    file: 010_shost.yml
- name: Ensure that ssh config is correct
  tags:
    - baseline
  ansible.builtin.import_tasks:
    file: 020_ssh.yml
- name: Ensure that needed packages are installed
  tags:
    - packages
  ansible.builtin.import_tasks:
    file: 030_packages.yml
- name: Ensure that Tailscale is installed if needed
  tags:
    - vpn
  ansible.builtin.import_tasks:
    file: 035_tailscale.yml
- name: Ensure that StorageBox was attached
  tags:
    - storage
  when:
    - hetzner_storagebox_enabled
  ansible.builtin.import_tasks:
    file: 036_storagebox.yml
- name: Ensure that containers have the latest configuration
  tags:
    - never
    - containers
  ansible.builtin.import_tasks:
    file: 100_containers.yml
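Because everything is split by tags (and the container tasks are additionally tagged never, so Ansible only runs them when that tag is requested explicitly), I can run just the part I need. Roughly, the invocations look like this, assuming the selfhost-hetzner.yaml playbook from the Summary section applies this role:

# Core setup and packages (the connecting user may differ on the very first run,
# before root access gets locked down by 010_shost.yml)
ansible-playbook selfhost-hetzner.yaml --tags baseline,packages -u shost

# Deploy or update the container definitions; the "never"-tagged tasks run only now
ansible-playbook selfhost-hetzner.yaml --tags containers -u shost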
Then, every new application has a dedicated task file, which looks like 101_linkwarden.yml:
- name: Supply system with Linkwarden network configuration
  notify:
    - Restart Linkwarden
  ansible.builtin.template:
    src: linkwarden.network.j2
    dest: /home/shost/.config/containers/systemd/linkwarden.network
    mode: '0700'
    owner: shost
    group: shost
- name: Supply system with Linkwarden service
  notify:
    - Restart Linkwarden
  ansible.builtin.template:
    src: linkwarden.service.j2
    dest: /home/shost/.config/systemd/user/linkwarden.service
    mode: '0700'
    owner: shost
    group: shost
- name: Supply system with Linkwarden PostgreSQL config
  notify:
    - Restart Linkwarden
  ansible.builtin.template:
    src: linkwarden-postgresql.container.j2
    dest: /home/shost/.config/containers/systemd/linkwarden-postgresql.container
    mode: '0700'
    owner: shost
    group: shost
- name: Supply system with Linkwarden server
  notify:
    - Restart Linkwarden
  ansible.builtin.template:
    src: linkwarden-app.container.j2
    dest: /home/shost/.config/containers/systemd/linkwarden-app.container
    mode: '0700'
    owner: shost
    group: shost
- name: Supply system with Linkwarden tunnel
  notify:
    - Restart Linkwarden
  ansible.builtin.template:
    src: linkwarden-tunnel.container.j2
    dest: /home/shost/.config/containers/systemd/linkwarden-tunnel.container
    mode: '0700'
    owner: shost
    group: shost
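The Restart Linkwarden handler itself isn't shown here; conceptually it just reloads the user-level systemd daemon (so the quadlet generator picks up the changed files) and restarts the umbrella service. In shell terms it boils down to something like this (a sketch, not the actual handler):

# Regenerate units from the changed .container/.network files
systemctl --user daemon-reload
# Restart the umbrella unit; PartOf= propagates the restart to the network and containers
systemctl --user restart linkwarden.service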
You may be wondering why I'm not using docker-compose and instead have a lot of systemd services. Let me explain.
Podman
By default, Podman supports rootless containers. What does that mean? Basically, you don't need to be root or a member of the docker group to run a container. All the magic happens inside a user namespace, and the container only has access to the user's data. As an extra security layer, files get an SELinux context. However, as good as the community.docker.docker_compose_v2 module is, there is no Podman equivalent. The folks behind the project say you should use quadlets, not Compose files. WTF are quadlets? Generally speaking, they are systemd services that live in the user namespace and orchestrate Podman containers. Unfortunately, one by one. Wait, what? Yes, you need to write a separate unit per container, per network, and for the service dependencies. Sounds like fun, doesn't it?
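A few things have to be true for rootless quadlets to work: the files live under ~/.config/containers/systemd/, systemd turns them into regular units on a user daemon-reload, and lingering must be enabled so the user's services keep running without an active login session. A minimal sketch of how this could be checked (the generator path and flags can vary between Podman versions and distributions):

# Keep shost's user services alive without an active login session
sudo loginctl enable-linger shost

# As shost: regenerate and list the units produced from the quadlet files
systemctl --user daemon-reload
systemctl --user list-unit-files 'linkwarden*'

# Optional: ask the quadlet generator what it would produce
/usr/lib/systemd/system-generators/podman-system-generator --user --dryrun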
There is a project on the internet that allows you to convert your docker-compose directly into quadlets. However, I will show you my systemd services, which could be helpful.
- linkwarden.service.j2 is a dummy service that allows me to control the whole application with one service.
[Unit]
Description=Linkwarden
[Service]
Type=oneshot
ExecStart=/bin/true
RemainAfterExit=yes
[Install]
WantedBy=basic.target
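Because every other unit declares PartOf=linkwarden.service and WantedBy=linkwarden.service, this empty unit becomes the single handle for the whole stack. For example (as the shost user):

# Start, stop or restart the entire application through the umbrella unit
systemctl --user restart linkwarden.service

# See which units hang off it
systemctl --user list-dependencies linkwarden.service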
- linkwarden.network.j2 is a simple definition of a separate network, used only by this particular app.
[Unit]
Description=Linkwarden - Network
PartOf=linkwarden.service
[Network]
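Worth knowing: the quadlet generator turns linkwarden.network into a unit named linkwarden-network.service (which is why the containers list it in After=), and the Podman network it creates gets the systemd- prefix by default, hence Network=systemd-linkwarden in the container files. Both are easy to verify once the unit is up:

systemctl --user status linkwarden-network.service
podman network ls   # should show systemd-linkwarden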
- linkwarden-app.container.j2 is the main service for my app. As I'm using Ansible and Jinja2, avoiding hardcoded credentials is easy (use sops).
[Unit]
Description=Linkwarden - Server
PartOf=linkwarden.service
After=linkwarden.service
After=linkwarden-network.service
After=linkwarden-postgresql.service
[Container]
Image=ghcr.io/linkwarden/linkwarden:v{{ hetzner_linkwarden_app_version }}
ContainerName=linkwarden-app
Network=systemd-linkwarden
Volume=linkwarden-data:/data/data
LogDriver=journald
Environment="DATABASE_URL=postgresql://postgres:{{ linkwarden_postgresql_password }}@linkwarden-postgresql:5432/postgres"
Environment=NEXTAUTH_SECRET={{ linkwarden_next_auth_secret }}
Environment=NEXTAUTH_URL=http://localhost:3000/api/v1/auth
Environment=NEXT_PUBLIC_DISABLE_REGISTRATION=true
[Service]
Restart=always
[Install]
WantedBy=linkwarden.service
As you may have noticed, specifying the volume is very straightforward. The After=/PartOf= directives are required, and one tricky detail is that the full image path is needed, as Podman can have trouble resolving short names like linkwarden/linkwarden.
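The short-name problem is easy to reproduce outside of quadlets: a bare linkwarden/linkwarden makes Podman resolve the name against its configured unqualified-search registries, which can prompt interactively or fail, while a fully qualified reference is unambiguous. For example (the tag here is just illustrative):

# Ambiguous: resolution depends on registries.conf and may prompt or fail
podman pull linkwarden/linkwarden

# Unambiguous: registry, namespace, image and tag spelled out
podman pull ghcr.io/linkwarden/linkwarden:v2.9.3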
- linkwarden-postgresql.container.j2 is the database service.
[Unit]
Description=Linkwarden - Postgresql
PartOf=linkwarden.service
After=linkwarden.service
After=linkwarden-network.service
[Container]
Image=docker.io/library/postgres:16-alpine
ContainerName=linkwarden-postgresql
Network=systemd-linkwarden
Volume=linkwarden-postgresql:/var/lib/postgresql/data
LogDriver=journald
Environment="POSTGRES_PASSWORD={{ linkwarden_postgresql_password }}"
[Service]
Restart=always
[Install]
WantedBy=linkwarden.service
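The named volumes referenced in these units (linkwarden-data, linkwarden-postgresql) are created automatically by Podman on first start and live inside the user's container storage. If you ever need to find where the database data actually sits on disk:

podman volume ls
podman volume inspect linkwarden-postgresql   # mountpoint under ~/.local/share/containers/storage/volumes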
- linkwarden-tunnel.container.j2 runs the Cloudflare Tunnel.
[Unit]
Description=Linkwarden - Tunnel
PartOf=linkwarden.service
After=linkwarden.service
After=linkwarden-postgresql.service
After=linkwarden-network.service
Requires=linkwarden-app.service
[Container]
ContainerName=linkwarden-tunnel
Exec=tunnel --no-autoupdate run
Image=docker.io/cloudflare/cloudflared:{{ hetzner_tunnel_version }}
Network=systemd-linkwarden
Volume=linkwarden-tunnel:/etc/cloudflared
LogDriver=journald
Environment="TUNNEL_TOKEN={{ linkwarden_tunnel_token }}"
[Service]
Restart=always
[Install]
WantedBy=linkwarden.service
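Since all containers use LogDriver=journald, debugging the tunnel (or any other piece) doesn't require hunting for log files; the user journal has everything, and podman logs still works as well. For example:

# Follow the tunnel's output through the systemd user journal
journalctl --user -u linkwarden-tunnel.service -f

# Or straight from Podman
podman logs -f linkwarden-tunnel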
I like the idea of tunnels, even if all of them are commercial software. They allow me to expose my services to the internet without needing to set up NGINX or Caddy and, more importantly, without having to harden them.
Summary
So far so good. The solution seems complex, but after the initial setup, it is very secure and stable. The regular path of upgrading my services is changing the version of the images I'm using in the group_vars file.
$ diff --git a/apps.yaml b/apps.yaml
index 63b65b2..e08c7a3 100644
--- a/apps.yaml
+++ b/apps.yaml
@@ -1 +1 @@
-hetzner_linkwarden_app_version: 2.8.3
+hetzner_linkwarden_app_version: 2.9.3
Then just running:
$ ansible-playbook \
selfhost-hetzner.yaml \
--tags containers -u shost
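If I want to preview a version bump before applying it, Ansible's check and diff modes show which templates would change without restarting anything; a sketch:

$ ansible-playbook \
  selfhost-hetzner.yaml \
  --tags containers -u shost \
  --check --diff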
That's it. What do I think after 4 months of using Hetzner in production? I'm very happy with them. The price is unbeatable, it's stable, and the user experience is excellent. For a project like this one, I can't recommend it enough.
Ah, a few words about the AmpereOne CPUs. For the services I'm using, there is no problem with using ARM64 binaries.
- Miniflux
- Jellyfin
- Immich
- Linkwarden
- n8n
- Caddy
- Actual Budget
- Uptime Kuma
- Ghost
All of them run very well on an ARM64 CPU (4 cores and 8 GB of RAM, to be precise). This is probably thanks to the popularity of the Raspberry Pi in the self-hosting landscape. So yes, if you have a chance to use ARM CPUs, give them a spin. It will be cheaper, probably more efficient, and you can give yourself an 'innovative soul' award.
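One practical habit on ARM64: before adding a new service, check that its image actually publishes a linux/arm64 variant. A quick sketch using the PostgreSQL image from earlier:

# Inspect the multi-arch manifest and look for an arm64 entry
podman manifest inspect docker.io/library/postgres:16-alpine | grep arm64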