
Everything I Self-Host at Home

tl;dr

I self-host Nextcloud, Immich, Jellyfin, Home Assistant, Invidious, and a few supporting tools. Here is what runs where, what it replaced, what it costs, and what it changed.

Overview

I self-host to keep core services predictable, keep sensitive data local, and reduce dependence on changing third-party policies. The setup is intentionally small and designed to run with minimal day-to-day attention. I am not trying to build a miniature data center; I am trying to make a handful of everyday services reliable and owned.

Self-hosting also gives me the ability to work directly with my own datasets. Having photos, media, and documents on local storage makes it easier to build small tools and workflows without exporting data to yet another service. The benefits are practical rather than ideological.

The tradeoff is that I take responsibility for uptime and backups. I mitigate that with automation, snapshots, and a conservative scope, but it is still a responsibility shift rather than a free upgrade.

Hardware and layout

There are three machines: illustrious (public entrypoint), portland (storage), and formidable (daily workstation and compute). The servers run Arch Linux for control and simplicity, while the workstation runs CachyOS for performance.

[chart: storage by machine, showing total storage per machine in GiB]

Only illustrious is exposed to the internet. It runs Caddy as a reverse proxy and routes traffic to containers on the local network. Everything is deployed with Docker Compose. Remote access is VPN-only and key-based.
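
As a sketch of that routing layer, the Caddyfile on illustrious looks roughly like this. Domains, LAN addresses, and the Nextcloud port are placeholders rather than my real config; 2283 is Immich's default server port.

    # minimal Caddyfile sketch; hostnames and upstreams are made up
    cat > /etc/caddy/Caddyfile <<'EOF'
    photos.example.com {
        reverse_proxy 10.0.0.2:2283   # Immich container on the LAN
    }
    files.example.com {
        reverse_proxy 10.0.0.2:8080   # Nextcloud
    }
    EOF
    systemctl reload caddy

Caddy provisions and renews TLS certificates for named hosts on its own, which is a large part of why a single small entrypoint stays low-maintenance.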

For storage I rely on ZFS snapshots on portland so I can recover quickly from mistakes or upgrades. It is not a full enterprise backup strategy, but it does give me a safer baseline than a single disk without versioning.
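
In practice that means a recursive snapshot before anything risky and a rollback when it goes wrong. A sketch with made-up pool and dataset names:

    # snapshot everything under the data pool before an upgrade
    zfs snapshot -r tank/data@pre-upgrade-$(date +%F)

    # see what snapshots exist
    zfs list -t snapshot -r tank/data

    # undo a bad upgrade on one dataset
    # (rollback targets the most recent snapshot; -r reaches older ones)
    zfs rollback tank/data/nextcloud@pre-upgrade-2025-06-01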

Services

  • Media: Jellyfin for my personal media library. I avoid transcoding and prefer direct play.
  • YouTube: Invidious as an alternative front-end, with the caveat that upstream changes can break it.
  • Files: Nextcloud as a shared family drive.
  • Photos: Immich for a 100k+ photo library, with ML indexing delegated to formidable, because that is literally what a 4090 is for when it is not being dramatic about drivers (sketch of the hand-off after this list).
  • Home automation: Home Assistant for smart plugs and power monitoring.
  • Monitoring: Gatus for service and disk health checks.
  • Notifications: ntfy for job completion and failure alerts (one-liner after this list).
  • Recipes: Tandoor Recipes plus the kitshn iOS app.

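Delegating Immich's ML work is mostly a matter of pointing the server at a remote machine-learning container. A sketch, assuming formidable runs the stock immich-machine-learning image on its default port 3003 (GPU flags omitted for brevity):

    # on formidable: run only the ML container
    docker run -d --name immich-ml -p 3003:3003 \
        ghcr.io/immich-app/immich-machine-learning:release

    # in the Immich server's environment on the storage host:
    # IMMICH_MACHINE_LEARNING_URL=http://formidable:3003
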
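The ntfy pattern is even smaller: a curl appended to whatever job should report in. The topic and URL here are placeholders.

    # notify on success or failure of a nightly job
    ./backup.sh \
        && curl -d "backup ok" https://ntfy.example.com/homelab \
        || curl -H "Priority: urgent" -d "backup FAILED" https://ntfy.example.com/homelab
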
These services cover the daily needs that matter most: media I already own, shared family files, photos that should not depend on a third-party quota, and lightweight monitoring that makes failures obvious. I keep the list short so the system stays understandable and repairable.

I also run bitmagnet behind Gluetun with Proton VPN on illustrious, and occasionally a Minecraft server.
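
The Gluetun arrangement is the one piece worth sketching, because it is what keeps bitmagnet's traffic inside the VPN: the container shares Gluetun's network namespace, so if the tunnel is down there is simply no route out. Credentials and versions are placeholders, and a real deployment also needs bitmagnet's Postgres database and ports published on the gluetun container:

    # compose sketch of the VPN-gated service
    cat > docker-compose.yml <<'EOF'
    services:
      gluetun:
        image: qmcgaw/gluetun
        cap_add: [NET_ADMIN]
        devices: ["/dev/net/tun"]
        environment:
          VPN_SERVICE_PROVIDER: protonvpn
          OPENVPN_USER: "user"
          OPENVPN_PASSWORD: "secret"
      bitmagnet:
        image: ghcr.io/bitmagnet-io/bitmagnet:latest
        network_mode: "service:gluetun"   # all traffic rides the VPN
    EOF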

Maintenance and costs

Host updates run nightly via a systemd timer. I stick to the LTS kernel, use kernel livepatching where possible, and reboot about once a week. Gatus and ntfy alert me quickly if something drifts, and pacman keeps the previous package and kernel versions around for recovery. I do not have a UPS, so a power loss still means manual unlocks; that friction is the price of keeping the disks encrypted, and I pay it deliberately.
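
The timer itself is ordinary systemd; a sketch with assumed unit names:

    # nightly-update.service / .timer are made-up names
    cat > /etc/systemd/system/nightly-update.service <<'EOF'
    [Unit]
    Description=Nightly package update

    [Service]
    Type=oneshot
    ExecStart=/usr/bin/pacman -Syu --noconfirm
    EOF

    cat > /etc/systemd/system/nightly-update.timer <<'EOF'
    [Unit]
    Description=Run the nightly package update

    [Timer]
    OnCalendar=daily
    RandomizedDelaySec=1h
    Persistent=true

    [Install]
    WantedBy=timers.target
    EOF

    systemctl enable --now nightly-update.timer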

The main manual step after a power loss is unlocking the encrypted disks. portland uses native ZFS encryption (OpenZFS), and the other machines have encrypted disks as well. After an outage I unlock formidable, then illustrious, then portland, which keeps the security model intact even if it adds friction.
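
The unlock itself is short; roughly this, with made-up dataset and device names. On the machines with encrypted roots it is really just the passphrase prompt at boot, shown here as the manual equivalent and assuming LUKS:

    # portland: load keys for the encrypted datasets, then mount
    zfs load-key -a
    zfs mount -a

    # a LUKS machine: manual equivalent of the boot-time unlock
    cryptsetup open /dev/nvme0n1p2 cryptroot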

The setup saves roughly 50 EUR/month on subscriptions, costs about 9 EUR/month in electricity, and required about 200 EUR upfront for storage. The bigger benefit is fewer dependencies on external decisions, better privacy through local data control, and reliable access to my own data. The main downside is a single public entrypoint on illustrious, which is a tradeoff I accept in exchange for simplicity.
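Net, that is roughly 50 - 9 = 41 EUR/month, so the upfront 200 EUR of storage paid for itself in about five months.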

Another practical benefit is how easy it is to build on top of local data. When photos, media, and documents live on the same network, it is straightforward to run indexing jobs, backup scripts, or small automation tasks without additional integrations. That keeps experimentation easy and reduces friction when I want to try something new.

Closing

Self-hosting works for me because it keeps the stack stable, keeps data local, and supports the projects I care about without ongoing vendor drift. It is not for everyone, but it fits my priorities and the level of ownership I want over my services. For me, that tradeoff is worth the steady, local baseline.
