Everything I Self-Host at Home
tl;dr summary
I self-host Nextcloud, Immich, Jellyfin, Home Assistant, Invidious, and a few supporting tools. Here is what runs where, what it replaced, what it costs, and what it changed.
I did not start self-hosting out of frustration with the cloud. I started because I wanted to understand what I rely on and to keep my photos, files, and media under my own control.
Over time, it became a small home ecosystem that stays stable, reduces some recurring costs, and supports the projects I work on.
The hardware (three machines)
illustrious
- OS: Arch Linux x86_64 (kernel 6.17.11-hardened)
- CPU: 11th Gen Intel(R) Core(TM) i5-1135G7 (8) @ 4.20 GHz
- Total RAM: 31.00 GiB
- Disk: 914.83 GiB (ext4)
- Disk: 915.82 GiB (ext4)
- Disk: 3.64 TiB (xfs)
formidable
- OS: CachyOS x86_64 (kernel 6.12.63-cachyos-lts)
- CPU: AMD Ryzen 9 7950X (32) @ 5.88 GHz
- GPU: NVIDIA RTX 4090
- Total RAM: 61.91 GiB
- Disk: 929.50 GiB (btrfs)
- Disk: 1.82 TiB (xfs)
- Disk: 7.28 TiB (xfs)
- Disk: 119.24 GiB (btrfs)
portland
- OS: Arch Linux x86_64 (kernel 6.12.59-lts)
- CPU: AMD Ryzen 7 2700X (16) @ 3.70 GHz
- Total RAM: 31.26 GiB
- Disk: 697.49 GiB (btrfs)
- Disk: 3.64 TiB (fuseblk)
- Disk: 3.51 TiB (zfs)
Roles are simple: illustrious is the public entrypoint, portland handles storage, and formidable provides workstation and compute capacity.
The operating systems follow the role of each machine. The always-on systems run Arch Linux for its straightforward tooling and the control it gives me, and I accept the maintenance that comes with that choice. formidable runs CachyOS as a daily workstation because its performance-focused defaults keep the system responsive without constant tuning.
Network layout (single public entrypoint)
Everything is deployed with Docker Compose. It keeps the setup consistent, repeatable, and easier to rebuild when needed.
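As a rough illustration of what that looks like, a typical service here is nothing more than a short compose file. The image, port, and paths below are placeholders for this post, not a copy of my real configuration.

```yaml
# Hypothetical example of one service in this layout; names, ports, and
# paths are placeholders, not my real configuration.
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    ports:
      - "8096:8096"           # Jellyfin's default HTTP port
    volumes:
      - ./config:/config      # service state lives next to the compose file
      - /mnt/media:/media:ro  # media is mounted read-only
```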
Only one machine is exposed to the internet: illustrious. My domain points at it, and it is the only place where ports 80/443 are forwarded on my router. The reverse proxy is Caddy, which terminates HTTPS and routes requests to the right containers, including services that actually live on portland.
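A minimal sketch of that routing, with hostnames and addresses invented for this post:

```
# Sketch of the Caddyfile idea; hostnames and addresses are placeholders.
photos.example.com {
    reverse_proxy localhost:2283
}

files.example.com {
    # This one actually runs on portland; Caddy just proxies across the LAN.
    reverse_proxy 192.168.1.20:8080
}
```

Caddy obtains and renews the certificates on its own, which is most of what I want from the edge of the network.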
This creates a single point of failure, and I accept that tradeoff. The goal is reliability and simplicity, not full-scale high availability.
For remote access, I do not expose SSH. It is VPN-only, SSH keys only, no password prompt. A few services are IP-locked as well. Authentication is handled per-service. It is not fancy, but it is consistent with the rest of the philosophy: fewer moving parts, fewer surprises.
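Whether the restriction lives in the firewall or in sshd itself matters less than the outcome; on the sshd side, the relevant lines look roughly like this (the VPN address is a placeholder, and this is a sketch of the idea rather than my exact config):

```
# Sketch of the relevant sshd_config lines; the VPN address is a placeholder.
# Listen only on the VPN interface, keys only, no password or root login.
ListenAddress 10.8.0.1
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
```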
What I actually host
The easiest way to explain my stack is to explain the little frictions it removed from my days.
Media hosting
For my own library, I use Jellyfin as the media server. The collection is made up of DVD rips, Blu-ray rips, CDs, and music I purchased. The library is a few terabytes, and the focus is reliable access to my own media rather than a broad streaming catalog.
I do not transcode. I mostly direct-play on mobile, Apple TV, and web. That decision alone makes the system feel simpler and sturdier. If a file plays, it plays. If it does not, I fix the file, not the server.
For YouTube, I use Invidious, an open-source alternative front-end. It is the most fragile part of the stack, not due to server instability but because YouTube changes frequently and upstream breakage is unavoidable. When it breaks, I update and move on.
Photos and files for family
For family storage and sync, the backbone is Nextcloud. In practice, we use it mostly as a shared drive. It is multi-user, reliable, and predictable, which is exactly what I need from it.
For photos and videos, I run Immich. The library is large (100k+ photos and videos), there are several phones backing up, and I use the ML features. I do not force the always-on servers to do the heavy lifting: ML indexing is delegated to formidable, because that is literally what a 4090 is for when it is not being dramatic about drivers. (kinda outdated joke nowadays)
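Immich supports running the machine-learning container on a different host than the server, which is how the split works here. The sketch below shows the idea; the hostname is a placeholder, not my actual compose file.

```yaml
# On the always-on server side: point Immich at a remote ML container.
# The hostname is a placeholder for formidable's LAN address.
services:
  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    environment:
      - IMMICH_MACHINE_LEARNING_URL=http://formidable.lan:3003
# formidable runs only ghcr.io/immich-app/immich-machine-learning:release
# (with GPU access) and exposes its default port 3003 on the LAN.
```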
Home automation
I run Home Assistant mainly for pragmatic use: smart plugs (Tapo) to measure electricity consumption, and the ability to turn devices on or off remotely. It provides clear visibility into power usage and prevents unnecessary idle draw.
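As an example of the kind of rule this enables (the entity names are invented for illustration, not taken from my setup), a plug that idles below a small power threshold for long enough can simply be switched off:

```yaml
# Hypothetical Home Assistant automation: cut power to a plug that has been idling.
# Entity names are placeholders.
automation:
  - alias: "Turn off idle desk plug"
    trigger:
      - platform: numeric_state
        entity_id: sensor.desk_plug_power
        below: 5              # watts
        for: "00:30:00"
    action:
      - service: switch.turn_off
        target:
          entity_id: switch.desk_plug
```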
Monitoring and notifications
The most valuable tools are the ones that prevent slow, silent failure.
I use Gatus to check service health and monitor mounted disks on illustrious and portland, including ZFS status. It gives me clear feedback when something drifts from expected behavior.
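A Gatus check is an endpoint plus a list of conditions, with alert routing attached. The sketch below shows only the HTTP-check pattern, with placeholder URLs and topic; the disk and ZFS checks are wired up separately and are not shown here.

```yaml
# Sketch of a Gatus endpoint with ntfy alerting; URLs and topic are placeholders.
alerting:
  ntfy:
    url: "https://ntfy.example.com"
    topic: "homelab"

endpoints:
  - name: photos
    url: "https://photos.example.com/"
    interval: 1m
    conditions:
      - "[STATUS] == 200"
    alerts:
      - type: ntfy
        failure-threshold: 3   # a few failed checks before the alert fires
        send-on-resolved: true
```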
For notifications, I run ntfy. It handles long-running jobs and scripts where I only need completion or failure alerts. I do not need dashboards for those tasks, just a direct notification.
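The usage pattern is deliberately boring: the job runs, and a one-line curl reports the outcome. The script name, topic, and server URL below are placeholders.

```sh
# End-of-job notification; script name, topic, and server URL are placeholders.
if ./nightly-backup.sh; then
  curl -d "backup finished" https://ntfy.example.com/jobs
else
  curl -H "Priority: high" -d "backup FAILED" https://ntfy.example.com/jobs
fi
```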
Recipes
For recipes, I self-host Tandoor Recipes. It keeps the experience focused: no ads, no bloat, and fast access to the recipe data. On iOS, I use kitshn (also open source), which integrates cleanly with the same library.
Experimentation
On illustrious, I also run bitmagnet behind Gluetun with Proton VPN. It has minimal impact on the rest of the stack and stays isolated behind the VPN due to the nature of the ecosystem it touches.
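The isolation pattern is the standard one for this kind of container: bitmagnet gets no network of its own and rides entirely on the Gluetun container's tunnel. The sketch below uses placeholder values, leaves out the rest of the bitmagnet stack, and the real credentials obviously live elsewhere.

```yaml
# Sketch of the VPN isolation pattern; credentials and the rest of the
# bitmagnet stack (database, etc.) are omitted.
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=protonvpn
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=redacted   # secret, kept out of the compose file in practice

  bitmagnet:
    image: ghcr.io/bitmagnet-io/bitmagnet:latest
    network_mode: "service:gluetun"      # all traffic goes through the VPN container
    depends_on:
      - gluetun
```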
Occasionally, illustrious also runs a Minecraft server, which can temporarily increase RAM usage.
Updates and maintenance
My goal is to keep the always-on machines secure without turning maintenance into a hobby. On illustrious and portland, host updates are automated via a systemd timer. I track security-relevant updates aggressively (microcode, firmware, kernel, bootloader), but I keep the operational side conservative: I run the LTS kernel and treat reboots as a controlled, scheduled event rather than something that happens opportunistically.
Updates are applied nightly. I use kernel-livepatch so the system can absorb many kernel security fixes without requiring an immediate reboot, and I still do a regular reboot cadence (about once per week) to keep things clean and predictable.
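The mechanism is a plain service/timer pair. The sketch below shows the shape of it, not my actual units, and the real job does more than a single pacman call.

```ini
# /etc/systemd/system/nightly-update.service  (sketch, not my exact unit)
[Unit]
Description=Nightly package update

[Service]
Type=oneshot
ExecStart=/usr/bin/pacman -Syu --noconfirm

# /etc/systemd/system/nightly-update.timer
[Unit]
Description=Run the nightly package update

[Timer]
OnCalendar=*-*-* 04:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enabled once with `systemctl enable --now nightly-update.timer`, it survives reboots and missed windows.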
I do not run pre-flight checks or automatic snapshots before upgrades. Instead, I rely on fast detection and straightforward intervention. pacman keeps enough local state for quick recovery (I keep two package versions available for downgrade), and I keep a known-good fallback at boot (the previous kernel is always available).
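Keeping two versions around is a one-liner with paccache (from pacman-contrib), and a downgrade is just installing the previous file straight from the cache; the package filename below is a placeholder.

```sh
# Keep the two most recent versions of every package in the local cache
# (paccache ships with pacman-contrib).
paccache -rk2

# Rolling back is installing the previous file straight from the cache;
# the filename here is a placeholder.
pacman -U /var/cache/pacman/pkg/some-package-1.2.3-1-x86_64.pkg.tar.zst
```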
The safety net is monitoring. Gatus checks ZFS pool status, mountpoints, disk space, and core service health, and alerts route to ntfy with a short delay (about 3 minutes). If something breaks, I find out quickly and fix it manually. I do not have a fully automated rollback pipeline; the tradeoff I chose is a simple loop of update → verify → intervene, with an informal target of being back to normal within about an hour.
I do not have a UPS, so a power loss can still require hands-on work (especially because encryption is intentional friction in my setup), but the monitoring makes the failure mode obvious and fast to address.
Encryption and operational impact
The storage on portland is ZFS and encrypted. I use OpenZFS because I need snapshots and strong safeguards for family data. Disks on illustrious are encrypted as well, and formidable is fully encrypted.
That means after a power cut, there is a manual unlock step. I have a chain of trust at home: my iPhone unlocks formidable, formidable holds the keys for illustrious, and illustrious holds the keys for portland. I cannot fully automate this without breaking the security model, so I chose a compromise: an iOS Shortcut that makes the process fast, but still deliberate.
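The unlock itself is short. Conceptually it looks like the following, with every hostname, device, and dataset name a placeholder and the actual key handling omitted, since that is the part the chain of trust protects.

```sh
# Conceptual shape of the unlock chain; hostnames, device, and dataset names
# are placeholders, and the key handling itself is deliberately omitted.

# From formidable (already unlocked): open and mount illustrious' data disk.
ssh illustrious 'cryptsetup open /dev/disk/by-uuid/<uuid> data && mount /srv/data'

# illustrious then loads the keys for portland's ZFS datasets and mounts them.
ssh illustrious "ssh portland 'zfs load-key -a && zfs mount -a'"
```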
It is an inconvenience, but it materially improves my risk profile, so I keep it.
Costs and savings
Before this setup became stable, we were paying for a family cloud bundle, premium music, and a higher streaming tier. Self-hosting does not replace all entertainment spending, but it reduced the number of subscriptions that felt required.
My rough estimate is about 50 EUR/month saved on third-party services. Electricity for the two always-on machines is about 9 EUR/month. Storage for portland was bought specifically for this and cost a bit over 200 EUR upfront.
If you amortize 200 EUR over 36 months, that is about 5.6 EUR/month. So my mental math looks like this:
- Avoided subscriptions: ~50 EUR/month
- Electricity: ~9 EUR/month
- Storage amortization: ~5.6 EUR/month
- Net: ~35 EUR/month saved (very roughly)
The hidden cost is the temptation to expand storage beyond the original plan.
Cost table (service -> paid alternative -> avoided cost -> self-host cost)
These are rough, and they reflect my situation, not a universal truth. The main point is that storage and photo workflows are where self-hosting makes the biggest difference.
| Self-hosted service | Typical paid alternative | Cost avoided (rough, my case) | Self-host cost (rough) |
|---|---|---|---|
| Nextcloud | OneDrive Family, Google Drive, Dropbox | ~10 to 15 EUR/month | Shared infra (storage-heavy) |
| Immich | Google Photos, iCloud Photos | ~5 to 15 EUR/month | Shared infra (storage + ML indexing on formidable) |
| Jellyfin | Netflix / Plex-style streaming | ~15 to 25 EUR/month (depends what you cancel) | Shared infra (mainly disks) |
| Invidious | YouTube Premium (or “pay with ads”) | 0 EUR/month if you did not pay Premium | Tiny compute cost |
| Home Assistant | Vendor hubs + cloud subscriptions | Usually small, but adds up | Tiny compute cost |
| Tandoor Recipes | Recipe app subscriptions | 0 to ~5 EUR/month | Tiny compute cost |
| Gatus | UptimeRobot-style monitoring | 0 to ~10 EUR/month | Tiny compute cost |
| ntfy | Push/alert services | 0 to ~5 EUR/month | Tiny compute cost |
Non-financial benefits
The primary benefit is not the savings; it is the additional control and flexibility.
Having my own media and photo datasets made it possible to build recommendation workflows for music, videos, and images on top of my Jellyfin library. This is based on a real collection that reflects actual usage. Self-hosting keeps the data local and accessible for that work.
It also forced me to learn infrastructure in a practical way. ZFS required real time and study before it felt reliable, but once it clicked it changed how I think about storage and durability.
It also connected me with a community that approaches storage and preservation with similar priorities.
Pros and cons
The biggest advantage is fewer dependencies on external decisions: fewer surprise price changes, fewer feature retirements, fewer forced migrations, and better privacy through local data control.
The second advantage is speed and operational clarity. Local access is fast, and workflows stay consistent because I control the defaults.
The main downside is that illustrious is a single point of failure because it is the only public entrypoint. That is a deliberate choice. Invidious is also fragile because upstream YouTube changes are frequent.
Encryption adds friction during outages, but the tradeoff is acceptable for the security benefits.
If you want to try
If you want to test whether self-hosting is for you, do not start with ten services. Start with one that removes a real pain: files (Nextcloud), photos (Immich), or media (Jellyfin). Then add a reliability tool like Gatus so you can observe the system as it grows.
You do not need a rack. You do not need Kubernetes. You mostly need a small box that stays on, and the patience to fix one thing at a time.
Closing
Self-hosting, for me, is not a rebellion against the cloud. It is a practical way to keep core services predictable, keep sensitive data local, and support the projects I care about without depending on changing third-party policies.
It is not the right choice for everyone, but it aligns with how I work and with the level of control I want over my infrastructure.