My Gaming Rig Doubles as a Homelab and Serves Internet Traffic (Part 2: Going Public Without the MITM)
In Part 1, I set up a homelab on my gaming rig: Podman Quadlets, Headscale, Immich, Ollama. Everything accessible over a private Tailnet.
But here's the thing: explaining Tailscale to my mom and sibling just so they can open Immich isn't going to happen. They just want to see the family photos.
So some services need to be public. The question is how.
Why Not Cloudflare Tunnel
The obvious answer is Cloudflare Tunnel. Point your domain at Cloudflare, run their daemon, and traffic flows through their network to your home server. Free. Easy. Done.
But Cloudflare terminates your TLS. They decrypt your traffic at their edge, inspect it, and re-encrypt it to your origin. That's the deal. For a CDN, it makes sense. For my family photos? I'd rather not.
I'm not saying Cloudflare is evil. I'm saying if I'm already self-hosting to avoid third parties, routing everything through Cloudflare feels like it defeats the point. Plus, it's a good excuse to learn how this stuff actually works.
The Architecture
Here's the setup:
- DNS points *.apps.example.com to a VPS
- VPS runs Caddy, which terminates TLS and gets wildcard certs via DNS challenge
- VPS also runs HAProxy, which forwards traffic to my home server over Tailnet
- Home Caddy receives the request and routes to the right container based on the Host header
For the VPS, bandwidth matters more than specs. You're just proxying traffic, not running workloads. A cheap VPS with generous bandwidth allowance beats a beefy one with metered transfer.
User (HTTPS)
               │
               ▼
┌─────────────────────────────┐
│             VPS             │
│  ┌───────────────────────┐  │
│  │   Caddy (TLS + WAF)   │  │
│  └───────────┬───────────┘  │
│              ▼              │
│  ┌───────────────────────┐  │
│  │  HAProxy (TCP proxy)  │  │
│  └───────────┬───────────┘  │
└──────────────┼──────────────┘
               │ Tailnet (WireGuard)
               ▼
┌─────────────────────────────┐
│         Gaming Rig          │
│  ┌───────────────────────┐  │
│  │ Caddy (routes by Host)│  │
│  └───────────┬───────────┘  │
│              │              │
│        ┌─────┴─────┐        │
│        ▼           ▼        │
│  ┌──────────┐ ┌──────────┐  │
│  │  Immich  │ │Open-WebUI│  │
│  └──────────┘ └──────────┘  │
└─────────────────────────────┘
I terminate TLS on my VPS. The traffic between VPS and home travels encrypted over Tailnet. No third party sees the plaintext.
Connecting the VPS to the Tailnet
Before anything else, the VPS needs to join the Tailnet.
1. Install Tailscale on the VPS
curl -fsSL https://tailscale.com/install.sh | sh
2. Create a pre-auth key
On the Headscale server:
headscale preauthkeys create --user 1 --reusable --expiration 2409000h
The long expiration means the VPS never has to reauthenticate.
3. Connect to Headscale
sudo tailscale up --login-server https://headscale.example.com --authkey=<key>
Back on the Headscale server, verify with:
headscale node list
4. Test connectivity
From the VPS:
curl -sI http://gamingrig.nooblab.internal
Should return HTTP 200 from home Caddy. If this works, the Tailnet link is up.
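If the curl hangs, it helps to know whether the problem is the tunnel or the web server. A quick way to isolate it (assuming MagicDNS resolves the hostname on the VPS; otherwise substitute the node's Tailnet IP):
# Check that the VPS sees the home node and that the WireGuard path works
tailscale status
tailscale ping gamingrig.nooblab.internal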
Caddy on the VPS
First, point DNS to the VPS. Create a wildcard A record *.apps.example.com pointing to the VPS public IP.
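Worth a quick sanity check before asking Let's Encrypt for anything: every label under the wildcard should resolve to the VPS. A sketch, assuming dig is installed; the names are just examples:
# Both should print the same VPS public IP
dig +short A immich.apps.example.com
dig +short A anything-at-all.apps.example.com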
Two things the VPS Caddy needs that the stock image doesn't have:
- DNS challenge for wildcard certs. Let's Encrypt won't issue wildcards via HTTP challenge. You need a DNS plugin for your provider to automate cert provisioning.
- CrowdSec bouncer. Exposing anything to the public internet means dealing with bots, scanners, and script kiddies. CrowdSec blocks known bad actors.
The Dockerfile for the custom Caddy build:
FROM caddy:2.11-builder AS builder
RUN xcaddy build --with github.com/caddy-dns/hetzner/v2 \
    --with github.com/mholt/caddy-l4 \
    --with github.com/caddyserver/transform-encoder \
    --with github.com/hslatman/caddy-crowdsec-bouncer/http@main \
    --with github.com/hslatman/caddy-crowdsec-bouncer/appsec@main \
    --with github.com/hslatman/caddy-crowdsec-bouncer/layer4@main

FROM caddy:2.11
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
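A quick way to confirm the plugins actually made it into the binary. This is just a sketch: caddy-custom is an arbitrary local tag, and the grep only looks for the module names:
# Build the image and list the modules compiled into the binary
docker build -t caddy-custom ./caddy
docker run --rm caddy-custom caddy list-modules | grep -E "crowdsec|hetzner|layer4"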
The compose service:
services:
  caddy:
    build: ./caddy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy/data:/data
      - ./caddy/config:/config
      - ./caddy/logs:/var/log/caddy
    environment:
      - HETZNER_API_TOKEN=${HETZNER_API_TOKEN}
      - CROWDSEC_API_KEY=${CROWDSEC_API_KEY}
    restart: unless-stopped
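The two environment variables come from an .env file sitting next to the compose file, which Docker Compose reads automatically for variable substitution. A placeholder sketch; the CrowdSec key gets generated in the next section:
# .env (don't commit real values)
HETZNER_API_TOKEN=<your-hetzner-dns-api-token>
CROWDSEC_API_KEY=<generated-by-cscli-later>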
The Caddyfile:
{
    crowdsec {
        api_url http://crowdsec:8080
        api_key {env.CROWDSEC_API_KEY}
        ticker_interval 15s
        appsec_url http://crowdsec:7422
    }
}

(tls_using_dns_challenge) {
    tls {
        dns hetzner {env.HETZNER_API_TOKEN}
        propagation_delay 30s
    }
}

(access_log) {
    log {
        output file /var/log/caddy/access.log {
            roll_size 30MiB
            roll_keep 5
        }
    }
}

*.apps.example.com {
    import tls_using_dns_challenge
    import access_log

    route {
        crowdsec
        appsec
        reverse_proxy http://host.docker.internal:8888
    }
}
The global block configures the CrowdSec integration. The access_log snippet writes the logs that CrowdSec reads to detect malicious patterns. The crowdsec directive blocks IPs based on community threat intelligence; appsec adds layer 7 inspection to catch things like SQL injection and XSS attempts. Not bulletproof, but it filters out the noise.
CrowdSec
CrowdSec needs its own container:
services:
  crowdsec:
    image: crowdsecurity/crowdsec:latest
    environment:
      COLLECTIONS: "crowdsecurity/caddy crowdsecurity/appsec-virtual-patching crowdsecurity/appsec-generic-rules"
    volumes:
      - ./crowdsec/acquis.yaml:/etc/crowdsec/acquis.yaml:ro
      - ./crowdsec/data:/var/lib/crowdsec/data/
      - ./crowdsec/config:/etc/crowdsec/
      - ./caddy/logs:/var/log/caddy:ro
    ports:
      - "127.0.0.1:8081:8080"
      - "127.0.0.1:7422:7422"
    restart: unless-stopped
CrowdSec reads Caddy's access logs to detect malicious patterns. That's why both containers mount ./caddy/logs.
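The CROWDSEC_API_KEY used by the Caddy bouncer has to be registered with CrowdSec's local API first. A sketch of how I'd do it with cscli inside the container; the bouncer name caddy-bouncer is arbitrary:
# Register the bouncer; cscli prints the API key exactly once
docker compose exec crowdsec cscli bouncers add caddy-bouncer
# Put that key in .env as CROWDSEC_API_KEY, then recreate the Caddy container

# Later, to see what's actually being blocked:
docker compose exec crowdsec cscli decisions list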
For the full CrowdSec + Caddy setup, see this guide.
Why HAProxy
Notice how Caddy forwards to host.docker.internal:8888? That's because Caddy runs on a bridge network. I have other services on this VPS that Caddy also reverse proxies, and they all share the same Docker bridge network.
But bridge networks can't reach the Tailnet interface. The Tailscale interface lives on the host, not inside container networks.
HAProxy solves this by running in network_mode: host. It binds to the host's network stack, sees the Tailnet interface, and forwards TCP traffic to my home server's Tailnet IP.
services:
  home-proxy:
    image: haproxy:alpine
    network_mode: host
    volumes:
      - ./home-proxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    restart: unless-stopped
The haproxy.cfg:
defaults
    mode tcp
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend front
    bind *:8888
    default_backend back

backend back
    # Tailnet IPv6 address of the gaming rig; brackets keep the :80 port unambiguous
    server gamingrig [fd7a:115c:a1e0::1]:80
To let Caddy reach HAProxy on the host, add extra_hosts to the Caddy service:
extra_hosts:
  - "host.docker.internal:host-gateway"
Now Caddy forwards to host.docker.internal:8888, which hits HAProxy on the host. HAProxy forwards to the gaming rig over Tailnet. Simple bridge between two network worlds.
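To verify just this hop, the internal hostname from the connectivity test earlier works as a stand-in, since home Caddy already answers for it. A hedged check from the VPS host; any response from home Caddy means the bridge works:
# Talk to HAProxy on the host and let it carry the request over the Tailnet
curl -sI -H "Host: gamingrig.nooblab.internal" http://127.0.0.1:8888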
Home Caddy
The home Caddy needs entries for each public service. Add to the Caddyfile:
http://immich.apps.example.com {
    reverse_proxy immich:2283
}

http://chat.apps.example.com {
    reverse_proxy open-webui:8080
}
Then restart the home Caddy:
systemctl --user restart caddy
# or using the repo's CLI
bin/restart caddy
The VPS Caddy handles all *.apps.example.com with its wildcard config. Home Caddy just routes by Host header. To add a new public service, only the Home Caddyfile needs updating.
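And the end-to-end check, from any machine that is not on the Tailnet. Any response here means DNS, the wildcard cert, CrowdSec, HAProxy, the Tailnet hop, and home Caddy are all doing their jobs:
# Full public path: DNS -> VPS Caddy (TLS) -> HAProxy -> Tailnet -> home Caddy -> Immich
curl -sI https://immich.apps.example.com
# The certificate should be the Let's Encrypt wildcard
curl -vI https://chat.apps.example.com 2>&1 | grep -i issuer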
The Full compose.yml
Putting it all together:
services:
  caddy:
    build: ./caddy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy/data:/data
      - ./caddy/config:/config
      - ./caddy/logs:/var/log/caddy
    environment:
      - HETZNER_API_TOKEN=${HETZNER_API_TOKEN}
      - CROWDSEC_API_KEY=${CROWDSEC_API_KEY}
    extra_hosts:
      - "host.docker.internal:host-gateway"
    restart: unless-stopped

  crowdsec:
    image: crowdsecurity/crowdsec:latest
    environment:
      COLLECTIONS: "crowdsecurity/caddy crowdsecurity/appsec-virtual-patching crowdsecurity/appsec-generic-rules"
    volumes:
      - ./crowdsec/acquis.yaml:/etc/crowdsec/acquis.yaml:ro
      - ./crowdsec/data:/var/lib/crowdsec/data/
      - ./crowdsec/config:/etc/crowdsec/
      - ./caddy/logs:/var/log/caddy:ro
    ports:
      - "127.0.0.1:8081:8080"
      - "127.0.0.1:7422:7422"
    restart: unless-stopped

  home-proxy:
    image: haproxy:alpine
    network_mode: host
    volumes:
      - ./home-proxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    restart: unless-stopped
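Then it's the usual Compose routine:
# Build, start, and confirm all three services are up
docker compose up -d --build
docker compose ps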
Lower Your Eyebrows
"So you're running the gaming rig 24/7? What about the electricity bill?"
I don't. Think of the old days when you'd come back from vacation, plug the SD card into a reader, and move the photos. Same workflow, except with the Immich app. The homelab sleeps when I'm away.
One thing I'm eyeing for the future: PiKVM. Remote access at the hardware level, plus the ability to wake the machine while traveling.
"What happens when you want to game?"
Nothing. The containers keep running. Why would they stop?
"What if the rig needs to reboot for updates?"
Then it reboots. This isn't a production service with an SLA. It's a homelab. If it's down for 10 minutes, nobody's paging me.
"What about dynamic IP at home?"
Doesn't matter. The VPS has the static IP. My home connects outbound to the Tailnet. The VPS reaches home via Tailnet, not my home's public IP. Dynamic IP is a non-issue.
"What about noise and heat?"
Negligible. Big Noctua fans, and the GPU isn't spinning all the time. This isn't a rack server screaming in the living room.
"What's the total cost?"
The VPS is under $5/month and runs other stuff too. Electricity is negligible since it's not always-on. As for my time? I have plenty since I no longer work 9-6. (Why did 9-5 become 9-6? Corporate creep.) And I've built this to be low maintenance.
"Why not just buy a Synology NAS?"
What are you, a Synology salesman? I already have hardware; why buy more? Also, RAID is not a backup; Restic to offsite storage is.
"But you're exposing your home network!"
Not really. No port forwarding on the home router. The firewall only allows traffic from the Tailnet IPv6 range. Public traffic hits the VPS first, where CrowdSec filters out the noise. VPS to home goes over Tailnet, encrypted. The VPS is the only thing exposed to the internet, and that's the point. It's a bastion. The home network is hidden behind layers of firewall rules and an encrypted tunnel.
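For the curious, that firewall rule boils down to something like this. A sketch with firewalld (which Bazzite ships), not the exact rules I run; fd7a:115c:a1e0::/48 is the Tailnet range visible in the HAProxy backend above:
# Only the Tailnet source range may reach the home Caddy port
sudo firewall-cmd --permanent --new-zone=tailnet
sudo firewall-cmd --permanent --zone=tailnet --add-source=fd7a:115c:a1e0::/48
sudo firewall-cmd --permanent --zone=tailnet --add-port=80/tcp
sudo firewall-cmd --reload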
"What about upload speed?"
The Immich app has automatic URL switching based on whether you're on the home Wi-Fi. At home, uploads take the fast direct connection; away, they fall back to the public proxy through the VPS. The app handles it.
"Wait, the VPS uses Docker Compose? I thought you were all about Podman Quadlets."
The homelab uses Quadlets because that's what works on Bazzite. But the VPS predates the homelab. It was already running Docker Compose before any of this started. Migrating it to match would be work for no real benefit. Consistency is nice in theory. In practice, use what works where it works.
Wrapping Up
That's the full picture. A gaming rig running Podman Quadlets, connected to a self-hosted Tailnet, with a VPS handling public traffic. My mom and sibling can access Immich without learning what WireGuard is. I keep control of the TLS termination. No Cloudflare in the middle.
Two machines. One private mesh network. Public access on my terms.
The homelab repo is at github.com/ukazap/homelab-example.