i second this
i haven’t gotten around to looking into something like terraform/ansible yet, and currently rely on a series of setup.sh scripts and docker-compose files
i have a single master setup.sh at the root of my homelab which basically just outlines which scripts i need to run, and in what order, to get things back up and running from zero
i only use my README.md for non-scriptable stuff (external services i rely on, such as cloudflare/vpn providers, etc)
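for anyone curious, the master script is basically just an ordered list of stage scripts — a rough sketch (the stage names here are made up, the point is that the order lives in one place):

```shell
#!/usr/bin/env bash
# hypothetical master setup.sh -- stage names are illustrative
set -euo pipefail

for script in \
    setup-firewall.sh \
    setup-docker.sh \
    setup-reverse-proxy.sh \
    setup-media-stack.sh
do
    echo "stage: ${script}"
    if [ -x "./${script}" ]; then
        "./${script}"   # set -e aborts the whole rebuild if a stage fails
    fi
done
```

each stage script then owns its own commands, so rebuilding a machine is just "clone repo, run setup.sh"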
i mean, charitably you could say that your code/architecture should be self-documenting, versus having to rely on READMEs/wikis
in effect, if you change the code you are by definition also changing the documentation, since the file names/function names/hierarchy are clear and unambiguous
while security might be compromised if an attacker found your documentation, it could equally be compromised by having zero documentation
the easier it is for you to get things back up and running in the event of a data loss / corrupted hard drive / new machine / etc, the less likely you are to forget any crucial steps (eg setting up iptables or ufw)
this is basically what i ended up doing too - glad to see my approach verified somewhat ha ha!
but yeah, in general whenever i make a change / add a new service, i always try and add those steps to some sort of setup.sh / docker-compose
supports podcasts too? what tool are you using to download those? and does ABS handle the sorting/metadata the same way it does for audiobooks?
maybe a silly question, but does a tailscale tunnel operate in a similar fashion to a cloudflare tunnel? as in, you can remotely access your internal service over https?
i have nginx proxy manager all set up as well, but haven’t worked out the SSL part yet, so all my internal docker services are still on http
out of interest, how did you set up https with npm?
real question though is do you back up your backup server?
Why would Cloudflare warn me against a service they themselves offer? The email authentication is all managed by them
So i’ve been trying to set up this exact thing for the past few weeks - tried all manner of different Nginx/Tailscale/VPS/Traefik/Wireguard/Authelia combos, but to no avail
I was lost in the maze
However, I realised it was literally as simple as setting up a Cloudflare Tunnel on the particular local network I wanted exposed (in my case, the Docker network that runs the Jellyfin container) and then linking that domain/ip:port within Cloudflare’s Zero Trust dashboard
Cloudflare then proxies all requests to your public domain/route to your locally hosted service, all without exposing your private IP, all without exposing any ports on your router, and everything is encrypted with HTTPS by default
And you can even set up what looks like pretty robust authentication (2FA, limited to only certain emails, etc) for your tunnel
Not sure what your use case is, but as mine is shared with only me and my partner, this worked like a charm
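In case it helps anyone, here’s roughly what the cloudflared side of a docker-compose could look like — a sketch, not my exact file (the network name is illustrative, and the tunnel token is the one Cloudflare gives you in the Zero Trust dashboard when you create the tunnel):

```yaml
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    command: tunnel --no-autoupdate run
    environment:
      - TUNNEL_TOKEN=${CLOUDFLARE_TUNNEL_TOKEN}  # from the Zero Trust dashboard
    networks:
      - jellyfin_net   # illustrative: whatever network your service lives on

networks:
  jellyfin_net:
    external: true
```

Then in the Zero Trust dashboard you point the tunnel’s public hostname at the service by its container name (e.g. http://jellyfin:8096), since cloudflared sits on the same docker network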
literally was going through the exact same thoughts as you a couple of weeks ago - tried so many different configurations, but the one i found that worked was actually kinda simple
basically the way i did it was to run a gluetun docker container, tell it via environment variables that i wanted it to use WireGuard, and then pass in my Proton VPN wireguard key (you’ll need a subscription for this)
then once that gluetun container is up and running, you literally just add “network_mode: service:gluetun” to any other containers that you want to use this VPN
you can even test it’s working by sending a curl command to an ip-checking site from within the containers connected to gluetun
and then also try shutting down that gluetun container, and see if your other services (e.g. qbittorrent) still work
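a rough compose sketch of that setup (the env var names are gluetun’s; the key placeholder is whatever wireguard key Proton gives you):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=protonvpn
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=${WIREGUARD_PRIVATE_KEY}  # placeholder, yours to fill

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"  # all traffic goes via the VPN container
    depends_on:
      - gluetun
```

the ip check is then something like `docker compose exec qbittorrent curl -s ifconfig.me` (assuming curl exists in the image) - it should print the VPN’s ip, not yours. note that with network_mode like this, any ports (e.g. the qbittorrent web ui) have to be published on the gluetun container instead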
I’ve literally just set this all up and it’s working now after some tinkering, so here’s what I found out. Assuming you have correctly configured the Sonarr/qBittorrent api keys and credentials:
When you make a TV show request in Sonarr, it will automatically add the torrent to your download client (e.g. qBittorrent)
qBittorrent will then download the file to wherever you specify (e.g. /torrents/completed)
periodically, Sonarr will scan that /torrents/completed folder, and if it finds the tagged TV show it will either copy or hard-link the video file to your specified media folder (e.g. /media/tv-shows)
Jellyfin will do the same, periodically scanning your media folders to see if there are any updates
EDIT: also, if you are using docker containers, make sure that Sonarr’s native /downloads folder is pointed at the same external folder qBittorrent is downloading files into
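Concretely, that means the volume mappings on both containers should point at the same host folder — something like this (paths are illustrative):

```yaml
services:
  qbittorrent:
    volumes:
      - /srv/torrents:/downloads    # qBittorrent saves completed files here
  sonarr:
    volumes:
      - /srv/torrents:/downloads    # same host folder, same container path
      - /srv/media/tv-shows:/tv     # where sorted episodes end up
```

One gotcha: hard links only work if both folders are on the same filesystem — otherwise Sonarr silently falls back to copying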
Here’s my approach to documentation. It’s about habits as much as it’s about actually writing anything down:
Never set up anything important via naked terminal commands that you will forget you ran
Always wrap important commands in some kind of “setup-xyz.sh” script and then run that script to see if your install worked.
If you need to make a change to your service, ensure you update your script so it can be re-run without breaking anything
Get into the habit of this and you are documenting as you go
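The “re-run without breaking anything” part is the bit that takes practice. A toy sketch of the pattern (the paths and config name are made up, and mktemp stands in for real locations just so this runs anywhere):

```shell
#!/usr/bin/env bash
# hypothetical setup-media.sh -- every step is a no-op the second time around
set -euo pipefail

BASE="$(mktemp -d)"               # stand-in for / or $HOME in a real script
CONFIG="${BASE}/myservice.conf"   # made-up config file

setup() {
    mkdir -p "${BASE}/media/tv-shows"   # -p: fine if it already exists
    touch "${CONFIG}"
    # only append the setting if it isn't already there
    if ! grep -q '^media_dir=' "${CONFIG}"; then
        echo "media_dir=${BASE}/media/tv-shows" >> "${CONFIG}"
    fi
}

setup
setup   # re-run: nothing changes, nothing breaks
```

The same idea scales up: guard each package install, user creation, firewall rule etc. behind a check of the state it’s meant to create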