Why Self-Host? (romanzipp.com)
345 points by romanzipp 4 days ago
"start self-hosting more of your personal services."
I would make the case that you should also self-host more as a small software/SaaS business, and that it is not the boogeyman a lot of cloud vendors want you to think it is.
Here is why. Most software projects/businesses don't require the scale and complexity for which you truly need the cloud vendors and their expertise. For example, you don't need Vercel or Netlify to deploy Next.js or a static website. You can set up Nginx or Caddy (my favorite) on a simple Ubuntu VPS and boom. For the majority of projects, that will do.
90%+ of projects can be self-hosted with the following (rough sketches of each piece follow the list):
- A well-hardened VPS with good security controls. There are plenty of good articles online on how to do the most important things (disable root login, allow key-based SSH only, etc.).
- Set up a reverse proxy like Caddy (my favorite) or Nginx. Boom: static files and static websites can now be served. No need for a CDN unless you are talking about millions of requests per day.
- Run your backend/API under something simple like supervisor, or even native systemd.
- The same reverse proxy can also forward requests to the backend and other services as needed. Not that hard.
- Self-host a MySQL/Postgres database and set up the right security controls (at minimum, don't expose it to the internet).
- Most importantly: set up backups for everything using a script and cron, and test restoring them periodically.
- If you really want to feel safe against DoS/DDoS, add Cloudflare in front of everything.
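To make those bullets concrete, here are rough sketches of a few of the pieces; treat them as illustrations with made-up names and paths, not drop-in configs. First, the SSH hardening, via /etc/ssh/sshd_config:

    # /etc/ssh/sshd_config (reload with: sudo systemctl reload ssh on Ubuntu)
    PermitRootLogin no          # no direct root login
    PasswordAuthentication no   # key-based auth only
    PubkeyAuthentication yes
    AllowUsers deploy           # "deploy" is a hypothetical admin user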
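For the backend/API bullet, a minimal systemd unit; the user, paths, and binary name are placeholders:

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=My app backend
    After=network.target

    [Service]
    User=app
    WorkingDirectory=/srv/myapp
    ExecStart=/srv/myapp/bin/server
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Then "systemctl enable --now myapp" and the reverse proxy just points at whatever port it listens on.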
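For the database bullet, the single most useful control is not exposing it to the internet at all. For Postgres, roughly (file locations vary by distro and version):

    # postgresql.conf: listen on localhost only
    listen_addresses = 'localhost'

    # pg_hba.conf: local TCP connections, password-authenticated
    host    all    all    127.0.0.1/32    scram-sha-256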
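And for the backups bullet, a bare-bones nightly sketch (database name, paths, and the offsite step are placeholders; swap in mysqldump for MySQL):

    #!/usr/bin/env bash
    # /usr/local/bin/backup.sh -- dump the DB and app data, keep dated copies
    set -euo pipefail
    ts=$(date +%F)
    pg_dump -U app mydb | gzip > /var/backups/mydb-$ts.sql.gz
    tar czf /var/backups/files-$ts.tar.gz /srv/myapp/uploads
    # copy offsite however you like, e.g. rsync/rclone to another box or a bucket

    # crontab entry: run at 03:00 every night
    # 0 3 * * * /usr/local/bin/backup.sh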
So you end up with:
Cloudflare/DNS => Reverse Proxy (Caddy/Nginx) => Your App.
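As a sketch of that chain, a Caddyfile covering the "serve static files" and "forward to the backend" bullets; the domains, port, and paths are made up, and Caddy handles the TLS certificates for you:

    example.com {
        root * /var/www/site
        file_server
        encode gzip
    }

    api.example.com {
        reverse_proxy localhost:8080   # your backend from the systemd unit above
    }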
- Want to deploy? A git pull should do it for most projects (PHP etc.). If you have to rebuild a binary, that is one more step, but still doable; see the sketch below.
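For example, a deploy can be as small as this (repo path, service name, and build step are assumptions):

    # deploy.sh -- run on the server, or via ssh from your machine
    cd /srv/myapp
    git pull --ff-only
    # only needed if there is something to build:
    # make build   (or: go build ./... , npm run build, etc.)
    sudo systemctl restart myapp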
You don't need Docker or containers. They can help, but they are not needed for small or even mid-sized projects.
Yes, you can claim that a lot of these things are hard, and I would say they are not that hard. The majority of projects don't need "web scale" or anything close to it.
The main thing that gives me anxiety about this is the security surface area of "managing" a whole OS: kernel, userland, all of it. Did I get the firewall configured correctly? Am I staying on top of the latest CVEs? Etc.
For that reason alone I'd be tempted to do: GitHub Actions workflow -> build a container image and push it to a private registry -> trivial k8s config that deploys that container with the proper ports exposed.
Run that on someone else's managed k8s setup (or Talos if I'm self-hosting) and it's basically as easy as having done it on my own VM, but this way I'm only responsible for my application and its interface.
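For reference, the "trivial k8s config" can really be about this small; the image name and port are placeholders, and you'd typically put a Service/Ingress in front of it:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 1
      selector:
        matchLabels: { app: myapp }
      template:
        metadata:
          labels: { app: myapp }
        spec:
          containers:
            - name: myapp
              image: registry.example.com/myapp:latest  # pushed by the CI workflow
              ports:
                - containerPort: 8080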
I left my VPS open to password logins for over 3 years, no security updates, no firewalls, no kernel updates, no apt upgrades; only fail2ban and I survived: https://oxal.org/blog/my-vps-security-mess/
Don't be me, but even if you royally mess up things won't be as bad as you think.
I've had password login enabled for decades on my home server, not even fail2ban. But I do have an "AllowUsers" list with three non-cryptic user names. (None of them are my domain name, but nice try.)
Last month I had 250k failed password attempts. If I had a "weak" password of 6 random letters (I don't), and all 250k had guessed a valid username (only 23 managed that), that would give... uh, one expected success every 70 years?
That sounds risky actually. So don't expose a "root" user with a 6-letter password. Add two more letters and it is 40k years. Or use a strong password and forget about those random attempts.
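Roughly, assuming lowercase-only letters and every guess aimed at the one account with the weak password:

    26^6 ≈ 3.1 * 10^8 possible 6-letter passwords
    250,000 guesses/month ≈ 3 * 10^6 guesses/year
    expected time to a hit ≈ 3.1*10^8 / 3*10^6 ≈ on the order of a century

Every extra letter multiplies that by 26, so two extra letters multiply it by 676.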
I wonder about:
- silently compromised systems, active but unknown
- VPS provider doing security behind your back
I'd be worried about this too. Like there must be AI bots that "try the doors" on known exploits all over the internet, and once inside just do nothing but take a look around and give themselves access for the future. Maybe they become a botnet someday, but maybe the agent never saw the server doing anything of value worth waking up its master for: running a crypto wallet, a shard of a database with a "payments" table, an instance of a password manager like Vault, or who knows what else might get flagged as interesting.
Security is way more nuanced than "hey look I left my door open and nothing happened!". You are suggesting, perhaps inadvertently, a very dangerous thing.
> Run that on someone else's managed k8s setup ... this way I'm only responsible for my application and its interface.
It's the eternal trade-off of security vs. convenience. The downside of this approach is that if there is a vulnerability, you will need to wait on someone else to get the fix out. Probably fine nearly always, but you are giving up some flexibility.
Another way to get a reasonable handle on the "managing a whole OS ..." complexity is to use some tools that make it easier for you, even if it's still "manually" done.
Personally, I like FreeBSD + ZFS-on-root, which gives you "boot environments"[1]. They let you do OS upgrades worry-free, since you can always roll back to the old working BE.
But also I'm just an old fart who runs stuff on bare metal in my basement and hasn't gotten into k8s, so YMMV (:
[1] eg: https://vermaden.wordpress.com/2021/02/23/upgrade-freebsd-wi... (though I do note that BEs can be accomplished without ZFS, just not quite as featureful. See: https://forums.freebsd.org/threads/ufs-boot-environments.796...)
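In practice an upgrade with boot environments looks roughly like this (the release number is just an example; bectl ships with FreeBSD 12+):

    bectl create pre-14.1-upgrade           # snapshot the current OS
    freebsd-update upgrade -r 14.1-RELEASE
    freebsd-update install                  # run again after reboot, as prompted
    # if the new system misbehaves, boot back into the old environment:
    bectl activate pre-14.1-upgrade && shutdown -r now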
I think for a normal shlub like me who is unlikely to be on top of everything it’s really more of a cost / convenience tradeoff.
It might take Amazon or Google a few hours or a day to deploy a critical zero-day patch but that’s in all likelihood way better than I’d do if it drops while I’m on vacation or something.
I used DigitalOcean for hosting a WordPress blog.
It got attacked pretty regularly.
I would never host an open server from my own home network for sure.
This is the main value-add I see in cloud deployments: OS patching, security, the trivial stuff I don't want to have to deal with on the regular but that is super important.
WordPress is just low-hanging fruit for attackers. Ideally the default behavior would be to expose /wp-admin on a completely separate network, behind a VPN, but no one does that, so you have to run fail2ban or similar to stop the flood of /wp-admin/admin.php requests in your logs, and deal with WordPress CVEs and updates.
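As a sketch of the "behind a VPN" idea in Nginx terms: unauthenticated /wp-admin requests get redirected to wp-login.php anyway, so gating the login page covers the common flood. The VPN subnet and PHP-FPM socket path here are assumptions; adjust to your setup:

    # inside the server block that serves WordPress:
    # only the VPN subnet may reach the login endpoint
    location = /wp-login.php {
        allow 10.8.0.0/24;                           # hypothetical VPN subnet
        deny  all;
        include snippets/fastcgi-php.conf;           # Debian/Ubuntu layout
        fastcgi_pass unix:/run/php/php8.3-fpm.sock;  # match your PHP version
    }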
More ideal: don't run WordPress. A static site doesn't execute code on your server and can't be used as an attack vector. It is also perfectly cacheable via your CDN of choice (Cloudflare, whatever).
A static site does run on a web server.
Yes, but the web server is just reading files from disk, not invoking an application server. So if you keep your web server up to date, you are at much lower risk than if you also had to keep your application and its programming environment secure.
That really depends on the web server, and on the web app you'd otherwise be writing. If it's a shitty static web server, then a JVM- or BEAM-based web app might actually be safer.
Uh, yeah, I was thinking of Nginx or Apache, and would expect them to be more secure than your average self-written application.
A static site is served by a web server, but the software that generates it runs elsewhere.
Yes. And a web server has an attack surface, no?
I think it’s reasonable to understand that Nginx/Caddy serving static files (or, better yet, a public S3 bucket doing so) is way, way less of a risk than a dynamic application.
Of course, that’s true for those web servers if they are kept up to date. If not, the attack surface is actually huge, because exploits are well known.