I got hacked: My Hetzner server started mining Monero
blog.jakesaunders.dev
492 points by jakelsaunders94 17 hours ago
> I also enabled UFW (which I should have done ages ago)
I'd recommend against UFW.
firewalld is a much better pick in current year and will not grow unmaintainable the way UFW rules can.
firewall-cmd --set-default-zone=block
firewall-cmd --permanent --zone=block --add-service=ssh
firewall-cmd --permanent --zone=block --add-service=https
firewall-cmd --permanent --zone=block --add-port=80/tcp
firewall-cmd --reload
Configuration is backed by XML files in /etc/firewalld and /usr/lib/firewalld instead of the brittle pile of sticks that is the ufw rules files. Use the nftables backend unless you have your own reasons for needing legacy iptables.
Specifically for docker it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway. Depending on your configuration, those firewall rules in OP may not actually do anything to prevent docker from opening incoming ports.
Newer versions of firewalld give an easy way to configure this via StrictForwardPorts=yes in /etc/firewalld/firewalld.conf.
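For reference, that's a one-line change plus a service restart (this needs a fairly recent firewalld; double-check that your version supports the option):
# /etc/firewalld/firewalld.conf
StrictForwardPorts=yes
# firewalld.conf changes are picked up on a service restart
sudo systemctl restart firewalld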
If you can, do not expose ports like this: 8080:8080. Instead do "192.168.0.1:8080:8080" so it's bound to a private IP. Then use any old method to expose only what you want to the world.
In my own use I have 10.0.10.11 on the VM where I host docker stuff. It doesn't even have its own public IP, meaning I could actually expose to 0.0.0.0 if I wanted to, but things might change in the future so it's a precaution. That IP is only accessible via wireguard and by the other machines that share the same subnet, so reverse proxying with caddy on a public IP is super easy.
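To make the earlier point concrete, a minimal sketch of the two publish styles (addresses and image name are placeholders):
# reachable from every interface, including the public one
docker run -d -p 8080:8080 myapp:latest
# reachable only via the private address (e.g. the wireguard-only IP)
docker run -d -p 10.0.10.11:8080:8080 myapp:latest
The same IP:hostPort:containerPort form works in a compose file's ports: list.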
It's really a trap. I'm surprised they never changed the default to 127.0.0.1 instead of 0.0.0.0, so you would need to explicitly specify it if you want to bind to all interfaces.
The reason is convenience. There would be a lot more friction if they didn't do it like this for everything other than local development.
Docker also has more traps, and not quite as obvious as this. For example, it can change the private IP block it's using without telling you. I got hit by this once due to a clash with a private block I was using for some other purpose. There's a way to fix it in the config but it won't affect already created containers.
By the way. While we're here. A public service announcement. You probably do NOT need the userland-proxy and can disable it.
/etc/docker/daemon.json
{ "userland-proxy": false }
Is there a guide that lists some common options / gotchas in Docker like this?
Some quick searching yields generic advice about keeping everything updated or running in rootless mode.
Not that I'm aware of. Sorry. Here's one of my daemon.json files though. It tames the log file size and sets its format. And fixes the IP block so it won't change like I mentioned above.
{
"log-driver": "json-file",
"log-opts": {
"labels": "production_status",
"tag": "{{.ImageName}}|{{.Name}}|{{.ImageFullID}}|{{.FullID}}",
"env": "os,customer",
"max-size": "10m"
},
"bip": "172.17.1.1/24",
"default-address-pools": [
{"base": "172.17.0.0/16", "size": 24}
]
}
Yup, the regular "8080:8080" bind resulted in a ransom note in my database on day 1. Bound it to localhost only now.
I had the same experience (postgres/postgres on default port). It took me a few days to find out, because the affected database was periodically re-built from another source. I just noticed that for some periods the queries failed until the next rebuild.
Yea plenty of bots scouting the internet for these kinda vulnerabilities, good learning experience, won't happen again :D
One thing I always forget about is that you have a whole network of 127.0.0.0/8, not just one IP.
So you can create multiple addresses with multiple separate "domains" mapped statically in /etc/hosts, and allow multiple apps to listen on "the same" port without conflicts.
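A small sketch of what that looks like in practice (host names and ports are made up; on Linux the whole 127.0.0.0/8 range hits loopback without extra setup):
# /etc/hosts
127.0.0.2   app-one.local
127.0.0.3   app-two.local
# two services can now both use "port 8080" without clashing
docker run -d -p 127.0.0.2:8080:8080 app-one:latest
docker run -d -p 127.0.0.3:8080:8080 app-two:latest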
> Specifically for docker it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway.
Like I said in another comment, drop Docker, install podman.
I keep reading comments by podman fans asking to drop Docker, and yet every time I have tried to use podman it failed on me miserably. IMHO it would be better if podman was not designed and sold as a drop-in Docker replacement but as its own thing.
That sucks, I never had any problem running a Dockerfile in podman. I don't know what I do differently, but I would as a principle filter out any container that messes with stuff like docker in docker. Podman doesn't need these kinds of shenanigans.
Also the Docker Compose tool is a well-known exception to the compatibility story. (There is some unofficial podman compose tool, but that is not feature complete and quadlets are better anyway.)
I agree with approaching podman as its own thing though. Yes, you can build a Dockerfile, but buildah lets you build an efficient OCI image from scratch without needing root. For those interested, this document¹ explains how buildah compares to podman and docker.
1. https://github.com/containers/buildah/tree/main/docs/contain...
There's a real dearth of blog posts explaining how to use quadlets for the local dev experience, and actually most guides I've seen seem to recommend using podman/Docker compose. Do you use quadlets for local dev and testing?
I just use Podman's Kubernetes yaml as a compose substitute when running everything locally. This way it is fairly similar to production. Docker compose seems very niche to me now.
podman is not a drop-in replacement for Docker. You can replace it with podman but expect to encounter minor inconsistencies and some major differences, especially if you use Docker Compose or you want to use podman in rootless mode. It's far from just being a matter of `alias docker=podman`.
The only non-deprecated way of having your Compose services restart automatically is with Quadlet files, which are systemd unit files with extra options specific to containers. You need to manually translate your docker-compose.yml into one or more Quadlet files. Documentation for those leaves a lot to be desired too; it's just one huge itemized man page.
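For anyone who hasn't seen one, a minimal Quadlet sketch looks roughly like this (file name, image and port are invented; podman-systemd.unit(5) has the full option list):
# ~/.config/containers/systemd/myapp.container
[Container]
Image=docker.io/library/nginx:latest
PublishPort=127.0.0.1:8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
After a systemctl --user daemon-reload it shows up as a regular myapp.service you can start and enable like any other unit.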
This affects podman too.
Not if you run it in rootless mode, which is more of a first class citizen in Podman compared to Docker.
> Not if you run it in rootless mode.
Same as for docker, yes?
Rootless exists in Docker, yes, but as OP said, it's not first-class. The setup process is clunky, things break more often. In podman it just works, and podman is leading with features like quadlets, which make docker services just services like any other.
nope. You should look at https://docs.docker.com/engine/network/
Networking is just better in podman.
> nope. You should look at https://docs.docker.com/engine/network/
That page does not address rootless Docker, which can be installed (not just run) without root, so it would not have the ability to clobber firewall rules.
In docker, simply define the interface (IP) and port clearly. It can be 0.0.0.0:80, for example. No bypass happens.
it doesn't matter what netfilter frontend you use if you allow outbound connections from any binary.
In order to stop these attacks, restrict outbound connections from unknown / not allowed binaries.
This kind of malware in particular requires outbound connections to the mining pools. Others download scripts or binaries from remote servers, or try to communicate with their C2 servers.
On the other hand, removing exec permissions from /tmp, /var/tmp and /dev/shm is also useful.
> On the other hand, removing exec permissions to /tmp, /var/tmp and /dev/shm is also useful.
Sadly that's more of a duct tape or plaster fix, because any serious malware will launch their scripts with the proper '/bin/bash /path/to/dropped/payload' invocation. A non-exec mount works reasonably well only against actual binaries dropped into the paths, because it's much less common to launch them with the less known '/bin/ld.so /path/to/my/binary' stanza.
I've also at one time suggested that Debian installer should support configuring a read-only mount for /tmp, but got rejected. Too many packaging scripts depend on being able to run their various steps from /tmp (or more correctly, $TMPDIR).
I agree. That's why I said that it's also useful. It won't work in all scenarios, but in most of the cryptomining attacks, files dropped to /tmp are binaries.
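For anyone who wants to try the noexec mounts mentioned above, a sketch (sizes are placeholders; test that your workloads still behave):
# /etc/fstab
tmpfs  /tmp      tmpfs  defaults,noexec,nosuid,nodev,size=1G  0  0
tmpfs  /dev/shm  tmpfs  defaults,noexec,nosuid,nodev          0  0
# or apply live, which only works if /tmp is already its own mount
sudo mount -o remount,noexec,nosuid,nodev /tmp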
It is really unfortunate that a lot of services expect to have write access to their config files, so you can tweak settings with a web UI.
If this weren't the case, plenty of containers could probably have a fully read-only filesystem.
Wasn’t there that npm malware thing a while ago that trashed your home folder if it couldn’t phone home?
Is there an automated way of doing this?
For restricting outbound connections by binary: OpenSnitch.
You can also restrict outbound connections to cryptomining pools and malicious IPs, for example by using IOCs from VirusTotal or URLhaus (urlhaus.abuse.ch).
Two paths:
- Configuration management (ansible, salt, chef, puppet)
- Preconfigured images (NixOS, packer, Guix, atomic stuff)
For a one-off: pssh
Hetzner has a free firewall service outside of your machine. You can use that as the first line of defence.
The problem with Hetzner's firewall service is it nukes network performance, especially on IPv6.
That's what I use. Is it enough? Or should I also install a firewall on my machine?
Do both. Using the provider's firewall service adds another level of defence. But hiccups may occur and firewall rules may briefly disappear (sync issues, upgrades, VM mobility issues), and your services then may become exposed. Happened to me in the past; was "lucky" enough that no damage was taken.
Personally I find just using nftables.conf straightforward enough that I don't really understand the need for anything additional. With iptables, it was painful, but iptables has been deprecated for a while now.
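For comparison, a minimal /etc/nftables.conf in that spirit (a sketch, not a drop-in config; ports are examples, and note that a drop policy on forward will fight with Docker's own rules):
#!/usr/sbin/nft -f
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iifname "lo" accept
        ip protocol icmp accept
        icmpv6 type { echo-request, nd-neighbor-solicit, nd-neighbor-advert } accept
        tcp dport { 22, 80, 443 } accept
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
}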
The problem with firewalld is that it has the worst UX of any program I know. Completely unintuitive options, the program itself doesn’t provide any useful help or hint if you get anything wrong and the documentation is so awful you have to consult the Red Hat manuals that have thankfully been written for those companies that pay thousands per month in support.
It’s not like iptables was any better, but it was more intuitive as it spoke about IPs and ports, not high-level arbitrary constructs such as zones and services defined in some XML file. And since firewalld uses iptables/nftables underneath, I wonder why I need a worse leaky abstraction on top of what I already know.
I truly hate firewalld.
Coming from FreeBSD and pf, all Linux firewalls I’ve tried feel clunky _at best_ UX-wise.
I’d love a Linux firewall configured with a sane config file and I think BSD really nailed it. It’s easy to configure and still human readable, even for more advanced firewall gateway setups with many interfaces/zones.
I have no doubt that Linux can do all the same stuff feature-wise, but oh god the UX :/
I completely agree.
I have been using both Linux and FreeBSD for many decades, on many kinds of computers.
When comparing Linux with FreeBSD, I probably do not find anything more annoying on Linux than its networking configuration tools.
While I am using Linux on my laptops and desktops and on some servers with computational purposes, on the servers that host networking services I much prefer FreeBSD, for the ease of administration.
nftables is configured like this: https://wiki.nftables.org/wiki-nftables/index.php/Simple_rul...
Hate it as well. Why should I bother learning about zones and abstracting away ports, addresses, interfaces etc., only to find out pretty soon that my bare-metal server actually always needs fine-grained rules, at least from firewalld's point of view.
Do either of them work for container-to-container traffic?
Imagine container A which exposes tightly-coupled services X and Y. Container B should be able to access only X, container C should be able to access only Y.
For some reason there just isn't a convenient way to do this with Docker or Podman. Last time I looked into it, it required having to manually juggle the IP addresses assigned to the container and having the service explicitly bind to it - which is just needlessly complicated. Can firewalls solve this?
You might be interested in ufw-docker: https://github.com/chaifeng/ufw-docker
UFW and Firewall-CMD both just use iptables in that context though. The real upgrade is in switching to nftables. I know I'm going to need to learn eBPF as the next step too, but for now nftables is readable and easy to grok, especially after you rip out the iptables stuff, though technically nftables is still using netfilter.
And ufw supports nftables btw. I think the real lesson is write your own firewalls and make them non-permissive - then just template that shit with CaC.
I’ll just mention Foomuuri here. It's a bit of a spiritual successor to Shorewall and has firewalld emulation to work with tools compatible with firewalld.
Thanks! Would be cool to have it packaged for alpine since firewalld requires D-Bus. There is awall but that's still on iptables and IMO a bit clunky to set up.
Foomuuri is ALMOST there.
I mean there are some payload-over-payload cases like GRE VPE/VXLAN/VLAN or IPSec that need to be written in raw nft if using Foomuuri, but it works!
But I love the Shorewall approach, and your configuration gracefully encapsulates the Shorewall mechanics.
Disclaimer: I maintain the vim-syntax-nftables syntax highlighter repo on GitHub.
> Specifically for docker it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway. Depending on your configuration, those firewall rules in OP may not actually do anything to prevent docker from opening incoming ports.
This sounds like great news. I followed some of the open issues about this on GitHub and it never really got a satisfactory fix. I found some previous threads on this "StrictForwardPorts": https://news.ycombinator.com/item?id=42603136.
So this is part of the "React2Shell" CVE-2025-55182 issue? I find it interesting that this seems to get so little publicity. Almost like the issue is normal or expected. And it looks like the affected versions go back a little over a year. So if you've deployed anything with Next.js over the last 12 months your web app is now probably part of a million node bot net. And everyone's advice is just "use docker" or "install a firewall".
I'm not even sure what to say, or think, or even how to feel about the frontend ecosystem at this point. I've been debating leaving the whole "web app" ecosystem as my main employment venture and applying to some places requiring C++. C++ seems much easier to understand than whatever the latest frontend fad is. /rant
Frontend churn has chilled out so much over the last few years. The default webapp stack today has been the same for 5 years now, next.js (9yo) react (12yo) tailwind (8yo) postgres (36yo). I'm not endorsing this stack, it just seems to be the norm now.
Compare that to what we had in the late 00's and early 10's: we went through prototype -> mootools -> jquery -> backbone -> angularjs -> ember -> react, all in about 6 years. That's a new recommended framework every year. If you want to complain about fads and churn, hop on over to AI development, they have plenty.
You can write web apps without touching the hottest JS framework of the week. I've never touched these frameworks that try to blur the line between frontend and backend.
Pick a solid technology (.NET, Java, Go, etc...) for the backend and use whatever you want for your frontend. Voila, less CVEs and less churn!
I had a Pangolin instance compromised by this: https://github.com/orgs/fosrl/discussions/2014
I'm hearing about it like crazy because I deployed around 100 Next frontends in that time period. I didn't use server components though so I'm not affected.
My understanding of the issue is that even if you don't use server components, you're still vulnerable.
Unless you're running a static html export - eg: not running the nextjs server, but serving through nginx or similar
Just a note - you can very much limit CPU usage on docker containers by setting --cpus="0.5" (or cpus: 0.5 in docker compose) if you expect it to be a very lightweight container. This isolation can help prevent one rowdy container from hitting the rest of the system, regardless of whether it's crypto-mining malware, a DDoS attempt or a misbehaving service/software.
Another option is running containers in read-only mode, assuming they support this configuration... it will minimize a lot of potential attack surface.
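Putting those two together, a hedged docker run sketch (image name and limits are placeholders):
docker run -d \
  --name web \
  --cpus="0.5" \
  --memory="256m" \
  --read-only \
  --tmpfs /tmp \
  -p 127.0.0.1:8080:8080 \
  myapp:latest
The --tmpfs mount is usually what keeps read-only images from falling over when they need somewhere scratch to write.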
Never looked into this. I would expect the majority of images would fail in this configuration. Or am I unduly pessimistic?
Many fail if you do it without any additional configuration. In Kubernetes you can mostly get around it by mounting `emptyDir` volumes to the specific directories that need to be writable, `/tmp` being a common culprit. If they need to be writable and have content that exists in the base image, you'd usually mount an emptyDir to `/tmp` and copy the content into it in an `initContainer`, then mount the same `emptyDir` volume to the original location in the runtime container.
Unfortunately, there is no way to specify those `emptyDir` volumes as `noexec` [1].
I think the docker equivalent is `--tmpfs` for the `emptyDir` volumes.
Readonly and rootless are my two requirements for Docker containers. Most images can't run readonly because they try to create a user in some startup script. Since I want my UIDs unique to isolate mounted directories, this is meaningless. I end up having to wrap or copy Dockerfiles to make them behave reasonably.
Having such a nice layered buildsystem with mountpoints, I'm amazed Docker made readonly an afterthought.
I like steering docker runs with docker-compose, especially with .env files - easy to store in repositories, easy to customise and have sane defaults.
Yeah agreed. I use docker-compose. But it doesn't help if the Docker images try to update /etc/passwd, or force a hardcoded UID, or run some install.sh at runtime instead of buildtime.
Depends on specific app use case. Nginx doesn't work with it but valkey will.
While this is a good idea I wonder if doing this could allow the intrusion to go undetected for longer - how many people/monitoring systems would notice a small increase in CPU usage compared to all CPUs being maxed out.
This is true, but it's also easy to set at one point and then later introduce a bursty endpoint that ends up throttled unnecessarily. Always a good idea to be familiar with your app's performance profile but it can be easy to let that get away from you.
Soft and hard memory limits are worth considering too, regardless of container method.
The other thing to note is that docker is, for the most part, stateless. So if you're running something that has to deal with questionable user input (images and video, or more importantly PDFs), the move is to stick it on its own VM, cycle the docker container every hour and the VM every 12, and then still be worried about it getting hacked and leaking secrets.
If I can get in once, I can do it again an hour later. I'd be inclined to believe that dumb recycling is not very effective against a persistent attacker.
Most of this is mitigated by running docker in an LXC container (like proxmox does), which grants a lot more isolation than docker on its own - closer in nature to running separate VMs.
No firewall! Wow that's brave. Hetzner will let you configure one that runs outside of the box so you might want to add that too, as part of your defense in depth - that will cover you if you make a mistake with ufw. Personally I keep SSH firewalled only to my home address in this way; if I'm out and about and need access, I can just log into Hetzner's website and change it temporarily.
Firewalls in the majority of cases don't get you much. Yes, it's a last line of defense if you do something really stupid and don't even know where or what you configure your services to listen on, but if you don't, the difference between running firewalls and not is minuscule.
There are way more important things, like actually knowing that you are running software with a widely known RCE that doesn't even use established mechanisms to sandbox itself, it seems.
The way the author describes docker being the savior appears to be sheer luck.
The author mentioned they had other services exposed to the internet (Postgres, RabbitMQ) which increases their attack surface area. There may be vulnerabilities or misconfigurations in those services for example.
Good security is layered.
But if they have to be exposed then a firewall won't help, and if they don't have to be exposed to the internet then a firewall isn't needed either, just configure them not to listen on non-local interfaces.
This sounds like an extremely effective foot gun.
Just use a firewall.
If you're at a point where you are exposing services to the internet but you don't know what you're doing you need to stop. Choosing what interface to listen on is one of the first configuration options in pretty much everything, if you're putting in 0.0.0.0 because that's what you read on some random blogspam "tutorial" then you are nowhere near qualified to have a machine exposed to the internet.
I'm not sure what you mean, what sounds dangerous to me is not caring about what services are listening to on a server.
The firewall is there as a safeguard in case a service is temporarily misconfigured, it should certainly not be the only thing standing between your services and the internet.
extremely loud incorrect buzzer noise, what are you going to say next "bastion servers are a scam"
But the firewall wouldn't have saved them if they're running a public web service or need to interact with external services.
I guess you can have the appserver fully firewalled and have another bastion host acting as an HTTP proxy, both for inbound as well as outbound connections. But it's not trivial to set up especially for the outbound scenario.
No you're right, I didn't mean the firewall would have saved them, but just as a general point of advice. And yes a second VPS running opnSense or similar makes a nice cheap proxy and then you can firewall off the main server completely. Although that wouldn't have saved them either - they'd still need to forward HTTP/S to the main box.
A firewall blocking outgoing connections (except those whitelisted through the proxy) would’ve likely prevented the download of the malware (as it’s usually done by using the RCE to call a curl/wget command rather than uploading the binary through the RCE) and/or its connection to the mining server.
How many people do proper egress filtering though, even when running a firewall
In practice, this is basically impossible to implement. As a user behind a firewall you normally expect to be able to open connections with any remote host.
Not impossible at all with a policy-filtering HTTPS proxy. See https://laurikari.github.io/exfilguard/
In this model, hosts don’t need any direct internet connectivity or access to public DNS. All outbound traffic is forced through the proxy, giving you full control over where each host is allowed to connect.
It’s not painless: you must maintain a whitelist of allowed URLs and HTTP methods, distribute a trusted CA certificate, and ensure all software is configured to use the proxy.
The only time I have ever had a machine compromised in 30 years of running Linux is when I ran something exposed to the internet on a well known port.
I know port scanners are a thing but the act of using non-default ports seems unreasonably effective at preventing most security problems.
This is very, very, very bad advice. A non-standard port is not a defence. It’s not even slightly a defence.
Correct. From what I understand, Shodan has had for years a search feature in their paid plans to query for "service X listening on non-standard port". The only sane assumption is that any half-decent internet-census[tm] tool has the same as standard by now.
I do this too, but I think it should only be a defense-in-depth thing; you still need the other measures.
Password auth being enabled is also very brave. I don’t think fail2ban is necessary personally, but it’s popular enough that it always comes up.
I don't whitelist IPs for ssh anymore, but I always run sshd on a randomly selected port, in order not to get noticed by port scanners.
I've been doing it for a really long time already, and until now I am not sure if it has any benefit or it's just an umbrella in a sideways storm.
As long as you understand it's security by obscurity, rather than by cryptography.
I don't think it's wrong, it's just not the same as eg using a yubikey.
For the record this is only available for their VPS offering and not dedis. If you rent a dedi through their server auction you still need to configure your own firewall.
Dedicated servers can configure external firewalls too; there's a tab for it on the server config. It's basic but functional.
I have SSH blocked altogether and use wireguard to access the server. If something goes wrong I can always go to the dashboard and reenable SSH for my IP. But ultimately your setup is just as secure. Perhaps a tiny bit less convenient.
Yup. All my servers are behind Tailscale. The only thing I expose is a load balancer that routes tcp (email) and http. That balancer is running docker, fully firewalled (incl docker bypasses). Every server is behind Hetzner's firewall in addition to the internal firewall.
App servers run docker, with images that run a single executable (no os, no shell), strict cpu and memory limits. Most of my apps only require very limited temporary storage so usually no need to mount anything. So good luck executing anything in there.
I used, way back in the day, to run Wordpress sites. Would get hacked monthly every possible way. Learned so much, including the fact that often your app is your threat. With Wordpress, every plugin is a vector. Also the ability to easily hop into an instance and rewrite running code (looking at you scripting languages incl JS) is terrible. This motivated my move to Go. The code I compiled is what will run. Period.
Honestly fail2ban is amazing. I might do a write-up on the countless attempts on my servers.
The only way I've envisioned fail2ban to be of any use at all is if you gather IPs from one server and use them on your whole fleet, and I got it running like this for a while. Ultimately I decided that all it does is give you a cleaner log file, since by definition it's working on logs for attacks/attempts that did not succeed. We need to stop worrying about attempts we see in the logs and let software do its job.
> The Reddit post I’d seen earlier? That guy got completely owned because his container was running as root. The malware could: [...]
Is that the case, though? My understanding was that even if I run a docker container as root and the container is 100% compromised, there still would need to be a vulnerability in docker for it to “attack” the host, or am I missing something?
While this is true, the general security stance on this is: Docker is not a security boundary. You should not treat it like one. It will only give you _process level_ isolation. If you want something with better security guarantees, you can use a full VM (KVM/QEMU), something like gVisor[1] to limit the attack surface of a containerized process, or something like Firecracker[2] which is designed for multi-tenancy.
The core of the problem here is that process isolation doesn't save you from whole classes of attack vectors or misconfigurations that open you up to nasty surprises. Docker is great, just don't think of it as a sandbox to run untrusted code.
I hear the "Docker is not a security boundary." mantra all the time, and IIRC it was the official stance of the Docker project a long time ago, but is this really true?
Of course if you have a kernel exploit you'd be able to break out (this is what gvisor mitigates to some extent), nothing seems to really protect against rowhammer/memory timing style attacks (but they don't seem to be commonly used). Beyond this, the main misconfigurations seem to be too wide volume bindings (e.g. something that allows access to the docker control socket from inside the container, or an obviously stupid mount like mounting your root inside the container).
Am I missing something?
that's a really good point... but I think 99% of docker users believe it is a sandbox and treat it as such.
And not without cause. We've been pitching docker as a security improvement for well over a decade now. And it is a security improvement, just not as much as many evangelists implied.
Must depend on who you've been talking to. Docker's not been pitched for security in the circles I run in, ever.
Not 99%. Many people run a hypervisor and then a VM just for Docker.
Attacker now needs a Docker exploit and then a VM exploit before getting to the hypervisor (and, no, pwning the VM ain't the same as pwning the hypervisor).
Agreed - this is actually pretty common in the Proxmox realm of hosters. I segment container nodes using LXC, and in some specific cases I'll use a VM.
Not only does it allow me to partition the host for workloads but I also get security boundaries as well. While it may be a slight performance hit the segmentation also makes more logical sense in the way I view the workloads. Finally, it's trivial to template and script, so it's very low maintenance and allows for me to kill an LXC and just reprovision it if I need to make any significant changes. And I never need to migrate any data in this model (or very rarely).
it is a sandbox against unintentional attacks and mistakes (sudo rm -rf /)
but will not stop serious malware
Virtual machines are treated as a security boundary despite the fact that with enough R&D they are not. Hosting minecraft servers in virtual machines is fine, but not a great idea if they’re cohosted on a machine that has billions of dollars in crypto or military secrets.
Docker is pretty much the same but supposedly more flimsy.
Both have non-obvious configuration weaknesses that can lead to escapes.
Yeah, but why would somebody co-host military secrets or billions of dollars? It's a bit of a stretch.
I think you’re missing the point, which was that high value targets adjacent to soft targets make escapes a legitimate target, but in low value scenarios vm escapes aren’t worth the R&D
but if you can do it at scale it might still be worth it, like owning thousands of machines
Firstly, the attacker just wants to mine Monero with CPU, they can do that inside the container.
Second, even if your Docker container is configured properly, the attacker gets to call themselves root and talk to the kernel. It's a security boundary, sure, but it's not as battle-tested as the isolation of not being root, or the isolation between VMs.
Thirdly, in the stock configuration processes inside a docker container can use loads of RAM (causing random things to get swapped to disk or OOM killed), can consume lots of CPU, and can fill your disk up. If you consider denial-of-service an attack, there you are.
Fourthly, there are a bunch of settings that disable the security boundary, and a lot of guides online will tell you to use them. Doing something in Docker that needs to access hot-plugged webcams? Hmm, it's not working unless I set --privileged - oops, there goes the security boundary. Trying to attach a debugger while developing and you set CAP_SYS_PTRACE? Bypasses the security boundary. Things like that.
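A quick way to audit your own running containers for those last two footguns (a sketch; the template fields are standard docker inspect output):
docker ps -q | xargs docker inspect \
  --format '{{.Name}} privileged={{.HostConfig.Privileged}} cap_add={{.HostConfig.CapAdd}}'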
If the container is running in privileged mode you can just talk through the docker socket to the daemon on the host, spawn a new container with direct access to the root filesystem, and then change anything you want as root.
Notably, if you run docker-in-docker, Docker is probably not a security boundary. Try this inside any dind container (especially devcontainers): docker run -it --rm --pid=host --privileged -v /:/mnt alpine sh
I disagree with other commenters here that Docker is not a security boundary. It's a fine one, as long as you don't disable the boundary, which is as easy as running a container with `--privileged`. I wrote about secure alternatives for devcontainers here: https://cgamesplay.com/recipes/devcontainers/#docker-in-devc...
Containers are never a security boundary. If you configure them correctly, avoid all the footguns, and pray that there's no container escape vulnerabilities that affect "correctly" configured containers then they can be a crude approximation of a security boundary that may be enough for your use case, but they aren't a suitable substitute for hardware backed virtualization.
The only serious company that I'm aware of which doesn't understand that is Microsoft, and the reason I know that is because they've been embarrassed again and again by vulnerabilities that only exist because they run multitenant systems with only containers for isolation
You really need to use user namespaces to get this kind of security protection -- running as root inside a container without user namespaces is not secure. Yes, breakouts often require some other bug or misconfiguration but the margin for error is non-existent (for instance, if you add CAP_SYS_PTRACE to your containers it is trivial to break out of them and container runtimes have no way of protecting against that). Almost all container breakouts in the past decade were blocked by user namespaces.
Unfortunately, user namespaces are still not the default configuration with Docker (even though the core issues that made using them painful have long since been resolved).
Container escapes exist. Now the question is whether the attacker has exploited it or not, and what the risk is.
Are you holding millions of dollars in crypto/sensitive data? Better assume the machine and data is compromised and plan accordingly.
Is this your toy server for some low-value things where nothing bad can happen besides a bit of embarrassment even if you do get hit by a container escape zero-day? You're probably fine.
This attack is just a large-scale automated attack designed to mine cryptocurrency; it's unlikely any human ever actually logged into your server. So cleaning up the container is most likely fine.
I think a root container can talk to the docker daemon and launch additional containers... with volume mounts of additional parts of the file system etc. Not particularly confident about that one though.
Unintentional vulnerabilities in Docker and the kernel aside, it can only do that if it has access to the Docker API (usually through a bind mount of the Unix socket). Having access to the Docker API is equivalent to having root on the host.
Well $hit. I have been using Docker for installing NPM modules in interactive projects I was testing out. I believed Docker blocked access to the underlying host (my computer).
Thanks for mentioning it - but now... how does one deal with this?
If you didn’t mount docker.sock or any directory above it (i.e. / or /run by default) or run your containers as --privileged, you’re probably fine with respect to this angle. I’d still recommend rootless containers under unprivileged users* or VMs for extra comfort. Qubes (https://www.qubes-os.org/) is good, even if it’s a little clunkier than it could be.
* but if you’re used to bind-mounting, they’ll be a hassle
Edit: This is by no means comprehensive, but I feel compelled to point it out specifically for some reason: remember not to mount .git writable, folks! Write access to .git is arbitrary code execution as whoever runs git.
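One easy mitigation when the repo does need to be visible inside a container is mounting it read-only and giving the container its own scratch space (a sketch; paths and image are illustrative):
# source (including .git) mounted read-only, writable scratch on tmpfs
docker run --rm -it \
  -v "$PWD":/src:ro \
  --tmpfs /build \
  node:20 bash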
As sibling mentioned, unless you or the runtime explicitly mount the docker socket, this particular scenario shouldn't affect you.
You might still want to tighten things up. Just adding on the "rootless" part - running the container runtime as an unprivileged user on the host instead of root - you also want to run npm/node as unprivileged user inside the container. I still see many defaulting to running as root inside the container since that's the default of most images. OP touches on this.
For rootless podman, this will run as a user with your current uid and map ownership of mounts/volumes:
podman run -u$(id -u) --userns=keep-id
There would be, but a lot of docker containers are misconfigured or unnecessarily privileged, allowing for escape.
Also, if you've been compromised, you may have a rootkit that hides itself from the filesystem, so you can't be sure of a file's existence through a simple `ls` or `stat`.
> but a lot of docker containers are misconfigured or unnecessarily privileged, allowing for escape
Honestly, citation needed. Very rare unless you're literally giving the container access to write to /usr/bin or other binaries the host is running, to reconfigure your entire /etc, access to sockets like docker's, or some other insane level of over reach I doubt even the least educated docker user would do.
While of course they should be scoped properly, people act like some elusive 0-day container escape will get used on their minecraft server or personal blog that has otherwise sane mounts, non-admin capabilities, etc. You arent that special.
As a maintainer of runc (the runtime Docker uses), if you aren't using user namespaces (which is the case for the vast majority of users) I would consider your setup insecure.
And a shocking number of tutorials recommend bind-mounting docker.sock into the container without any warning (some even tell you to mount it "ro" -- which is even funnier since that does nothing). I have a HN comment from ~8 years ago complaining about this.
Half the vendor software I come across asks you to mount devices from the host, add capabilities or run the container in privileged mode because their outsourced lowest bidder developers barely even know what a container is. I doubt even the smallest minority of their customers protest against this because apparently the place I work at is always the first one to have a problem with it.
I've seen many articles with `-v /var/run/docker.sock:/var/run/docker.sock` without a scary warning.
>there still would need to be a vulnerability in docker for it to “attack” the host, or am I missing something?
Not necessarily a vulnerability per se. A bridged adapter, for example, lets you do a lot - a few years ago there was a story about how a guy got root in a container and, because the container used a bridged adapter, he was able to intercept traffic of account info updates on GCP.
Docker containers with root have rootish rights on the host machine too because the userid will just be 0 for both. So if you have, say, a bind mount that you play fast and loose with, the docker user can create 0777 files outside the docker container, and now we're almost done. Even worse if "just to make it work" someone runs the container with --privileged and then makes the terminal mistake of exposing that container to the internet.
Can you explain this a bit further? Wouldn't that 0777 file outside docker be still executed inside the container and not on the host?
I believe they meant you could create an executable that is accessible outside the container (maybe even as setuid root one), and depending on the path settings, it might be possible to get the user to run it on the host.
Imagine naming this executable "ls" or "echo" and someone having "." in their path (which is why you shouldn't): as long as you do "ls" in this directory, you've run compromised code.
There are obviously other ways to get that executable to be run on the host, this is just a simple example.
Another example is they would enumerate your directories and find the names of common scripts and then overwrite your script. Or to be even sneakier, they can append their malicious code to an existing script in your filesystem. Now each time you run your script, their code piggybacks.
OTOH if I had written such a script for linux I'd be looking to grab the contents of $(hist) $(env) $(cat /etc/{group,passwd})... then enumerate /usr/bin/ /usr/local/bin/ and the XDG_{CACHE,CONFIG} dirs - some plaintext credentials are usually here.
The $HOME/.{aws,docker,claude,ssh}
Basically the attacker just needs to know their way around your OS. The script enumerating these directories is the 0777 script they were able to write from inside the root access container.
If your chosen development environment supports it, look into distroless or empty base containers, and run as --read-only if you can.
Go and Rust tend to lend themselves to these more restrictive environments a bit better than other options.
Either docker or a kernel level exploit. With non-VM containers, you are sharing a kernel.
Interesting that this got posted today, I also have a server on Hetzner (although I don't think it's relevant) and noticed yesterday that a Monero miner had been installed.
Luckily for me, the software I had installed[1] was in an LXC container running under Incus, so the intrusion never escaped the application environment, and the container itself was configured with low CPU priority so I didn't even notice it until I tried to visit the page and it didn't load.
I looked around a bit and it seemed like an SSH key had been added under the root user, and there were some kind of remote management agents installed. This container was running Alpine so it was pretty easy to identify what processes didn't belong from a simple ps output of the remaining processes after shutting down the actual web application.
In the end, I just scrapped the container, but I did save it in case I ever feel like digging around (probably not). In the end I did learn some useful things:
- It's a good idea to assume your system will get taken over, so ensure it's isolated and suitably resource constrained (looking at you, pay-as-you-go cloud users).
- Make sure you have snapshots and backups, in my case I do daily ZFS snapshots in Incus which makes rolling back to before the intrusion a breeze.
- While ideally anything compromised should be scrapped, rolling back, locking it down and upgrading might be OK depending on the threat.
Regarding the miner itself:
- from what I could see in its configuration, it hadn't actually been correctly configured, so it's possible they do some kind of benchmark and just leave the system silently compromised if it's not "worth it"; they still have a way in to use it for other purposes.
- no attempt had been made at file system obfuscation, which is probably the only reason I really discovered it. There were literally folders in /root lying around with the word "monero" in them, this could have been easily hidden.
- if they hadn't installed a miner and just silently compromised the system, leaving whatever running on it alone (or even doing a better job at CPU priority), I probably never would have noticed this.
Not proofread by a human. It claims more than once the vulnerability was related to Puppeteer. Hallucination!
"CVE-2025-66478 - Next.js/Puppeteer RCE)"
TFA mentions it’s mostly a transcript of a Claude session literally in the first paragraph.
That was added as an edit. It does not cover the inaccuracies contained within. It should more realistically say "this article was generated by an LLM and may contain several errors which I didn't bother to find or correct."
Good job.
Examples like this are why I don’t run a VPS.
I could definitely do it (I have, in the past), but I’m a mediocre admin, at best.
I prefer paying folks that know their shit to run my hosting.
Recently, those Monero miners were installing themselves everywhere that had a vulnerable React 19. I had exactly the same problem.
I love mining malware - it's reasonably visible and causes almost no damage. Essentially, it's like a bug bounty program that you don't have to manage, doesn't generate costly bullshit reports, and only costs you a few bucks of electricity when a vulnerability is found.
If you have decent network or process level monitoring, you're likely to find it, while you might not realize the vulnerable software itself or some stealthier, more dangerous malware that might exploit it.
I had to nuke my Oracle Cloud box that runs my Umami server. It got hit. Was a good excuse to upgrade version and upgrade all my backup systems etc. Lost a few hours of data while it was returning 500 errors.
Hahaha OP could be in deep trouble depending on what types of creds/data they had in that container. I had replied to a child comment but I figure best to reply to OP.
From the root container, depending on volume mounts and capabilities granted to the container, they would enumerate the host directories and find the names of common scripts and then overwrite one such script. Or to be even sneakier, they can append their malicious code to an existing script in the host filesystem. Now each time you run your script, their code piggybacks.
OTOH if I had written such a script for linux I'd be looking to grab the contents of $(hist) $(env) $(cat /etc/{group,passwd})... then enumerate /usr/bin/ /usr/local/bin/ and the XDG_{CACHE,CONFIG} dirs - some plaintext credentials are usually here. The $HOME/.{aws,docker,claude,ssh} Basically the attacker just needs to know their way around your OS. The script enumerating these directories is the 0777 script they were able to write from inside the root access container.
Luckily umami in docker is pretty compartmentalized. All data is in the DB, and the DB runs in another container. The biggest thing is the DB credentials. The default config requires no volume mounts so no worries there. It runs unprivileged with no extra capabilities. IIRC I don't think the container even has bash; a few of the exploits that tried to run weren't able to due to lack of bash in the scripts they ran.
Deleting and remaking the container will blow away all state associated with it. So there isn't a whole lot to worry about after you do that.
Nothing in that container luckily, just what Umami needed to run, so no creds at all. Thanks for the info though!
I wouldn't trust that boot image or storage again, I'd nuke it for peace of mind.
That said, do you have an image of the box or a container image? I'm curious about it.
Yeah I did consider just killing it, I'm going to keep an eye on it for a few days with a gun to it just in case.
I was lucky in that my DB backups were working so all my persistence was backed up to S3. I think I could stand up another one in an hour.
Unfortunately I didn't keep an image no. I almost didn't have the foresight to investigate before yeeting the whole box into the sun!
Enable connection tracking (if it's not already) and keep looking at the conntrack entries. That's a good way to spot random things doing naughty stuff.
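If you want to try that, the conntrack CLI from conntrack-tools makes it easy to eyeball (a sketch; each flow appears once per direction, but outliers still stand out):
sudo conntrack -L 2>/dev/null \
  | grep -oE 'dst=[0-9a-f.:]+' \
  | sort | uniq -c | sort -rn | head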
Sure does seem like the primary outcome of cryptocurrencies being released onto the world has been criminals making money.
Sure does seem like the primary outcome of email being released onto the world has been criminals making money.
Criminals and the porn industry are almost invariably early adopters of new technologies. For better or worse their use-cases are proof-of-concepts that get expanded and built on, if successful, by more legitimate industries.
Re: the Internet.
Re: Peer-to-peer.
Re: Video streaming.
Re: AI.
What is the average length of time for new tech to escape porn and crime and integrate into real applications? Longer than 15 years?
How were criminals the early adopters of Internet, Video streaming and AI?
What's considered nowadays the best practice (in terms of security) for running selfhosted workloads with containers? Daemon less, unprivileged podman containers?
And maybe updating container images with a mechanism similar to renovate with "minimumReleaseTime=7days" or something similar!?
As always: never run containers as root. Never expose ports to the internet unless needed. Never give containers outbound internet access. Run containers that you trust and understand, and not random garbage you find on the internet that ships with ancient vulnerabilities and a full suite of tools. Audit your containers, scan them for vulnerabilities, and nuke them from orbit on the regular.
Easier said than done, I know.
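A hedged starting point for the "no root, no outbound" parts (names and UID are placeholders):
# an internal network has no route out to the wider internet
docker network create --internal backend
docker run -d \
  --name worker \
  --user 10001:10001 \
  --cap-drop ALL \
  --security-opt no-new-privileges:true \
  --network backend \
  myapp:latest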
Podman makes it easier to be more secure by default than Docker. OpenShift does too, but that's probably taking things too far for a simple self hosted app.
You’ll set yourself up for success if you check the dependencies of anything you run, regardless of it being containerised. Use something like Snyk to scan containers and repositories for known exploits and see if anything stands out.
Then you need to run things with as little privilege as possible. Sadly, Docker and containers in general are an anti-pattern here because they’re about convenience first, security second. So the OP should have run the containers as read-only with tight resource limits and ideally IP restrictions on access if it’s not a public service.
Another thing you can do is use Tailscale, or something like it, to keep things behind a zero trust, encrypted access model. Not suitable for public services of course.
And a whole host of other things.
regardless of firewalls and best practices and all, i'd just put all sorts of admin-related stuff behind tailscale (or similar), including ssh. hetzner allows you to have ssh closed in your public ip and still open a terminal via the console if ssh-on-tailscale fails. for the web stuff you should do a similar trick. blog and public websites on the public address, while admin stuff goes on tailscale. and if you do it nicely with letsencrypt you can even have nice hostnames pointing to your private stuff.
I am not an expert in incident response, but I thought the safe way was to image the affected machine, turn it off, take a clean machine, boot a clean OS image with the affected image mounted read only in a VM, and do the investigation like that?
Assume that the malware has replaced system commands, possibly used a kernel vulnerability to lie to you to hide its presence, so do not do anything in the infected system directly ?
Maybe if your company infrastructure is affected, but not the server you use to host your side projects on with “coolify”, unless IT security is your hobby.
TY for the article. You made me look into my server, and also found some strange activity on my docker containers. Good call!
I find it interesting that the recent trend of moving to self-hosted solutions is sparking this rediscovery of security issues that come with self-hosting. One more time and it will be a cycle!
What trend? All I'm seeing here is further centralisation:
Search engines try to fight slop results with collateral damage mostly in small or even personal websites. Restaurants are happy to be on one platform only: Google Maps. Who needs an expensive website if you're on there and someone posts your menu as one of the pictures? (Ideally an old version so the prices seem cheaper and you can't be pinned down for false advertising.)
Open source communities use Github, sometimes Gitlab or Codeberg, instead of setting up a Forgejo (I host a ton of things myself but notice that the community effect is real and also moved away from self hosting a forge). The cherry on top is when projects use Discord chats as documentation and bug reporting "form". Privacy people use Signal en masse, where Matrix is still as niche as it was when I first heard of it.
The binaries referred to as open source just because they're downloadable can be found on huggingface, even the big players use that exclusively afaik. Some smaller projects may be hosted on Github but I have yet to see a self-hosted one. Static websites go on (e.g. Github) Pages and back-ends are put on Firebase. Instead of a NAS, individuals as well as small businesses use a storage service like Onedrive or Icloud. Some more advanced users may put their files on Backblaze B2.
Those who newly dip their toes in self-hosting increasingly use a relay server to reach their own network, not because they need it but to avoid dealing with port forwarding or setting up a way to privately reach internal services. Security cameras are another good example of this: you used to install one, set a password, and forward the port so you can watch it outside the home. Nowadays people expect that "it just works" on their phone when they plug it in, no matter where they are. That this relies on Google/Amazon and that they can watch all the feeds is acceptable for the convenience.
And that's all not even mentioning the death of the web: people who don't use websites anymore the way they were meant (as hyperlinked pages) but work with an LLM as their one-stop shop.
Not that the increased convenience, usability, and thus universal accessibility of e.g. storage and private chats is necessarily bad, but the trend doesn't seem to me as you seem to think it is
I can't think of any example of something that became increasingly often self-hosted instead of less across the last 1, 5, or 10 years
If you see a glimmer of hope for the distributed internet, do share, because I feel increasingly like the last person among my friends who hosts their own stuff.
it's slow, but there are people slowly turning towards a more decoupled internet, the problem is that you still *have* to use cloudflare (or any kind of http proxy), it's just a basic requirement that you can't avoid for anything that people would be interested in keeping offline.
I've been on the receiving end of attacks that were reported to be the size of more than 10tbps I couldn't imagine how I would deal with that if I didn't have a 3rd party providing such protection - it would require millions $$ a year just in transit contracts.
There is an increasing amount of software that attempts to reverse this, but as someone from https://thingino.com/ said: open source is riddled with developers that died of starvation (nobody donates to open source projects).
After reading some comments: this probably goes without saying, but one should be very careful about what to expose to the internet. Sounds like the analytics service maybe could have been available only over VPN (or similar, like mTLS etc.)
And for basic web sites, it's much better if it requires no back-end.
Every service exposed increases risk and requires additional vigilance to maintain. Which means more effort.
I was surprised by the same thing: a tool I run uses Next.js without me knowing it before.
I noticed this, because Hetzner forwarded me an email from the German government agency for IT security (BSI) that must have scanned all German based IP addresses for this Next.js vulnerability. It was a table of IP addresses and associated CVEs.
Great service from their side, and a lot of thanks to the German taxpayers who fund my free vulnerability scans ;)
The world will be a better place when all crypto just disappears
$ sudo ufw default deny incoming
$ sudo ufw default allow outgoing
$ sudo ufw allow ssh
$ sudo ufw allow 80/tcp
$ sudo ufw allow 443/tcp
$ sudo ufw enable
As a user of iptables this order makes me anxious. I used to lock myself out of the server many times because of first blocking, then adding exceptions. I can see that this is different here, as the last command commits the rules...
I had this one too: I first denied all incoming requests and was about to allow SSH, but my SSH connection dropped :) Fortunately, I was able to restore the VM with the provider's VM console.
I took issue with this paragraph of the article, on account of several pieces of misinformation, presumably courtesy of Claude hallucinations:
> Here’s the test. If /tmp/.XIN-unix/javae exists on my host, I’m fucked. If it doesn’t exist, then what I’m seeing is just Docker’s default behavior of showing container processes in the host’s ps output, but they’re actually isolated.
1. Files of running programs can be deleted while the program is running. If the program were trying to hide itself, it would have deleted /tmp/.XIN-unix/javae after it started. The nonexistence of the file is not a reliable source of information for confirming that the container was not escaped.
2. ps shows program-controlled command lines. Any program can change what gets displayed here, including the program name and arguments. If the program were trying to hide itself, it would change this to display `login -fp ubuntu` instead. This is not a reliable source of information for diagnosing problems.
It is good to verify the systemd units and crontab, and since this malware is so obvious, it probably isn't doing these two hiding methods, but information-stealing malware might not be detected by these methods alone.
Later, the article says "Write your own Dockerfiles" and gives one piece of useless advice (using USER root does not affect your container's security posture) and two pieces of good advice that don't have anything to do with writing your own Dockerfiles. "Write your own Dockerfiles" is not useful security advice.
> "Write your own Dockerfiles" is not useful security advice.
I actually think it is. It makes you more intimate with the application and how it runs, and can mitigate one particular supply-chain security vector.
Agreeing that the reasoning is confused but that particular advice is still good I think.
OK, so am I right that this guy had a completely unsecured metrics endpoint running on his server? Why would you do that in the first place?
I’m sorry you went through this.
But I am interested in the monero aspect here.
Should I treat this as some datapoint on monero’s security having held up well so far?
The main reason to use Monero for stuff like this is their mining algo. They made big efforts and changed algorithms several times to make and keep it GPU and ASIC resistant.
If you used the server to mine Bitcoin, you would make approximately zero (0) profit, even if somebody else pays for the server.
But also yes, Monero has technically held up very well.
Didn't Qubic manage to attack Monero?
They tried to do a 51% attack which at worst could result in double spends. They have never reached more than 35%.
The attack did not and could not compromise or weaken Monero's privacy and anonymity features.
You can run Docker Scout on one repo for free, and that would alert you that something was using Next.js and had that CVE. AWS ECR has pretty affordable scanning too: 9 cents/image and 1 cent/rescan. Continuous scanning even for these home projects might be worth it.
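For reference, a minimal local version of that check, assuming the Scout CLI plugin is installed and with the image name as a placeholder (Trivy is an open-source alternative that does roughly the same thing):
docker scout cves example/analytics:latest
trivy image example/analytics:latest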
> Here’s the test. If /tmp/.XIN-unix/javae exists on my host, I’m fucked. If it doesn’t exist, then what I’m seeing is just Docker’s default behavior of showing container processes in the host’s ps output, but they’re actually isolated.
/tmp/.XIN-unix/javae &
rm /tmp/.XIN-unix/javae
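# the process keeps running from the now-deleted inode, so the "does the file exist" test comes back clean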
This article’s LLM writing style is painful, and it’s full of misinformation (is Puppeteer even involved in the vulnerability?).
Yeah fair, I asked Claude to help because honestly this was a little beyond my writing skills. I'm real though. Sorry. Will change
Seconding what others have said about preferring to read bad human writing. And I don’t want to pick on you – this is a broadly applicable message prompted by a drop in the bucket – but please don’t publish articles beyond your ability to fact check. Just write what you actually know, and when you’re making a guess or you still have open questions at the end of your investigation, be honest about that. (People make mistakes all the time anyway, but we’re in an age where confident and detailed mistakes have become especially accessible.)
Hi Jake! Cool article, and it's something I'll keep in mind when I start giving my self-hosted setup a remodel soon. That said, I have to agree with the parent comment and say that the LLM writing style dulled what would otherwise have been a lovely sysadmin detective work article and didn't make me want to explore your site further.
I'm glad you're up to writing more of your own posts, though! I'm right there with you that writing is difficult, and I've definitely got some posts on similar topics up on my site that are overly long and meandering and not quite good, but that's fine because eventually once I write enough they'll hopefully get better.
Here's hoping I'll read more from you soon!
Thanks for the encouragement! I find it difficult to write articles beyond simply stating a series of facts.
I tried handwriting https://blog.jakesaunders.dev/schemaless-search-in-postgres/ but I thought it came off as rambling.
Maybe I'll have a go at redrafting this tomorrow in non LLM-ese.
This is much more pleasant to read and it gives a great insight into your actual thought process. Thanks for sharing, and great writeup.
I fixed it, apologies for the misinformation.
It still says:
> IT NEVER ESCAPED.
You haven't confirmed this (at least from the contents of the article). You did some reasonable spot checks and confirmed/corrected your understanding of the setup. I'd agree that it looks likely that it did not escape or gain persistence on your host but in no way have you actually verified this. If it were me I'd still wipe the host and set up everything from scratch again[0].
Also, your part about the container user not being root is still misinformed and/or misleading. The user inside the container, the container runtime user, and whether the container is privileged are three different things that are being talked about as one.
Also, see my comment on firewall: https://news.ycombinator.com/item?id=46306974
[0]: Not necessarily drop-everything-you-do urgently, but the next time you get some downtime to do it calmly. Recovering like this is a good exercise anyway, to make sure you can if you get a more critical situation in the future where you really need to. It will also be less time and work than actually confirming that the host is uncontaminated.
I did see your comment on Firewall, and you're right about the escape. It seems safe enough for now. Between the hacking and accidentally hitting the front page of HN it's been a long day.
I'm going to sit down and rewrite the article and take a further look at the container tomorrow.
Before rewriting the article, roll out a new server. Seriously. It seems you do not have the skills yet to do a proper audit. It’s better to roll out a pristine server. If that is a lot of work, it is a good moment to learn about declarative system configuration.
At any rate, this happening to you sucks! Hugs from a fellow HN user, I know that things like this can suck up a lot of time and energy. It's courageous to write about such an incident, and I think it's useful to a lot of other people too, kudos!
Hey, thanks for taking the time to share your learnings and engage. I'm sure there are HN readers out there who will be better off for it alongside you!
(And good to hear you're leaving the LLMs out of the writing next time <3)
I still see Puppeteer mentioned several times in your post and don't understand what that has to do with Umami, nextjs, and/or CVE-2025-66478.
If I’m not wrong, a Hetzner VM has no firewall enabled by default. If you are coming from providers with different default settings, that might bite you. Containers that you thought were not open to the internet have been open all this time. Two firewalls failed here: Docker bypassed ufw, and there was no external firewall either.
You have to define a firewall policy and attach it to the VM.
Does anybody know how to just list the processes running inside a single container from within that container?
And isn’t it a design flaw if you can see all processes from inside a container? This could provide useful information for escaping it.
I had something similar, but I was lucky enough to catch it myself. I've used SSH for years, and never knew that it - by default - also accepts password logins. Maybe dumb on my part, but there you go...
This is a perfect example of how honeypots, anti-malware organizations, and blacklists are so important to security.
Even if you are an owasp member who reads daily vulnerability reports, it's so easy to think you are unaffected.
Something similar happened to me last year, it was with an unsecured user account accessible over ssh with password authentication, something like admin:admin that I forgot about.
At least that's what I think happened because I never found out exactly how it was compromised.
The miner was running as root and its file was even hidden when I ran ls! So I didn't understand what was happening; it was only after restarting my VPS with a rescue image and mounting the root filesystem that I found out the file I was seeing in the process list did indeed exist.
This nextjs vulnerability is gonna be exploited everywhere because it's so easy. This is just the start
I didn’t think it was possible for me to dislike nextjs any more, but here we are. It’s the Sharepoint of the JS ecosystem.
I wonder in a case like this how hard it would be to "steal" the crypto that you've paid to mine. But I assume these people are probably smart enough that everything is instantly forwarded to their C&C server to prevent that.
Unless you know the wallet's seed phrase you cannot access the mined funds. At best you could replace their wallet with your own to mine for yourself.
This Monero mining also happened with one of my VPS over at interserv.net, when I forgot to log out of the root session in the web-based terminal console and just closed the browser tab instead.
It has since been fixed: Lesson learned.
> "No more exposed PostgreSQL ports, no more RabbitMQ ports open to the internet."
Yikes. I would still recommend a server rebuild. That is _not_ a safe configuration in 2025, whatsoever. You are very likely to have a much better engineered persistent infection on that system.
Also, apparently they run an IoT platform for other users on the same host that can not only visualize sensors, but also trigger (mains-powered) devices.
The right thing to do is to roll out a new server (you have a declarative configuration, right?), migrate pure data (or better, get it from the latest backup), and take the attacked machine off the internet to do a full audit. Both to learn what compromises there are for the future and to inform users of the IoT platform if their data has been breached. In some countries you are even required by law to report breaches. IANAL of course.
Would "user root" without --privileged and excessive mounts have enabled a container escape, or just exposed additional attack surface that potentially could have allowed the attacker to escape if they had another exploit?
They would need a vulnerability in containerd or the kernel to escape the sandbox and being root in the sandbox would give them more leeway to exploit that vulnerability.
But if they do have a vulnerability and manage to escape the sandbox then they will be root on your host.
Running your processes as an unprivileged user inside your containers reduces the possibility of escaping the sandbox; running your containers themselves as an unprivileged user (rootless podman or docker for example) reduces the attack surface if they do manage to escape the sandbox.
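To make that concrete, a rough sketch of the kind of flags involved (image name and UID are placeholders, and whether each one is viable depends on the app):
# unprivileged user inside the container, all capabilities dropped, setuid-style escalation blocked
docker run -d --user 1000:1000 --cap-drop ALL --security-opt no-new-privileges example/analytics:latest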
The first step I would take is running podman instead of Docker to prevent container escapes. Podman can be run truly rootless and doesn't mess with your firewall. Next I would drop all caps if possible.
What's the difference between running Podman and running Docker in rootless mode? (Other than Docker messing with the firewall, which apparently OP doesn't know about… yet). I understand Podman doesn't require a daemon, but is that all there is to it, or is there something I'm missing?
The runtime has been designed from the ground up to be run daemonless and rootless. They also have a K8s runtime that has an extremely small surface, just enough to be K8s compliant.
Podman also has great integration with systemd. With that you could use a socket-activated systemd unit and stick the socket inside the container, instead of giving the container any network at all. And even if you want networking in the container, the podman folks developed slirp4netns, which is user-space networking, and now something even better: passt/pasta.
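To give a flavour of that systemd integration, a rough sketch of a Quadlet unit for a rootless user (image and port are placeholders; socket activation would be a further step on top of this). Dropped into ~/.config/containers/systemd/analytics.container, it shows up as a normal systemd --user service after a daemon-reload:
[Unit]
Description=Analytics container

[Container]
Image=ghcr.io/example/analytics:latest
PublishPort=127.0.0.1:3000:3000

[Install]
WantedBy=default.target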
Rootless docker is more compatible than podman I found. I experienced crash dumps in say mssql with podman, but not with rootless docker.
Also rootless docker does not bypass ufw like rootful docker does.
Sorry to hear you got hacked.
I know we aren't supposed to rely on containers as a security boundary, but it sure is great hearing stories like this where the hack doesn't escape the container. The more obstacles the better I guess.
Hacks are humans. For like, ten more minutes anyway.
If the human involved can’t escalate, the hack can’t.
I didn't see it mentioned, but wouldn't having a RO root filesystem with writable directories mounted noexec also have been sufficient?
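Roughly, that setup looks like this in Docker (image name is a placeholder, and the app has to tolerate a read-only filesystem):
# read-only root filesystem, writable but non-executable /tmp
docker run -d --read-only --tmpfs /tmp:rw,noexec,nosuid,size=64m example/analytics:latest
It narrows things rather than closing them, since a payload can still run fileless from memory or via an interpreter that's already in the image.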
As an aside, if you're using a Hetzner VPS for Umami you might be over-specced. I just cut my Hetzner bill by $4/mo by moving my Umami box to one of the free Oracle Cloud VPS after someone on here pointed out the option to me. Depends whether this is a hobby thing or something more serious, but that option is there.
All fine and well, but Oracle will threaten to turn off your instance if you don't maintain a reasonable average CPU usage on the free hosts, and will eventually do so abruptly.
This became enough of a hassle that I stopped using them.
Do you mean if it’s idle, or if it’s maxed out? I’ve had a few relatively idle free-tier VMs with Oracle and I’ve not received any threats of shutoff over the last 3 years I’ve had them online.
I assumed the same, but as long as you keep a credit card on file apparently they will let you idle it too. I went in and set my max budget at $1/mo and set alerts too, just in case.
I've got a whole Hetzner EX41 bare metal server, as opposed to a VPS. It's got like 20 services on it.
But yeah it is massively overspecced. Makes me feel cool load testing my go backend at 8000 requests per second though!
I pay for Hetzner because it’s an EU based, sane company without a power hungry CEO.
The manageability of having everything on one host is kind of nice at that scale, but yeah you can stack free tiers on various providers for less.
You might want to harden those outbound firewall rules as another step. Did the Umami container need the ability to initiate connections? If not, that would eliminate the ability to do the outbound scans.
It could also prevent something from exfiltrating sensitive data.
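For rootful Docker, the documented place for rules like that is the DOCKER-USER chain, which Docker leaves alone. A rough sketch, assuming the default 172.17.0.0/16 bridge subnet (the rules also need to be persisted across reboots, e.g. with iptables-persistent):
# replies to existing connections are still allowed; anything the containers try to initiate outbound is dropped
iptables -I DOCKER-USER 1 -s 172.17.0.0/16 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -I DOCKER-USER 2 -s 172.17.0.0/16 -j DROP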
Is mining via CPU even worthwhile for the hackers? I thought ASICs dominated mining
ASICs do dominate Bitcoin mining but Monero's POW algorithm is supposed to be ASIC resistant. Besides, who cares if it's efficient when it's someone else's server?
Monero's proof of work (RandomX) is very ASIC-resistant and although it generates a very small amount of earnings, if you exploit a vulnerability like this with thousands or tens of thousands of nodes, it can add up (8 modern cores 24/7 on Monero would be in the 10-20c/day per node range). OP's VPS probably generated about $1 for those script kiddies.
Hit 1000 servers and it starts adding up. Especially if you live somewhere with a low cost of living.
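Rough back-of-the-envelope with the numbers above: a single node at 10-20c/day is only around $35-75 a year, but the fleet is the point:
1,000 compromised nodes x ~$0.15/day ≈ $150/day ≈ $4,500/month, at essentially zero cost to the attacker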
So $40 a year? Does that imply all Monero is mined like this, because it's clearly not cost effective at all to mine legitimately?
I think so, but it is hard to say. Could be a lot of people with extra power (or stolen power), but their own equipment. I mine myself with waste solar power
This is the PoW scheme that Monero currently uses:
> RandomX utilizes a virtual machine that executes programs in a special instruction set that consists of integer math, floating point math and branches.
> These programs can be translated into the CPU's native machine code on the fly (example: program.asm).
> At the end, the outputs of the executed programs are consolidated into a 256-bit result using a cryptographic hashing function (Blake2b).
I doubt that anyone managed to create an ASIC that does this more efficiently and cost-effectively than a basic CPU. So, no, probably no one is mining Monero using an ASIC.
Yes, for Monero it is the only real viable option. I'd also assume that the OP's instance is one of many other victims whose total mining might add up to a significant amount of crypto.
Its easily worth it as they are not spending any money on compute or power.
If they can enslave 100s or even 1000s of machines mining XMR for them, it's easy money if you set aside the legality of it.
Hard for it not to be worthwhile, since it’s free for them. Same automated exploit run across the entire internet.
Optimal hardware costs money. Easy to hack machines are free and in nearly unlimited numbers.
When your cost per host is $0, even $5 / mo / hacked host profit can make for an ok business
If the effectiveness of mining is represented as profit divided by the cost of running the infrastructure, then a CPU that someone else is paying for is worth it as long as the profit is greater than zero.
I don't use Docker for my containers at home, but I take it from the concern that user namespacing is not employed by them or something?
If you're root in a namespace and manage to escape, you can have root privileges outside of it.
Are you referring to user namespaces and, if so, how does that kind of break out to host root work? I thought the whole point of user namespaces was your UID 0 inside the container is UID 100000 or whatever from the perspective of outside the container. Escaping the container shouldn't inherently grant you ability to change your actual UID in the host's main namespace in that kind of setup, but I'm not sure Docker actually leverages user namespaces or not.
E.g. on my systemd-nspawn setup with --private-users=pick (enables user namespacing) I created a container and gave it a bind mount. From the container it appears like files in the bind mount created by the container namespace's UID 0 are owned by UID 0 but from outside the container the same file looks owned by UID 100000. Inverted, files owned by the "real" UID 0 on the host look owned by 0 to the host but as owned by 65534 (i.e. "nobody") from the container's perspective. Breaking out of the container shouldn't inherently change the "actual" user of the process from 100000 to 0 any more than breaking out of the container as a non-0 UID in the first place - same as breaking out of any of the other namespaces doesn't make the "UID 0" user in the container turn into "UID 0" on the host.
Users in user namespaces are granted capabilities that root has. User namespaces themselves need to be locked down to prevent that, but if a user with root capabilities escapes the namespace, they have those capabilities on the host.
They also expose kernel interfaces that, if exploited, can lead to the same.
In the end, namespaces are just for partitioning resources, using them for sandboxes can work, but they aren't really sandboxes.
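For what it's worth, Docker can do this remapping too, it's just off by default. A rough sketch of enabling it in /etc/docker/daemon.json (it doesn't play well with host PID/network namespaces and some volume permission setups, so treat it as something to test, not a drop-in):
{
  "userns-remap": "default"
}
With that set, UID 0 inside containers maps to a subordinate UID range owned by the dockremap user on the host.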
This article is very interesting at first but I once again get disappointed after reading clear signs of AI like "Why this matters" and "The moment of truth", and then the whole thing gets tainted with signs all over the place.
Yeah personally I’d much rather read a poorly constructed article with actually interesting content than the same content put into the formulaic AI voice.
Article's been edited:
>Edit: A few people on HN have pointed out that this article sounds a little LLM generated. That’s because it’s largely a transcript of me panicking and talking to Claude. Sorry if it reads poorly, the incident really happened though!
For what it's worth, this is not an excuse, and I still don't appreciate being fed undisclosed slop. I'm not even reading it.
a) containers don't contain
b) if you want to limit your hosting environment to only the language/program you expect to run you should provision with unikernels which enforce it
Still confused what I am supposed to do to avoid all this.
Learning to manage an operating system in full, and having a healthy amount of paranoia, is a good first step.
Then, write all your own software to please the paranoia for the next 15 years.
Next year is the 5th year of my current personal project. Ten to go.
Well written blog post. Well done, I've learned something new.
Was dad notified of the security breach? If not he may want to consider switching hosting providers. Dad deserves a proper LLM-free post mortem.
Hahaha, I did tell him this afternoon. This is the bloke who has the same password for all his banking apps despite me buying him 1Password though. The imminent threat from RCEs just didn't land.
Buying someone 1Password, or the like, and calling it good is not enough. People who use password managers forget how long it takes to visit every site you use, create that site's record, update the password to a secure one, then log out and back in with the new password to test that it works. For a lot of people who've had a password manager bought for them, they're going to be over it after the second site. Just think about how many TikTok videos they could have been watching instead.
Yeah, mom and I sat down one afternoon and we changed all of her passwords to long, secure ones, generated by 1Password. It was a nice time! It also helped her remember all of the different services she needs to access, and now they're all safely stored with strong passwords. And it was a nice way to connect and spend some time together. :)
tl;dr: He got hacked, but the damage was restricted to one docker container running Umami (which is built on top of Next.js). Thankfully, he was running the docker container as a non-privileged, non-root user, which saved him big time, since the attack surface was limited to the container and could not reach the entire host/filesystem.
Is there ever a reason someone should run a docker container as root ?
If you're using the container to manage stuff on the host, it'll likely need to be a process running as root. I think the most common form of this is Docker-in-Docker style setups where a container is orchestrating other containers directly through the Docker socket.
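Which usually looks something like this (image name is a placeholder); worth remembering that handing a container the Docker socket is effectively handing it root on the host:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock example/ci-runner:latest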
You're lucky that Hetzner didn't delete your server and terminate your account.
With which justification?
Cryptocurrency software usage. It is strictly against their policy. Afaik, their policy does not differentiate between voluntary and involuntary use.
They have done it to others.
Whew, load average of 0 here.
Only lesson seems to be use ufw! (or equivalent)
As others have mentioned, a firewall might have been useful in restricting outbound connections to limit the usefulness of the machine to the hacker after the breach.
An inbound firewall can only help protect services that aren't meant to be reachable on the public internet. This service was exposed to the internet intentionally so a firewall wouldn't have helped avoid the breach.
The lesson to me is that keeping up with security updates helps prevent publicly exposed services from getting hacked.
It makes me irrationally angry that cryptos have given a clear monetary value to raw CPU time, and a very strong incentive to create botnets.
I also run Umami, but patched once the CVE patch was released. Also, I only expose the tracking js endpoint and /api/send publicly via Caddy (though /api/send might be enough to exploit the vuln). To actually interact with the Umami UI I use Twingate (similar to Tailscale) to tunnel into the VPC locally.
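For reference, a rough sketch of that Caddy split, with the hostname and upstream as placeholders and assuming Umami's default paths (/script.js for the tracker, /api/send for events):
analytics.example.com {
    @public path /script.js /api/send
    handle @public {
        reverse_proxy 127.0.0.1:3000
    }
    handle {
        respond 403
    }
}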
so what's the point of containers here? seems only to make things less transparent and more complex to manage.
js scripts running on frameworks running inside containers
PS so I see the host ended up staying uncompromised
I still can't believe that there are so many people out here popping boxen and all they do is solve drug sudokus with the hardware. Hacks are so lame now.
> ls -la /tmp/.XIN-unix/javae
Unless run as root, this could return file not found because of missing permissions, and not just because the file doesn't actually exist, right?
> “I don’t use X” doesn’t mean your dependencies don’t use X
That is beyond obvious, and I don't understand how anyone would feel safe after reading about a CVE in a widely used technology when they run dozens of containers on their server. I have docker containers, and as soon as I read the article I went and checked, because I have no idea what technology most of them are built with.
> No more Umami. I’m salty. The CVE was disclosed, they patched it, but I’m not running Next.js-based analytics anymore.
Nonsensical reaction.
Yeah, my Umami box was hit, but the time between the CVE disclosure and my box getting smacked was incredibly low. Umami patched it very quickly. And then patched it again a second time when the second CVE dropped right after.
Nothing is immune. What analytics are you going to run? If you roll your own you'll probably leave a hole somewhere.
Just use Hetzner managed servers? Very high specs, they manage everything, and you can install a lot of languages, apps etc.
Never expose your server IP directly to the internet, vps or baremetal.
Unless you need it to be reachable from the Internet, at which point it has to be... reachable from the Internet.
Public facing services routed through a firewall or waf (cloudflare) always.
Backend access trivial with Tailscale, etc.
Stupid question probably, but: how can it not be routed through a firewall? If you have it at home, it's behind a router that should have a firewall already, right? And just forwards the one port you expose to the server?
Cloudflare can certainly do more (e.g. protect against DoS and hide your personal IP if your server is at home).
Not exposing the server IP is one practice (obfuscation) in a list of several options.
But that alone would not solve the problem of an RCE over HTTP, which is why edge proxy providers like Cloudflare[0] and Fastly[1] proactively added protections to their WAF products.
Even Cloudflare had an outage trying to protect its customers[2].
- [0] https://blog.cloudflare.com/waf-rules-react-vulnerability/
- [1] https://www.fastly.com/blog/fastlys-proactive-protection-cri...
- [2] https://blog.cloudflare.com/5-december-2025-outage/
Any server? How do you run a public website? Even if you put it behind a load balancer, the load balancer is still a “server exposed to the internet”
Public facing services routed through a firewall or waf (cloudflare) always.
Backend access trivial with Tailscale, etc.
Public IP never needs to be used. You can just leave it an internal IP if you really want.
You're going to hate this thing called DNS
Been running production servers for a long time.
DNS is no issue. External DNS can be handled by Cloudflare and their WAF. Their DNS service can obfuscate your public IP, or ideally you don't need to use it at all with a Cloudflare tunnel installed directly on the server. This is free.
Backend access trivial with Tailscale, etc.
Public IP doesn't always need to be used. You can just leave it an internal IP if you really want.
Is there a way to do that and still be able to access the server?
Yes, cloudflare tunnels do this, but I don't think it's really necessary for this.
I use them for self-hosting.
That server is still exposed to the internet on a public IP. It's just only known and routed through a 3rd party's castle.
The tunnel doesn't have to use the public IP inbound; the cloudflare tunnel connects outbound, so inbound can be entirely locked down.
If you are using Cloudflare's DNS they can hide your IP on the DNS record, but it would still have to be locked down; some folks find ways to tighten that up too.
If you're using a bare metal server it can be broken up.
It's fair that it's a 3rd party's castle. At the same time until you know how to run and secure a server, some services are not a bad idea.
Some people run pangolin or nginx proxy manager on a cheap vps if it suits their use case which will securely connect to the server.
We are lucky that many of these ideas have already been discovered and hardened by people before us.
Even when I had bare metal servers connected to the internet, I would put a firewall like pfsense or something in between.
What does the tunnel bring except DoS protection and hiding your IP? And what is the security concern with divulging your IP? Say when I connect to a website, the website knows my IP and I don't consider this a security risk.
If I run vulnerable software, it will still be vulnerable through a Cloudflare tunnel, right?
Genuinely interested, I'm always scared to expose things to the internet :-).
Many ways. Using a "bastion host" is one option, with something like wireguard or tinc. Tailscale and similar services are another option. Tor is yet another option.
The bastion host is a server, though, and would be exposed to the internet.
>Never expose your server IP directly to the internet, vps or baremetal.
Yes, of course.
Free way - sign up for a cloudflare account. Use the DNS on cloudflare; they will put their public IP in front of your www.
Level 2 is install the cloudflare tunnel software on your server and you never need to use the public IP.
Backend access securely? Install Tailscale or headscale.
This should cover most web hosting scenarios. If there's additional ports or services, tools like nginx proxy manager (web based) or others can help. Some people put them on a dedicated VPS as a jump machine.
This way the public IP becomes almost optional and can be locked down if needed. This is all before running a firewall on it.
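For anyone wondering what that actually involves, a rough sketch of the tunnel setup (tunnel name, hostname, port and paths are placeholders; the exact flow is in cloudflared's docs):
cloudflared tunnel login
cloudflared tunnel create my-tunnel
cloudflared tunnel route dns my-tunnel app.example.com
cloudflared tunnel run my-tunnel
And ~/.cloudflared/config.yml roughly looks like:
tunnel: my-tunnel
credentials-file: /home/me/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: app.example.com
    service: http://localhost:8080
  - service: http_status:404
The daemon only dials out to Cloudflare, so no inbound port has to be open at all.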
Yes, CloudFlare ZeroTrust. It's entirely free, I use it for loads of containers on multiple hosts and it works perfectly.
It's really convenient. I don't love that it's a one-of-one service, but it's a decent enough placeholder.
As in "always run a network firewall" or "keep the IP secret"? Because I've had people suggest both and one is silly.
A network firewall is mandatory.
Keeping the IP secret seems like a misnomer.
It's often possible to lock down the public IP entirely so it doesn't accept connections except what's initiated from the inside (like the cloudflare tunnel or otherwise reaching out).
Something like a Cloudflare+tunnel on one side, tailscale or something to get into it on the other.
Folks other than me have written decent tutorials that have been helpful.