Jemalloc Postmortem
jasone.github.io | 778 points by jasone 3 days ago
I understand the decision to archive the upstream repo; as of when I left Meta, we (i.e. the Jemalloc team) weren’t really in a great place to respond to all the random GitHub issues people would file (my favorite was the time someone filed an issue because our test suite didn’t pass on Itanium lol). Still, it makes me sad to see. Jemalloc is still IMO the best-performing general-purpose malloc implementation that’s easily usable; TCMalloc is great, but is an absolute nightmare to use if you’re not using bazel (this has become slightly less true now that bazel 7.4.0 added cc_static_library so at least you can somewhat easily export a static library, but broadly speaking the point still stands).
I’ve been meaning to ask Qi if he’d be open to cutting a final 6.0 release on the repo before re-archiving.
At the same time it’d be nice to modernize the default settings for the final release. Disabling the (somewhat confusingly backwardly-named) “cache oblivious” setting by default so that the 16 KiB size-class isn’t bloated to 20 KiB would be a major improvement. This isn’t to disparage your (i.e. Jason’s) original choice here; IIRC when I last talked to Qi and David about this they made the point that at the time you chose this default, typical TLB associativity was much lower than it is now. On a similar note, increasing the default “page size” from 4 KiB to something larger (probably 16 KiB), which would correspondingly increase the large size-class cutoff (i.e. the point at which the allocator switches from placing multiple allocations onto a slab, to backing individual allocations with their own extent directly) from 16 KiB up to 64 KiB would be pretty impactful. One of the last things I looked at before leaving Meta was making this change internally for major services, as it was worth a several percent CPU improvement (at the cost of a minor increase in RAM usage due to increased fragmentation). There’s a few other things I’d tweak (e.g. switching the default setting of metadata_thp from “disabled” to “auto”, changing the extent-sizing for slabs from using the nearest exact multiple of the page size that fits the size-class to instead allowing ~1% guaranteed wasted space in exchange for reducing fragmentation), but the aforementioned settings are the biggest ones.
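For anyone who wants to experiment with the runtime-tunable pieces of this, here's a rough sketch of what it looks like. This is only an illustration: opt.cache_oblivious became a runtime option around jemalloc 5.3, metadata_thp has been around since 5.1, and the page size is a build-time choice (./configure --with-lg-page), so it can't be set this way.

```cpp
// Sketch only: overriding jemalloc runtime defaults from the application by
// defining the malloc_conf symbol (jemalloc also honors the MALLOC_CONF
// environment variable). Assumes an unprefixed jemalloc build; with
// --with-jemalloc-prefix the symbol is je_malloc_conf instead.
extern "C" const char* malloc_conf =
    "cache_oblivious:false,"  // runtime option in jemalloc >= 5.3
    "metadata_thp:auto";      // allow THP for allocator metadata

int main() {
    // With cache_oblivious disabled, a 16 KiB request stays in a 16 KiB
    // size class instead of being bloated to 20 KiB.
    void* p = ::operator new(16 * 1024);
    ::operator delete(p);
}
```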
That was me that filed the Itanium test suite failure. :)
Ah, porting to HP Superdome servers. It’s like being handed a brochure describing the intricate details of the iceberg the ship you just boarded is about to hit in a few days.
A fellow traveler, ahoy!
I worked on the Superdome servers back in the day. What a weird product. I still can't believe it was a profitable division (during my time there, circa 2011).
HP was going through some turbulent waters in those days.
The Itanic was kind of great :). I'm convinced it helped sink SGI.
Why was the sinking of SGI great?
Oh, that wasn't the intent. I meant two separate things. The Itanic itself was kind of fascinating, but mostly panned (hence the nickname).
SGI's decision to build out Itanium systems may have helped precipitate their own downfall. That was sad.
Still makes me sad. I partly think a major reason for its demise was that it simply arrived too soon: compiler tech wasn't nearly good enough to handle the ISA.
Nowadays, because of the effort that has gone into making SIMD effective, I'd think modern compilers would have an easier time taking advantage of that unique and strange uarch.
VLIW has a fatal flaw in how it was used in these systems: you cannot run general-purpose, dynamically scheduled workloads unless you combine the JIT engine and the scheduler (there is prior art for this). It's exactly the same problem as trying to run multiple compute kernels on a GPU at the same time. A VLIW main CPU paired with an OS and runtime built on a higher-level language, such as Wasm or the JVM, could foreseeably support dynamic workloads.
Now, if they had been designed as GPU-like devices for processing data, then the Fortune 1000 would never have needed or used Hadoop.
One of the best books on Linux architecture I've read was the one on the Itanium port.
I think that's because Itanic broke a ton of assumptions.
Stuff like this is what keeps me coming back here. Thanks for posting this!
What's hard about using TCMalloc if you're not using bazel? (Not asking to imply that it's not, but because I'm genuinely curious.)
It’s just a huge pain to build and link against. Before the bazel 7.4.0 change your options were basically:
1. Use it as a dynamically linked library. This is not great because you’re taking at a minimum the performance hit of going through the PLT for every call. The forfeited performance is even larger if you compare against statically linking with LTO (i.e. so that you can inline calls to malloc, get the benefit of FDO, etc.). Not to mention all the deployment headaches associated with shared libraries.
2. Painfully manually create a static library. I’ve done this, it’s awful; especially if you want to go the extra mile to capture as much performance as possible and at least get partial LTO (i.e. of TCMalloc independent of your application code, compiling all of TCMalloc’s compilation units together to create a single object file).
When I was at Meta I imported TCMalloc to benchmark against (to highlight areas where we could do better in Jemalloc) by painstakingly hand-translating its bazel BUILD files to buck2 because there was legitimately no better option.
As a consequence of being so hard to use outside of Google, TCMalloc has many more unexpected (sometimes problematic) behaviors than Jemalloc when used as a general purpose allocator in other environments (e.g. it basically assumes that you are using a certain set of Linux configuration options [1] and behaves rather poorly if you’re not).
[1] https://google.github.io/tcmalloc/tuning.html#system-level-o...
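As a concrete illustration, transparent hugepages are one of the system-level settings that page discusses; a tiny, purely illustrative startup check might look like this (the recommended values themselves are the ones in the linked docs):

```cpp
// Purely illustrative: read the kernel's transparent hugepage mode, one of
// the system-level settings the TCMalloc tuning guide talks about. This only
// inspects sysfs; it doesn't change anything.
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream thp("/sys/kernel/mm/transparent_hugepage/enabled");
    std::string mode;
    if (!thp || !std::getline(thp, mode)) {
        std::cerr << "could not read THP setting (non-Linux or sysfs unavailable?)\n";
        return 1;
    }
    // The active mode is shown in brackets, e.g. "always [madvise] never".
    std::cout << "transparent_hugepage/enabled: " << mode << "\n";
    if (mode.find("[never]") != std::string::npos) {
        std::cerr << "warning: THP disabled; hugepage-aware allocators may underperform\n";
    }
    return 0;
}
```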
Thanks for sharing the insight!
As I observed when I was at Google: tcmalloc didn't have a dedicated team; it was a project driven by server-performance optimization engineers aiming to improve the performance of important internal servers. Extracting it to github.com/google/tcmalloc was complex due to intricate dependencies (https://abseil.io/blog/20200212-tcmalloc). As internal performance priorities demanded more focus, less time was available for maintaining the CMake build system. Maintaining the repo could, at best, be described as a community-contribution activity.
> Meta’s needs stopped aligning well with those of external uses some time ago, and they are better off doing their own thing.
I think Google's diverged from external uses even longer ago. :) (For a long time, the google3 and gperftools tcmalloc implementations were quite different.)
Everything from Google is an absolute pain to work with unless you're inside Google using their systems, FWIW. Anything from the Chromium project is deeply entangled with everything else from the Chromium project, as part of one gigantic Chromium source tree with all dependencies and toolchains vendored. They do not care about ABI whatsoever, to the point that a lot of Google libraries change their public ABI based on whether address sanitizer is enabled or not, meaning you can't enable ASan for your code if you use pre-built (e.g. package-manager-provided) versions of their code. Their libraries also tend to break if you link against them from a project with RTTI enabled, with a slightly different compiler version, or with any number of other minute differences that most other developers don't let affect their ABI.
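A contrived example (not actual Google code) of what that sanitizer-dependent ABI looks like in practice:

```cpp
// Contrived illustration (not real Abseil/Chromium code): a header whose
// struct layout depends on whether the translation unit was compiled with
// AddressSanitizer, which is how a library's ABI can silently depend on
// sanitizer flags. Macro names vary by toolchain/project.
struct Handle {
#if defined(__SANITIZE_ADDRESS__) || defined(ADDRESS_SANITIZER)
    unsigned char redzone[16];  // member only present in sanitized builds
#endif
    void* impl;
};

// A pre-built (non-ASan) library and an ASan-enabled application that both
// include this header disagree about sizeof(Handle) and the offset of `impl`,
// and memory gets corrupted at the boundary between the two.
int main() { return static_cast<int>(sizeof(Handle)); }
```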
And if you try to build their libraries from source, that involves downloading tens of gigabytes of sysroots and toolchains and vendored dependencies.
Oh and you probably don't want multiple versions of a library in your binary, so be prepared to use Google's (probably outdated) version of whatever libraries they vendor.
And they make no effort whatsoever to distinguish between public header files and their source code, so if you wanna package up their libraries, be prepared to make scripts to extract the headers you need (including headers from vendored dependencies); you can't just copy all of some 'include/' folder.
And their public headers tend to do idiotic stuff like `#include "base/pc.h"`, where that `"base/pc.h"` path is not relative to the file doing the include. So you're gonna have to pollute the include namespace. Make sure not to step on their toes! There's a lot of them.
I have had the misfortune of working with Abseil, their WebRTC library, their gRPC library, and their protobuf library, and it's all terrible. For personal projects where I don't have a very, very good reason to use Google code, I try to avoid it like the plague. For professional projects where I've had to use libwebrtc, the only reasonable approach is to silo off libwebrtc into its own binary which only deals with WebRTC, typically with a line-delimited JSON protocol on stdin/stdout. For things like protobuf/gRPC where that hasn't been possible, you just have to live with the suffering.
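A rough sketch of what that siloing looks like from the parent side; the worker binary name and the message format here are made up for illustration:

```cpp
// Rough sketch of the "silo libwebrtc in its own process" approach: the
// parent spawns a hypothetical webrtc-worker binary and sends it one JSON
// object per line. popen only gives a one-way pipe, so a real setup would use
// pipe()/fork()/exec() (or a helper library) to also read the worker's replies.
#include <cstdio>

int main() {
    FILE* worker = popen("./webrtc-worker", "w");  // hypothetical wrapper binary
    if (!worker) return 1;

    // One self-contained, newline-delimited JSON request.
    std::fprintf(worker, R"({"cmd":"create_peer_connection","id":1})" "\n");
    std::fflush(worker);

    return pclose(worker);
}
```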
..This comment should probably have been a blog post.
I think your rant isn't long enough to include everything relevant ;) The Blink web engine (which I sometimes compile for qtwebengine) takes a really long time to compile, several times longer than Gecko according to some info I found online. Google has a policy of not using forward declarations, including everything instead. That's a pretty big WTF for anyone who has ever optimized build time. Google probably just throws hardware and (distributed) caching at the problem, not giving a shit about anyone else building it. Oh, it also needs about 2 GB of RAM per build thread - basically nothing else does.
Even with Firefox using Rust and requiring a build of many crates, qtwebengine takes more time. It was so bad that I had to remove packages from my system (Gentoo) that were pulling in qtwebengine.
And I build all Rust crates (including rustc) with -O3, same as C/C++.
Chromium deviates from Google-wide policy and allows forward-declarations: https://chromium.googlesource.com/chromium/src/+/main/styleg..., "Forward declarations vs. #includes".
That is really nice to hear, but AFAICS it only means that it may change in the future, because in current code it was ~all includes last time I checked.
Well, I remember one - very biased - example where I had a look at a class that was especially expensive to compile, like 40 seconds (on a Ryzen 7950X) and maybe 2 GB of RAM. It had under 200 LOC and didn't seem to do anything that's typically expensive to compile... except for the stuff it included. Which also didn't seem to do anything fancy. But transitive includes can snowball if you don't add any "compile firewalls".
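For readers who haven't fought this before, a "compile firewall" is basically just keeping heavyweight headers out of your own headers; a generic sketch (not Chromium code):

```cpp
// widget.h -- generic sketch of an include "firewall" (not Chromium code).
// Forward-declare the heavyweight type instead of including its header, so
// everything that includes widget.h stops paying for that header transitively.
#pragma once
#include <memory>

class BigDependency;  // forward declaration; the full definition isn't needed here

class Widget {
public:
    Widget();
    ~Widget();  // must be defined in widget.cc, where BigDependency is complete
private:
    std::unique_ptr<BigDependency> dep_;  // a pointer member only needs the forward decl
};

// widget.cc is then the only translation unit that includes the expensive header:
//   #include "widget.h"
//   #include "big_dependency.h"
//   Widget::Widget() : dep_(std::make_unique<BigDependency>()) {}
//   Widget::~Widget() = default;
```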
> Because in current code, it was ~all includes last time I checked.
That's another matter: just because forward declarations are allowed doesn't mean they are mandated, but in my experience the reviewers were paying attention to that pretty well.
Counter-examples to "~all includes": https://source.chromium.org/chromium/chromium/src/+/main:thi..., https://source.chromium.org/chromium/chromium/src/+/main:thi..., https://source.chromium.org/chromium/chromium/src/+/main:thi....
I picked a couple of random headers from the directory where I've contributed the most to Blink, and from what I'm seeing, most of the classes that could be forward-declared were. I have not looked at .cc files, given that those tend to need to see the full definition (except when it's unused, but then why have a forward declaration at all?), or the compiler would complain about access into an incomplete type.
> Well, I remember one - very biased - example where I had a look at a class that was especially expensive to compile, like 40 seconds (on a Ryzen 7950X) and maybe 2 GB of RAM. It had under 200 LOC and didn't seem to do anything that's typically expensive to compile... except for the stuff it included.
Maybe the stuff was actually being compiled because of some member in a class (so it was actually expensive to compile). Or maybe you stumbled upon a place where folks weren't paying attention. Hard to say without a concrete example. The "compile firewall" was added pretty recently I think, but I don't know if it's going to block anything from landing.
Edit: formatting (switched bulleted list into comma-separated because clearly I don't know how to format it).
This is actually tracked at a publicly visible URL: https://commondatastorage.googleapis.com/chromium-browser-cl...
And the include graph analysis: https://commondatastorage.googleapis.com/chromium-browser-cl...
The annotated red dots correspond to the last time Chrome developers did a big push to prune the include graph to optimize build time. It was effective, but there was push back. C++ developers just want magic, they don't want to think about dependency management, and it's hard to blame them. But, at the end of the day, builds scale with sources times dependencies, and if you aren't disciplined, you can expect superlinear build times.
Good that it's being tracked, but Jesus, these numbers!
110 CPU hours for a build. (Fortunately, it seems to be a little over half that for my CPU. "Cloud CPUs" are kinda slow.)
I picked the 5001st largest file with includes. It's zoom_view_controller.cc, 140 lines in the .cc file, size with includes: 19.5 MB.
Initially I picked the 5000th largest file with includes, but for devtools_target_ui.cc I see a bit more legitimacy for having lots of includes. It has 384 "own" lines in the .cc file and, of course, also about 19.5 MB size with includes.
A C++20 source file including some standard library headers easily bloats to a little under 1 MB IIRC, and that's already kind of unreasonable. 20x of that is very unreasonable.
I don't think that I need to tell anyone on the Chrome team how to improve performance in software: you measure, then you grab the dumb low-hanging fruit first. From these results, it doesn't seem like anyone is working with the actual goal of improving the situation, as long as the guidelines are followed on paper.