I built a 2x faster lexer, then discovered I/O was the real bottleneck

modulovalue.com

161 points by modulovalue 6 days ago


marginalia_nu - 21 hours ago

Zip with no compression is a nice contender for a container format that shouldn't be slept on. It effectively reduces the I/O while, unlike TAR, allowing direct random access to the files without "extracting" them or seeking through the entire file. This works even via mmap, over HTTP range queries, etc.
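A minimal sketch of that direct access, assuming libzip is installed; the archive name and member path are placeholders (link with -lzip):

    #include <stdio.h>
    #include <zip.h>

    int main(void) {
        int err = 0;
        zip_t *za = zip_open("bundle.zip", ZIP_RDONLY, &err);   /* placeholder archive */
        if (!za) return 1;

        /* The central directory lets us jump straight to one member --
           no scan or extraction of the rest of the archive needed. */
        zip_file_t *zf = zip_fopen(za, "src/main.c", 0);         /* placeholder member */
        if (!zf) { zip_close(za); return 1; }

        char buf[4096];
        zip_int64_t n;
        while ((n = zip_fread(zf, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        zip_fclose(zf);
        zip_close(za);
        return 0;
    }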

You can still get the compression benefits by serving files with Content-Encoding: gzip or whatever. Zip has built-in compression, but you can simply not use it and apply external compression instead, especially over the wire.

It's pretty widely used, though often dressed up as something else. JAR files or APK files or whatever.

I think the article's complaint about the lack of Unix access rights and metadata is a bit strange. That seems like a feature more than a bug, as I wouldn't expect this to be something that transfers between machines. I don't want to unpack an archive and have to scrutinize it for files with o+rxst permissions, or have their creation dates be anything other than when I unpacked them.

torginus - 8 hours ago

I started programming on DOS - I remember how amazing it was that you basically talked to the hardware directly. There was very little restriction on what you could do, and the OS (which imo was much more akin to a set of libraries) provided very little abstraction for you.

Then I moved to Windows, and Linux. Each had its own idiosyncrasies: on Linux everything is a file and you're supposed to write programs by chaining existing executables together, while on the desktop both Win32 and X11 started out with their own versions of UI elements, so X11 or Win32 would know where a 'button' was, and the OS was responsible for event handling and drawing stuff.

Eventually both Windows and Linux programs moved to a model where the OS just gave you the window as a drawing surface, and you were supposed to fill it.

Similarly, all the other OS-supplied abstractions slowly fell out of use beyond the bare minimum.

Considering this, I wonder if it's time to design a new, much lower-level abstraction, in this case for file systems. It would be a way to mmap an entire directory into the process's address space, where each file would be a struct holding a list of pointers to its pages on disk, and each directory would be a list of such entries, again stored in some data structure you could access directly. Synchronizing reads/writes would be orchestrated by the kernel somehow (I'm thinking locking/unlocking pages being written to).

So that way there'd be no difference between traversing an in-memory data structure and reading the disk.

I know this approach isn't super compatible with the async/await style of I/O; however, I'm not 100% convinced that's the correct approach either (disk paging is a fundamental feature of all OSes, yet it's absolutely inexpressible in programming terms).
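For contrast with the proposal above, the closest existing building block is mmap on a single file, which already lets you walk on-disk bytes like an in-memory array while the kernel pages them in on demand. A rough sketch (the path is a placeholder):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("input.txt", O_RDONLY);          /* placeholder path */
        if (fd < 0) return 1;

        struct stat st;
        if (fstat(fd, &st) != 0 || st.st_size == 0) { close(fd); return 1; }

        /* Map the whole file read-only; pages fault in from disk lazily. */
        char *data = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED) { close(fd); return 1; }

        /* Traverse it exactly like an in-memory buffer. */
        size_t newlines = 0;
        for (off_t i = 0; i < st.st_size; i++)
            if (data[i] == '\n') newlines++;
        printf("%zu lines\n", newlines);

        munmap(data, (size_t)st.st_size);
        close(fd);
        return 0;
    }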

solatic - 18 hours ago

Headline is wrong. I/O wasn't the bottleneck, syscalls were the bottleneck.

Stupid question: why can't we get a syscall to load an entire directory into an array of file descriptors (minus an array of paths to ignore), instead of calling open() on every individual file in that directory? Seems like the simplest solution, no?
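POSIX has no batch-open syscall, but io_uring gets close: you can queue many openat requests and submit them all with one syscall. A rough sketch using liburing, with a placeholder path list (link with -luring):

    #include <fcntl.h>
    #include <liburing.h>
    #include <stdio.h>

    int main(void) {
        const char *paths[] = { "a.c", "b.c", "c.c" };  /* placeholder file names */
        enum { N = sizeof paths / sizeof paths[0] };

        struct io_uring ring;
        if (io_uring_queue_init(N, &ring, 0) != 0) return 1;

        for (int i = 0; i < N; i++) {
            struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
            io_uring_prep_openat(sqe, AT_FDCWD, paths[i], O_RDONLY, 0);
            io_uring_sqe_set_data64(sqe, (unsigned long long)i);
        }
        io_uring_submit(&ring);              /* one syscall submits all N opens */

        for (int i = 0; i < N; i++) {
            struct io_uring_cqe *cqe;
            if (io_uring_wait_cqe(&ring, &cqe) != 0) break;
            /* cqe->res is the new fd on success, -errno on failure */
            printf("%s -> %d\n", paths[io_uring_cqe_get_data64(cqe)], cqe->res);
            io_uring_cqe_seen(&ring, cqe);
        }
        io_uring_queue_exit(&ring);
        return 0;
    }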

Sparkyte - 2 hours ago

I/O has been the bottleneck for many things, especially databases.

So, as someone who has seen a long spread of technological advancements over the years, I can confidently tell you that chips have far surpassed any peripheral components.

It's kind of that scenario where compute has to be fast enough anyway to keep up with I/O. So compute always has to be faster, but what I'm saying is that it has exceeded those expectations.

sheepscreek - 10 hours ago

Always knew this anecdotally - through experience - and told myself it’s the file system block size that’s the culprit. Put together with SSD random seek times, it makes sense on the surface. I never thought of syscalls being so expensive. But they might just be a symptom and not the bottleneck itself (after all, it’s just a function call into the kernel).

My initial thought was DMA. You see, CPUs usually have direct access to only one PCIe slot in most consumer hardware. The other PCIe and M.2 NVMe slots share the same bandwidth and take turns. When someone wants access, they do a dance with the CPU and other parts of the computer using INT (interrupt) instructions that make the CPU pause so I/O can take over for a bit. The switching back and forth is costly too and adds up quickly.

That said, it wouldn’t explain why a MacBook (which should already have the SSD on the fastest, dedicated pathway) would be this slow, unless something else in the OS was the bottleneck.

I think we’re just scratching the surface here and there is more to this story that is waiting to be discovered. But yeah, to get the job done, package it in fewer files for the OS, preload into RAM or use mmap, then profit.

mnahkies - 17 hours ago

Something that struck me earlier this week while profiling certain workloads: I'd really like a flame graph that includes wall time spent waiting on I/O, be it a database call, the filesystem, or another RPC.

For example, our integration test suite on a particular service has become quite slow, but it's not particularly clear where the time is going. I suspect a decent amount of time is being spent talking to postgres, but I'd like a low-touch way to profile this.

1vuio0pswjnm7 - 5 hours ago

"TAR vs ZIP: Sequential vs Random Access"

https://199.233.217.201/pub/pkgsrc/distfiles/dictd-1.13.3.ta...

Wikipedia:

"In order to efficiently store dictionary data, dictzip, an extension to the gzip compression format (also the name of the utility), can be used to compress a .dict file. Dictzip compresses file in chunks and stores the chunk index in the gzip file header, thus allowing random access to the data."

stabbles - 21 hours ago

"I/O is the bottleneck" is only true in the loose sense that "reading files" is slow.

Strictly speaking, the bottleneck was latency, not bandwidth.

akaltar - 21 hours ago

Amazing article, thanks for sharing. I really appreciate the deep investigations in response to the comments

jchw - 17 hours ago

Sounds more like the VFS layer/FS is the bottleneck. It would be interesting to try another FS or operating system to see how it compares.

hwspeed - 8 hours ago

Classic case of optimizing the wrong thing. I've hit similar issues with ML training pipelines where GPU utilization looks terrible because data loading is the bottleneck. The profiler tells you the GPU kernel is fast, but doesn't show you it's sitting idle 80% of the time waiting for the next batch. Amdahl's law is brutal when you've got a serial component in your pipeline.
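A toy illustration of that ceiling, with made-up numbers: if data loading is a serial 80% of each training step, no amount of kernel speedup gets you past 1 / 0.8 = 1.25x overall.

    #include <stdio.h>

    int main(void) {
        double serial = 0.8;                   /* fraction spent loading data (assumed) */
        for (double s = 2; s <= 1e6; s *= 10)  /* s = speedup of the GPU kernel part    */
            printf("kernel %8.0fx faster -> overall %.3fx\n",
                   s, 1.0 / (serial + (1.0 - serial) / s));
        return 0;
    }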

raggi - 20 hours ago

there are a loooot of languages/compilers for which the most wall-time expensive operation in compilation or loading is stat(2) searching for files
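One common mitigation, sketched roughly below: read each search-path directory once with readdir (which batches entries via getdents) and check names in memory, instead of stat(2)-ing every candidate path. The directory and file names are placeholders.

    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    /* One pass over the directory; a real compiler would cache this listing
       per search-path entry instead of rescanning for every lookup. */
    static int dir_contains(const char *dir, const char *name) {
        DIR *d = opendir(dir);
        if (!d) return 0;
        struct dirent *ent;
        int found = 0;
        while ((ent = readdir(d)) != NULL)
            if (strcmp(ent->d_name, name) == 0) { found = 1; break; }
        closedir(d);
        return found;
    }

    int main(void) {
        printf("%d\n", dir_contains("/usr/include", "stdio.h"));
        return 0;
    }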

Dwedit - 9 hours ago

On Windows, an optimization is to call CloseHandle from a secondary thread.
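Roughly the idea, sketched here with the legacy thread-pool API (placeholder file name; a real program would make sure pending closes finish before exit):

    #include <windows.h>

    static DWORD WINAPI close_worker(LPVOID ctx) {
        CloseHandle((HANDLE)ctx);      /* the potentially slow close runs off-thread */
        return 0;
    }

    static void close_async(HANDLE h) {
        if (!QueueUserWorkItem(close_worker, h, WT_EXECUTEDEFAULT))
            CloseHandle(h);            /* fall back to closing inline */
    }

    int main(void) {
        HANDLE h = CreateFileA("input.txt", GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;
        /* ... read from h on the hot path ... */
        close_async(h);                /* hand the handle to a pool thread */
        return 0;
    }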

rurban - 15 hours ago

I still use a 10x faster lexer, RE2C over flex, because it does so much more at compile time. And on top of that, it has a bunch of optimization options for better compilers, like computed gotos.

Of course syscalls suck; slurping the whole file at once always wins, and in this case, all files at once.
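For the record, slurping looks roughly like this (placeholder path; a robust version would loop on short reads):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* One open, one fstat, one big read -- instead of many small reads. */
    static char *slurp(const char *path, size_t *len) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return NULL;
        struct stat st;
        if (fstat(fd, &st) != 0) { close(fd); return NULL; }
        char *buf = malloc((size_t)st.st_size + 1);
        if (!buf) { close(fd); return NULL; }
        ssize_t n = read(fd, buf, (size_t)st.st_size);
        close(fd);
        if (n < 0) { free(buf); return NULL; }
        buf[n] = '\0';
        if (len) *len = (size_t)n;
        return buf;
    }

    int main(void) {
        size_t len;
        char *src = slurp("input.l", &len);    /* placeholder path */
        if (!src) return 1;
        printf("%zu bytes\n", len);
        free(src);
        return 0;
    }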

Kernels suck in general. You don't really need one for high perf and low space.

djmips - 16 hours ago

profile profile profile

nudpiedo - 18 hours ago

Same thing applies to other system aspects:

Compressing the kernel loads it into RAM faster even though it still has to execute the decompression step. Why?

Loading from disk to RAM is a bigger bottleneck than the CPU decompressing.
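A back-of-the-envelope example with made-up round numbers: a 100 MB image on a 200 MB/s disk, versus the same image compressed 2:1 and decompressed at 1 GB/s.

    #include <stdio.h>

    int main(void) {
        double size_mb = 100.0, disk_mbps = 200.0;        /* assumed numbers */
        double ratio = 2.0, inflate_mbps = 1000.0;

        double raw = size_mb / disk_mbps;                 /* 0.50 s                 */
        double compressed = (size_mb / ratio) / disk_mbps /* 0.25 s to load...      */
                          + size_mb / inflate_mbps;       /* ...+ 0.10 s to inflate */
        printf("raw: %.2fs, compressed+inflate: %.2fs\n", raw, compressed);
        return 0;
    }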

The same applies to algorithms: always find the largest bottleneck among your dependent steps and apply changes there, since the rest of the pipeline waits for it. Often picking the right algorithm “solves it”, but it may be something else, like waiting for I/O or coordinating across actors (mutexes, if concurrency is done the way it used to be).

That’s also part of the counterintuitive take that more concurrency brings more overhead and not necessarily faster execution (a topic widely discussed a few years ago around async concurrency and immutable structures).