Love C, hate C: Web framework memory problems
alew.is
159 points by OneLessThing 3 days ago
Good C code will try to avoid allocations as much as possible in the first place. You absolutely don't need to copy strings around when handling a request. You can read data from the socket into a fixed-size buffer, do all the processing in-place, and then process the next chunk in-place too. You get predictable performance and the thing will work like precise clockwork. Reading the entire thing just to copy the body of the request to another location makes no sense. Most of the “nice” javaesque XXXParser, XXXBuilder, XXXManager abstractions seen in “easier” languages make little sense in C. They obfuscate what really needs to happen in memory to solve a problem efficiently.
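A minimal sketch of that fixed-buffer style, assuming a blocking socket and a hypothetical handle_line() callback (real code would also enforce header limits and deal with bodies, not just lines):

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    #define BUF_SIZE 8192

    /* Hypothetical callback: inspects one header line in place, no copies. */
    static void handle_line(const char *line, size_t len)
    {
        fwrite(line, 1, len, stdout);
        fputc('\n', stdout);
    }

    static void read_request(int fd)
    {
        char buf[BUF_SIZE];            /* one fixed-size buffer, no heap allocation */
        size_t used = 0;

        for (;;) {
            ssize_t n = recv(fd, buf + used, sizeof(buf) - used, 0);
            if (n <= 0)
                break;                 /* EOF or error */
            used += (size_t)n;

            /* Process complete lines directly in the read buffer. */
            char *start = buf;
            char *nl;
            while ((nl = memchr(start, '\n', used - (size_t)(start - buf))) != NULL) {
                handle_line(start, (size_t)(nl - start));
                start = nl + 1;
            }

            /* Shift the incomplete tail to the front and keep reading. */
            used -= (size_t)(start - buf);
            memmove(buf, start, used);

            if (used == sizeof(buf))
                break;                 /* a single line longer than the buffer: reject */
        }
    }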
> Good C code will try to avoid allocations as much as possible in the first place.
I've upvoted you, but I'm not so sure I agree.
Sure, each allocation imposes a new obligation to track that allocation, but on the other hand, passing around already-allocated blocks imposes its own burden: each call has to ensure that the callees have the correct permissions (modify it, reallocate it, free it, etc.).
If you're doing any sort of concurrency this can be hard to track - sometimes it's easier to simply allocate a new block and give it to the callee, and then the caller can forget all about it (callee then has the obligation to free it).
The most important pattern to learn in C is to allocate a giant arena upfront and reuse it over and over in a loop. Ideally, there is only one allocation and deallocation in the entire program. As with all things multi-threaded, this becomes trickier. Luckily, web servers are embarrassingly parallel, so you can just have an arena for each worker thread. Unluckily, web servers do a large amount of string processing, so you have to be careful in how you build them to prevent the memory requirements from exploding. As always, tradeoffs can and will be made depending on what you are actually doing.
Short-run programs are even easier. You just never deallocate and then exit(0).
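A minimal bump-arena sketch of that pattern (hypothetical names; one malloc() up front, everything from a request freed by a single reset):

    #include <stdlib.h>
    #include <stddef.h>

    /* One big upfront allocation per worker thread, reused for every request. */
    struct arena {
        char  *base;
        size_t cap;
        size_t used;
    };

    static int arena_init(struct arena *a, size_t cap)
    {
        a->base = malloc(cap);
        a->cap  = cap;
        a->used = 0;
        return a->base ? 0 : -1;
    }

    /* Bump allocation: no free list, no per-object bookkeeping. */
    static void *arena_alloc(struct arena *a, size_t n)
    {
        size_t align = _Alignof(max_align_t);
        size_t off   = (a->used + align - 1) & ~(align - 1);
        if (off > a->cap || n > a->cap - off)
            return NULL;               /* arena exhausted */
        a->used = off + n;
        return a->base + off;
    }

    /* "Frees" everything from one request in O(1). */
    static void arena_reset(struct arena *a)   { a->used = 0; }

    static void arena_destroy(struct arena *a) { free(a->base); }

Each worker thread calls arena_init() once at startup, uses arena_alloc() instead of malloc() while handling a request, and calls arena_reset() after the response is sent.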
Arenas are a nice tool, but they don't work for all use cases. In the limit you're reimplementing malloc on top of your big chunk of memory.
Most games have to do this for performance reasons at some point, and there are plenty of variants to choose from. Rust has libraries for some of them, but in C rolling it yourself is the idiom. One I used in C++ that worked well as a retrofit was to overload new to grab the smallest chunk that would fit the allocation from banks of fixed-size chunks. Profiling under load let the sizes of the banks be tuned for efficiency. Nothing had to know it wasn't a real heap allocation, but it was way faster and with zero possibility of memory fragmentation.
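In C, that retrofit might look roughly like this; a sketch with a single size class (the version described above would keep several banks of different chunk sizes and pick the smallest one that fits):

    #include <stddef.h>

    #define CHUNK_SIZE  256            /* tuned by profiling under load */
    #define CHUNK_COUNT 1024

    /* One bank of fixed-size chunks carved out of a static block. */
    static union chunk {
        union chunk *next;             /* free-list link while the chunk is unused */
        char         bytes[CHUNK_SIZE];
    } bank[CHUNK_COUNT];

    static union chunk *free_list;

    static void pool_init(void)
    {
        for (size_t i = 0; i + 1 < CHUNK_COUNT; i++)
            bank[i].next = &bank[i + 1];
        bank[CHUNK_COUNT - 1].next = NULL;
        free_list = &bank[0];
    }

    static void *pool_alloc(size_t n)
    {
        if (n > CHUNK_SIZE || free_list == NULL)
            return NULL;               /* too big for this bank, or bank exhausted */
        union chunk *c = free_list;
        free_list = c->next;
        return c;
    }

    static void pool_free(void *p)
    {
        union chunk *c = p;
        c->next = free_list;
        free_list = c;
    }

Because every chunk in a bank has the same size, fragmentation is impossible and both alloc and free are O(1) pointer swaps.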
Most pre-2010 games had to. As a former gamedev from after that period, I can confidently say that it is a relic of the past in most cases now. (Not that I don't care, but I don't have to be that strict about allocations.)
Because why?
Virtual memory gets rid of a lot of fragmentation issues.
Yeah. Fragmentation was a niche concern of that embedded use case. It had an MMU; it just wasn't used by the RTOS. I am surprised that allocations aren't a major hitter anymore. I still have to minimize/eliminate them in Linux signal-processing code to stay realtime.
Probably because hardware became powerful enough that you can make a performant game without thinking much about allocations.
They don't work for all use-cases, but they most certainly work for this use-case (HTTP server).
The normal, practical version of this advice (the one that isn't a "guy who just read about arenas" post) is that you generally kick allocations outward; the caller allocates.
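A trivial example of the caller-allocates shape (hypothetical names): the callee formats into storage the caller owns and reports how much space it needed, snprintf-style, so the caller decides whether that storage is a stack buffer, an arena, or the heap:

    #include <stdio.h>

    /* Callee never allocates: it writes into dst and returns the length it
     * needed, so the caller can retry with a bigger buffer if necessary. */
    static size_t format_greeting(char *dst, size_t cap, const char *name)
    {
        int n = snprintf(dst, cap, "Hello, %s!", name);
        return n < 0 ? 0 : (size_t)n;
    }

    int main(void)
    {
        char buf[64];                  /* the caller owns the allocation... */
        if (format_greeting(buf, sizeof(buf), "world") < sizeof(buf))
            puts(buf);                 /* ...and its lifetime */
        return 0;
    }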
> Ideally, there is only one allocation and deallocation in the entire program.
Doesn't this technically happen with most of the modern allocators? They do a lot of work to avoid having to request new memory from the kernel as much as possible.
Last time I checked, the glibc allocator doesn't ask the OS for new heap memory that often.
Like, every ~thousand malloc calls invoked (s)brk and that was it.
> there is only one allocation and deallocation in the entire program.
> Short-run programs are even easier. You just never deallocate and then exit(0).
What's special about "short-run"? If you deallocate only once, presumably just before you exit, then why do it at all?
Just because there's only one deallocation doesn't mean it's run only once. It would likely run every time the arena belonging to a thread is torn down, such as when that thread has finished processing a request.
I agree, which is why I wrote an arena allocator library I use (somewhere on github, probably public and free).
To reduce the amount of allocation, instead of:

    struct parsed_data *parsed = parse(...);
    struct process_data *processed = process(..., parsed);
    struct foo_data *foo = do_foo(..., processed);

you can do:

    parse(...) {
        ...
        process(...);
        ...
    }

    process(...) {
        ...
        do_foo(...);
        ...
    }

It sounds like violating separation of concerns at first, but it has the benefit that you can easily do processing and parsing in parallel, and all the data can become read-only. Also, I was impressed when I looked at a call graph of this, since it essentially becomes the documentation of the whole program.
How testable is this, though?
It might be a problem when you can't afford side effects that you later throw away, but I haven't experienced that yet. The functions still have return codes, so you can still test that correct input results in no error path being taken and that incorrect input results in the right error being triggered.
Is there any system where the basics of HTTP (everything up to handing structured data off to the framework) are done outside of a single concurrency unit?
Not exactly what you’re looking for, but https://github.com/simdjson/simdjson absolutely uses micro-parallel techniques for parsing, and those do need to think about concurrency and how processors handle shared memory in pipelined and branch-predicted operations.
Why does "good" C have to be zero alloc? Why should "nice" javaesque make little sense in C? Why do you implicitly assume performance is "efficient problem solving"?
Not sure why many people seem fixated on the idea that using a programming language must follow a particular approach. You can do minimal alloc Java, you can simulate OOP-like in C, etc.
Unconventional, but why do we need to restrict certain optimizations (space/time perf, "readability", conciseness, etc) to only a particular language?
Because in C, every allocation incurs a responsibility to track its lifetime and to know who will eventually free it. Copying and moving buffers is also prone to overflows, off-by-one errors, etc. The generic memory allocator is a smart but unpredictable, complex beast that lives in your address space; it can mess with your CPU cache, introduce undesired memory fragmentation, etc.
In Java, you don't care because the GC cleans after you and you don't usually care about millisecond-grade performance.
No. Look up Arenas. In general group allocations to avoid making a mess.
If you send a task off to a work queue in another thread, and then do some local processing on it, you can't usually use a single Arena, unless the work queue itself is short lived.
I don't see how arenas solve the problems.
You group things from the same context together, so you can free everything in a single call.
> Why should "nice" javaesque make little sense in C?
Very importantly, because Java is tracking the memory.
In Java, you could create an item, send it into a queue to be processed concurrently, but then also deal with that item where you created it. That creates a huge problem in C because the question becomes "who frees that item"?
In Java, you don't care. The freeing is done automatically when nobody references the item.
In C, it's a big headache. The concurrent consumer can't free the memory because the producer might not be done with it. And the producer can't free the memory because the consumer might not have run yet. In idiomatic Java, you just have to make sure your queue is safe to use concurrently. The right thing to do in C would be to restructure things to ensure the item isn't used before it's handed off to the queue, or to send a copy of the item into the queue so the question of "who frees this" is straightforward. You can do both approaches in Java, but why would you? If the item is immutable there's no harm in simply sharing the reference with 100 things and moving forward.
In C++ and Rust, you'd likely wrap that item in some sort of atomic reference counted structure.
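In C you can get a similar effect with C11 atomics; a hedged sketch with hypothetical names:

    #include <stdatomic.h>
    #include <stdlib.h>

    /* An item shared between a producer and a concurrent consumer.
     * Whoever drops the last reference frees it; neither side needs to
     * know whether the other is finished with it. */
    struct item {
        atomic_int refs;
        /* ... payload ... */
    };

    static struct item *item_new(void)
    {
        struct item *it = calloc(1, sizeof(*it));
        if (it)
            atomic_init(&it->refs, 1); /* the creator holds the first reference */
        return it;
    }

    static struct item *item_retain(struct item *it)
    {
        atomic_fetch_add_explicit(&it->refs, 1, memory_order_relaxed);
        return it;
    }

    static void item_release(struct item *it)
    {
        if (atomic_fetch_sub_explicit(&it->refs, 1, memory_order_acq_rel) == 1)
            free(it);                  /* last reference is gone */
    }

The producer retains before enqueueing and releases when its own work on the item is done; the consumer releases after processing. Whichever release happens last does the free, so the "who frees this?" question disappears.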
> Why does "good" C have to be zero alloc?
GP didn't say "zero-alloc", but "minimal alloc"
> Why should "nice" javaesque make little sense in C?
There's little to no indirection in idiomatic C compared with idiomatic Java.
Of course, in both languages you can write unidiomatically, but that is a great way to ensure that bugs get in and never get out.
> Of course, in both languages you can write unidiomatically, but that is a great way to ensure that bugs get in and never get out.
Why does "unidiomatic" have to imply "buggy" code? You're basically saying an unidiomatic approach is doomed to introduce bugs and will never reduce them.It sounds weird. If I write Python code with minimal side effects like in Haskell, wouldn't it at least reduce the possibility of side-effect bugs even though it wasn't "Pythonic"?
AFAIK, nothing in the language standard mentions anything about "idiomatic" or "this is the only correct way to use X". The definition of "idiomatic X" is not as clear-cut and well-defined as you might think.
I agree there's a risk with an unidiomatic approach. Irresponsibly applying "cool new things" is a good way to destroy "readability" while gaining almost nothing.
Anyway, my point is that there's no single definition of "good" that covers everything, and "idiomatic" is just whatever convention a particular community is used to.
There's nothing wrong with applying an "unidiomatic" mindset like awareness of stack/heap alloc, CPU cache lines, SIMD, static/dynamic dispatch, etc in languages like Java, Python, or whatever.
There's nothing wrong either with borrowing ideas like (Haskell) functor, hierarchical namespaces, visibility modifiers, borrow checking, dynamic dispatch, etc in C.
Whether it's "good" or not is left as an exercise for the reader.
> Why does "unidiomatic" have to imply "buggy" code?
Because when you stray from idioms you're going off down unfamiliar paths. All languages have better support for specific idioms. Trying to pound a square peg into a round hole can work, but is unlikely to work well.
> You're basically saying an unidiomatic approach is doomed to introduce bugs and will never reduce them.
Well, yes. Who's going to reduce them? Where are you planning to find people who are used to code written in an unusual manner?
By definition alone, code is written for humans to read. If you're writing it in a way that's difficult for humans to read, then of course the bug level can only go up and not down.
> It sounds weird. If I write Python code with minimal side effects like in Haskell, wouldn't it at least reduce the possibility of side-effect bugs even though it wasn't "Pythonic"?
"Pythonic" does not mean the same thing as "Idiomatic code in Python".
In C, direct memory control is the top feature, which means you can assume anyone who uses your code is going to want to control memory throughout the process. This means not allocating from wherever and returning blobs of memory, which means designing different APIs, which is part of the reason why learning C well takes so long.
I started writing sort of a style guide to C a while ago, which attempts to transfer ideas like this one more by example:
Good C has minimal allocations because you, the human, are the memory allocator. It's up to your own meat brain to correctly track memory allocation and deallocation. Over the last half century, C programmers have converged on some best practices to manage this more effectively. We statically allocate, and we kick allocations up the call chain as far as possible. Anything to get that bit of tracked state out of your head.
But we use different approaches for different languages because those languages are designed for that approach. You can do OOP in C and you can do manual memory management in C#. Most people don't because it's unnecessarily difficult to use languages in a way they aren't designed for. Plus when you re-invent a wheel like "classes" you will inevitably introduce a bug you wouldn't have if you'd used a language with proper support for that construct. You can use a hammer to pull out a screw, but you'd do a much better job if you used a screwdriver instead.
Programming languages are not all created equal and are absolutely not interchangeable. A language is much, much more than the text and grammar. The entire reason we have different languages is because we needed different ways to express certain classes of problems and constructs that go way beyond textual representation.
For example, in a strictly typed OOP language like C#, classes are hideously complex under the hood. Miles and miles of code to handle vtables, inheritance, polymorphism, virtual, abstract functions and fields. To implement this in C would require effort far beyond what any single programmer can produce in a reasonable time. Similarly, I'm sure one could force JavaScript to use a very strict typing and generics system like C#, but again the effort would be enormous and guaranteed to have many bugs.
We use different languages in different ways because they're different and work differently. You're asking why everyone twists their screwdrivers into screws instead of using the back end to pound a nail. Different tools, different uses.
A long time ago I was involved in building compilers. It was common that we solved this problem with obstacks, which are basically stacked heaps. I wonder whether one could not build more things like this, where freeing is a bit more best-effort but you have some checkpoints. (I guess one would rather need tree-like stacks.) You just have to disallow pointers going the wrong way. Allocation remains ugly in C, and I think explicit data structures are definitely a better way of handling it.
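A tiny sketch of that checkpoint idea on top of a bump arena (hypothetical names, alignment omitted; not the GNU obstack API):

    #include <stddef.h>

    /* base points at one big block obtained up front (malloc, mmap, static). */
    struct stack_heap {
        char  *base;
        size_t cap;
        size_t used;
    };

    typedef size_t heap_mark;          /* a checkpoint is just an offset */

    static void *sh_alloc(struct stack_heap *h, size_t n)
    {
        if (n > h->cap - h->used)
            return NULL;
        void *p = h->base + h->used;
        h->used += n;
        return p;
    }

    static heap_mark sh_mark(struct stack_heap *h) { return h->used; }

    /* Free everything allocated since the checkpoint in one shot,
     * e.g. all the temporaries of one compiler pass. */
    static void sh_release(struct stack_heap *h, heap_mark m) { h->used = m; }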
This shared memory and pointer shuffling is of course fraught, requiring correct logic to avoid memory-safety bugs. Good C code doesn't get you pwned, I'd argue.
> Good C code doesn't get you pwned, I'd argue.
This is not a serious argument because you don't really define good C code and how easy or practical it is to do. The sentence works for every language. "Good <whatever language> code doesn't get you pwned"
But the question is whether "Average" or "Normal" C code gets you pwned? And the answer is yes, as told in the article.
The comment I was responding to suggested Good C Code employs optimizations that, I opined, are more error-prone w.r.t. memory safety - so I was not attempting to define it, but challenging the offered characterisation.
Can you do parsing of JSON and XML without allocating?
Of course. You can do it in a single pass/just parse the token stream. There are various implementations like: https://zserge.com/jsmn/
It requires manual allocation of an array of tokens. So it needs a backing "stack vector" of sorts.
And what about escapes?
Yes, you can do it with minimal allocations - provided that the source buffer is read-only, or is mutable but not used directly by the caller afterwards. If the buffer is mutable, any un-escaping can be done in-place, because the un-escaped string will always be shorter. All the substrings you want are already in the source buffer. You just need a growable array of pointer/length pairs to know where tokens start.
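A sketch of what those pointer/length pairs might look like (hypothetical names); the only allocation is the growable token array itself:

    #include <stddef.h>
    #include <stdlib.h>

    /* A token is just a window into the source buffer: nothing is copied,
     * the bytes stay where the socket or file reader put them. */
    struct tok {
        const char *ptr;
        size_t      len;
    };

    struct tok_vec {
        struct tok *items;
        size_t      len;
        size_t      cap;
    };

    static int tok_push(struct tok_vec *v, const char *p, size_t n)
    {
        if (v->len == v->cap) {
            size_t cap = v->cap ? v->cap * 2 : 16;
            struct tok *items = realloc(v->items, cap * sizeof(*items));
            if (!items)
                return -1;
            v->items = items;
            v->cap   = cap;
        }
        v->items[v->len++] = (struct tok){ p, n };
        return 0;
    }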
It depends what you intend to do with the parsed data, and where the input comes from and where the output will be going to. There are situations that allocations can be reduced or avoided, but that is not all of them. (In some cases, you do not need full parsing, e.g. to split an array, you can check if it is a string or not and the nesting level, and then find the commas outside of any arrays other than the first one, to be split.) (If the input is in memory, then you can also consider if you can modify that memory for parsing, which is sometimes suitable but sometimes not.)
However, for many applications, it will be better to use a binary format (or in some cases, a different text format) rather than JSON or XML.
(For the PostScript binary format, there is no escaping, and the structure does not need to be parsed and converted ahead of time; items in an array are consecutive and fixed size, and data it references (strings and other arrays) is given by an offset, so you can avoid most of the parsing. However, note that key/value lists in PostScript binary format is nonstandard (even though PostScript does have that type, it does not have a standard representation in the binary object format), and that PostScript has a better string type than JavaScript but a worse numeric type than JavaScript.)
Yes! The JSON library I wrote for the Zephyr RTOS does this. Say, for instance, you have the following struct:
struct SomeStruct {
    char *some_string;
    int some_number;
};
You would need to declare a descriptor, linking each field to how it's spelled in the JSON (e.g. the some_string member could be "some-string" in the JSON), the byte offset from the beginning of the struct where the field is (using the offsetof() macro), and the type.
The parser is then able to go through the JSON, and initialize the struct directly, as if you had reflection in the language. It'll validate the types as well. All this without having to allocate a node type, perform copies, or things like that.
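Illustrative only (not the actual Zephyr macros; see the linked post for those), the descriptor idea boils down to a table like this:

    #include <stddef.h>

    enum field_type { FIELD_STRING, FIELD_NUMBER };

    /* One entry per struct member: JSON key, byte offset, expected type. */
    struct field_descr {
        const char     *json_name;
        size_t          offset;
        enum field_type type;
    };

    struct SomeStruct {
        char *some_string;
        int   some_number;
    };

    static const struct field_descr some_struct_descr[] = {
        { "some-string", offsetof(struct SomeStruct, some_string), FIELD_STRING },
        { "some-number", offsetof(struct SomeStruct, some_number), FIELD_NUMBER },
    };

A single-pass parser walks the JSON, looks each key up in the table, and writes the value at (char *)out + entry->offset, which is how the struct gets filled in without reflection and without intermediate node allocations.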
This approach has its limitations, but it's pretty efficient -- and safe!
Someone wrote a nice blog post about (and even a video) it a while back: https://blog.golioth.io/how-to-parse-json-data-in-zephyr/
The opposite is true, too -- you can use the same descriptor to serialize a struct back to JSON.
I've been maintaining it outside Zephyr for a while, although with different constraints (I'm not using it for an embedded system where memory is golden): https://github.com/lpereira/lwan/blob/master/src/samples/tec...
Theoretically yes. Practically there is character escaping.
That kills any non-allocation dreams. Moment you have "Hi \uxxxx isn't the UTF nice?" you will probably have to allocate. If source is read-only you have to allocate. If source is mutable you have to waste CPU to rewrite the string.
> Moment you have "Hi \uxxxx isn't the UTF nice?" you will probably have to allocate.
Depends on what you are doing with it. If you aren't displaying it (and typically you are not in a server application), you don't need to unescape it.
And this is indeed something that the C++ Glaze library supports, to allow for parsing into a string_view pointing into the original input buffer.
I'm confused why this would be a problem. UTF-8 and UTF-16 (the only two common Unicode encodings) are a maximum of 4 bytes wide (and, most commonly, 1 or 2 for English text). The ASCII escape you gave is 6 bytes wide. I don't know of many ASCII escape representations that are shorter than their native Unicode encoding.
The same goes for other escapes such as \n, \0, \t, \r, etc. All are half the width in their native byte representation.
> Practically there is character escaping
The voice of experience appears. Upvoted.
It is conceivable to deal with escaping in-place, and thus remain zero-alloc. It's hideous to think about, but I'll bet someone has done it. Dreams are powerful things.
It's just two pointers: the current place to write and the current place to read. Escapes are always more characters than they represent, so there's no danger of the write pointer overtaking the read pointer. If you support compression this can become somewhat of an issue, but you simply support a max block size, which is usually defined by the compression algorithm anyway.
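A sketch of that two-pointer walk for simple backslash escapes (hypothetical, \uXXXX handling omitted); the write cursor can never pass the read cursor because every escape shrinks when decoded:

    #include <stddef.h>

    /* Unescape s[0..len) in place and return the new, shorter length. */
    static size_t unescape_in_place(char *s, size_t len)
    {
        char *r = s, *w = s, *end = s + len;

        while (r < end) {
            if (*r == '\\' && r + 1 < end) {
                r++;
                switch (*r) {
                case 'n':  *w++ = '\n'; break;
                case 't':  *w++ = '\t'; break;
                case 'r':  *w++ = '\r'; break;
                case '"':  *w++ = '"';  break;
                case '\\': *w++ = '\\'; break;
                default:   *w++ = *r;   break;   /* unknown escape: keep the char */
                }
                r++;
            } else {
                *w++ = *r++;
            }
        }
        return (size_t)(w - s);
    }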
If you have a place to write, then it's not zero allocation. You did an allocation.
And usually if you want maximum performance, buffered read is the way to go, which means you need a write slab allocation.
> If you have a place to write, then it's not zero allocation. You did an allocation.
Where did that allocation happen? You can write into the buffer you're reading from, because the replacement data is shorter than the original data.
You have a read buffer and somewhere where you have to write to.
Even if we pretend that the read buffer is not allocating (plausible), you will have to allocate somewhere to write to in the general case (think GiB or TiB of XML or JSON).
> You have a read buffer and somewhere where you have to write to.
The "somewhere you have to write to" is the same buffer you are reading from.
Not if you are doing buffered reads, where you replace slow file access with fast memory access. This buffer is cleared every X bytes processed.
Writing to it would be pointless because clears obliterate anything written; or inefficient because you are somehow offsetting clears, which would sabotage the buffered reading performance gains.