Cost of enum-to-string: C++26 reflection vs. the old ways

vittorioromeo.com

95 points by sagacity 4 days ago


vanderZwan - 3 days ago

> The header is the cost. Not the reflection. The reflection algorithm is fast – asymptotically ~0.07 ms per enumerator, essentially the same as the hand-rolled switch in the X-macro version (~0.06 ms). What makes reflection look expensive is <meta>: just including it costs ~155 ms per TU over the baseline.

So speaking of old ways: I'm not a C++ dev, but a while ago I saw someone comment that they still organize their C++ projects using tips from John Lakos' Large-Scale C++ Software Design from 1997, and that their compile times are incredibly fast. So I decided to find a digital copy on the high seas and read it out of historical curiosity. While I didn't finish it, one wild thing stood out to me: he advised using redundant external include guards around every include, e.g.

     #ifndef INCLUDED_MATH
     #include <math.h>
     #define INCLUDED_MATH
     #endif
The reason for this is that (in 1997) every `#include` required the preprocessor to open the file just to check for an include guard, then read it all the way to the end to find the closing #endif — potentially O(N²) disk-read overhead when headers include each other (if anyone feels like verifying this, it's explained on pages 85 to 87).

Again, that was in 1997. I have no idea what mitigations for this problem exist in compilers by now, but I hope there are at least a few, right?

Still, the article's conclusion makes me wonder whether following that advice would have a positive impact on compile times even today. Surely not, right? Can anyone more knowledgeable about this comment on that?

sagacity - 4 days ago

Oof, that first example (the idiomatic C++26 way) looks so foreign if you're mostly used to C++11.