Great things about Rust that aren't just performance
ntietz.com | 183 points by vortex_ape 6 hours ago
For me, there's a headline draw, which is the borrow checker. Really great.
But apart from that, Rust is basically a bag of sensible choices. Big and small stuff:
- Match needs to be exhaustive. When you add something to the enum you were matching, it chokes. This is good.
- Move by default. If you came from c++, I think this makes a lot of sense. If you have a new language, don't bring the baggage.
- Easy way to use libraries. For now it hasn't splintered into several ways to build yet, I think most people still use cargo. But cargo also seems to work nicely, and it means you don't spend a couple of days learning cmake.
- Better error handling. There's a few large firms that don't use exceptions in c++. New language with no legacy? Use the Ok/Error/Some/None thing.
- Immutable by default. It's better to have everything locked down and have to explicitly allow mutation than just have everything mutable. You pay every time you forget to write mut, but that's pretty minor.
- Testing is part of the code, doesn't seem tacked on like it does in c++.
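The exhaustive-match point is easy to show in a few lines; a minimal sketch, with a made-up `Status` enum (names are illustrative only):

```rust
// A hypothetical enum. Add a variant (say, `Banned`) and every
// `match` on it without a wildcard arm stops compiling until
// the new case is handled.
#[allow(dead_code)]
enum Status {
    Active,
    Suspended,
}

fn describe(s: &Status) -> &'static str {
    match s {
        Status::Active => "active",
        Status::Suspended => "suspended",
        // no `_ =>` arm: the compiler checks exhaustiveness for us
    }
}

fn main() {
    println!("{}", describe(&Status::Active));
}
```

This is the compiler doing what MISRA-style "no missing switch cases" rules try to enforce externally in C.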
> Better error handling. There's a few large firms that don't use exceptions in c++. New language with no legacy? Use the Ok/Error/Some/None thing.
I think this is still very much a debatable point. There are disadvantages to exceptions, mostly around code size and performance. But they are still the only error handling mechanism that anyone has found that defaults to adding enough context to errors to actually be useful (except of course in C++, because C++ doesn't like having useful constructs).
Rust error handling tends towards not adding any kind of context whatsoever to errors - if you use the default error mechanisms and no extra libraries. That is, if you have a call stack three functions deep that uses `?` for error handling, at the top level you'll only get an error value, you'll have no idea where the value originated from, or any other information about the execution path. This can be disastrous for actually debugging hard to reproduce errors.
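A std-only sketch of the complaint (the file path is invented): the bare `?` chain surfaces only the leaf error value, and the common fix is to wrap context on manually with `map_err` (crates like `anyhow` automate this pattern):

```rust
use std::fs;
use std::io;

// The error is born here, three layers from the top.
fn read_config() -> Result<String, io::Error> {
    fs::read_to_string("/nonexistent/config.toml")
}

fn load() -> Result<String, io::Error> {
    let raw = read_config()?; // `?` just forwards the value, no location info
    Ok(raw)
}

// Manual context: wrap the error into a string (or a custom type) yourself.
fn load_with_context() -> Result<String, String> {
    read_config().map_err(|e| format!("while reading /nonexistent/config.toml: {e}"))
}

fn main() {
    // Prints something like "No such file or directory (os error 2)",
    // with no hint of where it came from.
    println!("{}", load().unwrap_err());
    // Same error, prefixed with the context we added.
    println!("{}", load_with_context().unwrap_err());
}
```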
I feel like your last point is the exact issue with exceptions, not rust’s errors. Exceptions are like having “?” on every single line.
When an exception happens, you get a stack trace somewhere in your logs (unless you do something really weird). That doesn't always include all the information you'd like (for example, if the error happened in a loop, you don't get info about the loop variable).
In contrast, unless you manually add context to the error (or use a library that does something like this for you, overriding the default ? behavior), you won't get any information about where an error occurred at all.
Sure, with exceptions, you don't know statically where an exception might happen. But at runtime, you do get the exact information. So, if the error is hard to reproduce, you still have information about where exactly it occurred in those rare occasions where it happened.
> When an exception happens, you get a stack trace somewhere in your logs
OK, so, if I write the canonical modern C++ Hello World, execute it against an environment where the "standard output" doesn't exist, where does this stack trace get recorded? Maybe it depends on the compiler and standard library implementation somehow?
My impression is that in reality C++ just ignores the problem and carries on, so actually there was no stack trace, no logging, it just didn't work and too bad. Unsurprisingly people tasked with making things work prefer a language which doesn't do that.
How does any other language deal with POSIX standard I/O streams or the lack thereof? Definitely not a C++ or exceptions problem. Which language lets you compile a "Hello, World!" program and then execute it against a non-POSIX-compatible environment and get the correct output... somewhere?
If you're executing against a POSIX-compatible environment, then stdin, stdout, and stderr are expected to exist and be configured properly if you want them to work[1].
If you're executing against some other environment, like webassembly or an embedded system, then you'll already (hopefully) be using some logging and error handling approach that sends output to the correct place. Doesn't matter if you're using C, C++, .NET, Rust, Zig, etc.
For example, webassembly is an environment without stdio streams. It's your responsibility to make sure there is a proper way to record output, even if it's just a compatibility layer that goes to console.log.
[1]: https://pubs.opengroup.org/onlinepubs/9799919799/functions/s...
> Match needs to be exhaustive.
When I see people mention C++ with MISRA rules, I just think: why do we need all these extra rules, often checked by a separate static analysis tool and enforced manually (which comes down to an audit/compliance requirement), when they make perfect sense and could be done by the compiler? Missing switch cases often happen when an enum is modified to include one extra entry and people don't update all the code that uses it. Making this mandatory at the compiler level is an obvious choice.
-Wswitch
Warn whenever a switch statement has an index of enumerated type and lacks a case for one or more of the named codes of that enumeration. (The presence of a default label prevents this warning.) case labels that do not correspond to enumerators also provoke warnings when this option is used, unless the enumeration is marked with the flag_enum attribute. This warning is enabled by -Wall.
<https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#inde...>
The compiler can do that... And it's included in -Wall. It's not on by default but is effectively on in any codebase where anyone cares...
Please don't argue "but I don't need to add a flag in Rust"; it's not Rust. There are reasons the standards committee finds valid for why it is this way, and honestly you're welcome to implement your own compiler that turns it on by default, just like the Rust compiler, which has no standard because "the compiler is the standard".
MISRA won't be OK with that.
MISRA requires that you explicitly write the default reject. So -Wswitch doesn't get it done, even though I agree that if C had standardized this requirement (which it did not), that would get you what you need.
C also lacks Rust's non_exhaustive attribute. If the person making a published Goose type says it's non-exhaustive, then in their code nothing changes: all their code needs to account for all the values of type Goose as before. But everybody else using that type must accept that the author said it's non-exhaustive, so they cannot account for all values of this type except by writing a default handler.
So e.g. if I publish an AmericanPublicHoliday type when Rust 1.0 ships in 2015, and I mark it non-exhaustive since by definition new holidays may be added, you can't write code that just handles each of the holidays separately; you must have a default handler. When I add Juneteenth to the type, your code is fine: that's a holiday you must handle with your default handler, which you were obliged to write.
On the other hand IpAddr, the IP address, is an ordinary exhaustive type: if you handle both the Ipv4Addr and Ipv6Addr variants you've got a complete handling of IpAddr.
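Both halves of this can be sketched in one file. The holiday enum is hypothetical (taken from the comment above); `IpAddr` is the real standard-library type, and matching its two variants needs no wildcard:

```rust
use std::net::{IpAddr, Ipv4Addr};

// Hypothetical published type: downstream crates matching on it
// MUST write a `_` arm, so adding Juneteenth later doesn't break them.
// (Within the defining crate, exhaustive matching still works.)
#[non_exhaustive]
#[allow(dead_code)]
pub enum AmericanPublicHoliday {
    IndependenceDay,
    Thanksgiving,
}

// IpAddr is an ordinary exhaustive enum: two arms cover it completely.
fn kind(addr: IpAddr) -> &'static str {
    match addr {
        IpAddr::V4(_) => "v4",
        IpAddr::V6(_) => "v6",
    }
}

fn main() {
    println!("{}", kind(IpAddr::V4(Ipv4Addr::LOCALHOST)));
}
```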
> Please don't argue about "but I don't need to add a flag in Rust"
Why not? It's a big issue. You say it's "on in any codebase where anyone cares", and I agree with that but in my experience most C++ developers don't care.
I regularly have to work with other people's C++ where they don't have -Wall -Werror. It's never an issue in Rust.
Also I don't buy that they couldn't fix this because it would be a breaking change. That's just an excuse for not bothering. They've made backwards-incompatible changes in the past, e.g. removing dynamic exception specifications, changing `auto`, changing the behaviour around operator==. They could just gate it on the standard version, just like Rust uses Editions.
Of course they won't, because the C++ standards committee is still very much "we don't need seatbelts, just drive well like me".
> I regularly have to work with other people's C++ where they don't have -Wall -Werror.
To be fair, -Werror is kind of terrible. The set of warnings is very sensitive to the compiler version, so as soon as people work on the project with more than one compiler or even more than one version of the same compiler, it just becomes really impractical.
An acceptable compromise can be that -Werror is enabled in CI, but it really shouldn't be the default at least in open-source projects.
A common trope that is probably ignored or even unknown to uninformed C/C++ programmers is that -Werror should be used for debug builds (as you use during development) and never for release builds (otherwise it will most probably break compilation under future releases of the compiler).
> A common trope that is probably ignored or even unknown to uninformed C/C++ programmers, is that -Werror (...)
Not even that. -Wall -Werror should be limited to local builds, and should never touch any build config that is invoked by any pipeline.
Yes that is the standard practice for open source projects (where it happens at all), but again that's another way in which C++ warnings are not even close to Rust errors.
> I regularly have to work with other people's C++ where they don't have -Wall -Werror.
I think you inadvertently showed the problem with this sort of thing: it's simply bad practice and a notorious source of problems. With -Wall -Werror you can turn any optional nit remark into a blocked pipeline requiring urgent maintenance. I know it because I had to work long hours in a C++ project that suddenly failed to build because a moron upstream passed -Wall -Werror as transitive build flags. We're talking about production pipelines being blocked by things like function arguments being declared but not used.
Sometimes I wonder if these discussions on the virtues of blindly leaning on the compiler are based on solid ground, or are instead opinionated junior devs passing off their Skinner box as some kind of operational excellence.
-Wall -Werror is a nice idea that university professors will tell you about, and that collides at first contact with the real world, where you are including third-party headers that then spit out 50 pages of incomprehensible GCC "overflow analysis" warnings.
You can use `-isystem` for that. It isn't particularly well supported by C++ build systems, but also your assertion that third-party headers don't compile with `-Wall -Werror` doesn't match my experience. Usually they're fine.
> GCC "overflow analysis" warnings
I think I've seen this with `fmt`, and it was a GCC compiler bug. Not much you can do about that.
> It's not on by default but is effectively on in any codebase where anyone cares...
Then why is this a MISRA rule by itself? Shouldn't it just be "every codebase must compile with -Wall or equivalent"?
>...honestly you're welcome to implement your own compiler that turns it on by default just like the rust compiler which has no standard because "the compiler is the standard".
The C and C++ standards are quite minimal and whether or not an implementation is "compliant" or not is often a matter of opinion. And unlike other language standards (e.g. Java or Ada) there isn't even a basic conformance test suite for implementations to test against. Hence why Clang had to be explicitly designed for GCC compatibility, particularly for C++.
Merely having a "language standard" guarantees very little. For instance, automated theorem proving languages like Coq (Rocq now, I suppose)/Isabelle/Lean have no official language standard, but they are far more defined and rigorous than C or C++ ever could be. A formal standard is a useful broker for proprietary implementations, but it has dubious value for a language centered around an open-source implementation.
MISRA's rules are a real mix in three interesting senses
Firstly, in terms of what the rules require. Some MISRA rules are machine checkable. Your compiler might implement them or, more likely, a MISRA auditing tool you bought does so. Some MISRA rules need human insight in practice. Is this OK, how about that? A good code review process should be able to catch these, if the reviewers are well trained. But a final group are very vague, almost aspirational, like the documentation requirements, at their best these come down to a good engineering lead, at their worst they're completely futile.
Secondly, in terms of impact: studies have shown some MISRA rules seem to have a real benefit, in that codebases which follow them have lower defect rates. Some are neutral, and some are net negative: code which followed those rules had more defects.
Thirdly in terms of what they do to the resulting software. Some MISRA rules are reasonable choices in C, you might see a good programmer do this without MISRA prompting just because they thought it was a good idea. Some MISRA rules prohibit absolute insanity. Stuff like initializing a variable in one switch clause, then using it in a different clause! Syntactically legal, and obviously a bad idea, nobody actually does that so why write a whole rule to prohibit it? But then a few MISRA rules require something no reasonable C programmer would ever write, and for a good reason, but it also just doesn't really matter. Mostly this is weird style nits, like if your high school English essay was marked by a NYT copy editor and got a D minus because you called it NASCAR not Nascar. You're weird NYT, you're allowed to be weird but that's not my fault and I shouldn't get penalized.
Because MISRA is also insane and has long since bled into a middle manager's dream of a style guide? It would make for a terrible language (one that, ironically, isn't much more "secure", "safe", or "reliable").
> Easy way to use libraries
This is both a blessing and a curse. Seeing the rust docs require 561 crates makes it clear that rust/cargo is headed down the same path as node/npm
Downloaded 561 crates (50.7 MB) in 5.21s (largest was `libsqlite3-sys` at 5.1 MB)
By "rust docs" you seem to mean "docs.rs, the website that hosts documentation for all crates in the Rust ecosystem", which is a little bit different than the impression you give.
It's a whole web service with crates.io webhooks to build and update new documentation every time a crate gets updated; it tracks state in a database, stores data on S3, etc. Obviously if you just want to build docs for one crate yourself you don't need any of that. The "rustdoc" command has a much smaller list of dependencies.
Cargo is 10 years old, and it's been working great. It has already proven that it's on a different path than npm.
* Rust has a strong type system, with good encapsulation and immutability by default, so the library interfaces are much less fragile than in JS. There's tooling for documenting APIs and checking SemVer compat.
* Rust takes stability more seriously than Node.js. Node makes SemVer-major releases regularly, and for a long time had awful churn from unstable C++ API.
* Cargo/crates-io has a good design, and a robust implementation. It had a chance to learn from npm's mistakes, and avoid them before they happened (e.g. it had a policy preventing left-pad from day one).
And the number of deps looks high, but it isn't what it seems. Rust projects tend to split themselves into many small packages, even when they are all part of the same project written by the same people.
Cargo makes all transitive dependencies very visible. In C you depend on pre-built dynamic libraries, so you just don't see what they depend on, and what their dependencies depend on.
For example, Rust's reqwest shows up as 150 transitive dependencies, but it has fewer supported protocols, fewer features, and less code overall than the single dependency that is libcurl.
Almost all of the things that were wrong with npm were self-inflicted: no namespacing of packages by default, allowing packages to be deleted/removed without approval, loose install version ranges, a poor lock file implementation, and so on.
There's an argument to be made that there are too many packages from too many authors to trust everything. I don't find the argument to be too convincing, because we can play what-if games all day long, and if you don't want to use them, you get to write your own.
The issue is micro-packages. Instead of a few layers between the os and your code, you find yourself with a wide dependency tree, with so many projects that it’s impossible to audit.
An alternative of "now everyone who uses a linked list has their own mostly-the-same, but-just-different-enough" list.c and list.h files that need separate auditing (if you care) isn't better.
If list.c is part of the project, it’s easier because you don’t have to hunt down every dependency’s repository. It’s much easier to audit and trust 5 projects/orgs, than 50 different entities.
50 different dependencies covers a _lot_ more behaviour than a list.c. The point would be to audit a list package, and have audited it for all users, rather than all users needing to audit their own.
> Match needs to be exhaustive. When you add something to the enum you were matching, it chokes. This is good.
There’s a reason why ML and Haskell compilers generally have that as a warning by default and not an error: when you need a pipeline of small transformations of very similar languages, the easiest way to go is usually declare one tree type that’s the union of all of them, then ignore the impossible cases at each stage. This takes the problem entirely out of the type system, true, but an ergonomic alternative for that hasn’t been invented, as far as I know. Well, aside from the micropass framework in Scheme, I guess, but that requires exactly the kind of rich macros that Rust goes out of its way to make ugly. (There have been other attempts in the Haskell world, like SYB, but I haven’t seen one that wouldn’t be awkward.)
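The pattern being described translates directly to Rust, where exhaustiveness forces you to write the "impossible" arms explicitly. A sketch with an invented two-pass mini-IR, where a later pass assumes an earlier pass removed a variant:

```rust
// One expression type shared by every pass (names invented for illustration).
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Sugar(Box<Expr>), // only meaningful before desugaring
}

// Pass 1: remove all Sugar nodes.
fn desugar(e: Expr) -> Expr {
    match e {
        Expr::Sugar(inner) => desugar(*inner),
        Expr::Add(a, b) => Expr::Add(Box::new(desugar(*a)), Box::new(desugar(*b))),
        other => other,
    }
}

// Pass 2 "knows" Sugar is gone, so the invariant is checked at runtime
// with `unreachable!` instead of being encoded in the type system.
fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Sugar(_) => unreachable!("desugar removed all Sugar nodes"),
    }
}

fn main() {
    let e = desugar(Expr::Sugar(Box::new(Expr::Add(
        Box::new(Expr::Num(2)),
        Box::new(Expr::Num(3)),
    ))));
    println!("{}", eval(&e));
}
```

Defining a separate post-desugar type per stage would put the invariant in the type system, at the cost of duplicating the tree definition for every pass, which is exactly the ergonomic gap the comment describes.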
Add to this: trait system vs deep OOP.
Really nice macro system.
First class serde.
First class sync/send
Derives!
> First class serde.
What do you mean? `Serialize` and `Deserialize` are not part of std.
It's true, they're not part of the standard library. Nevertheless, it is conventional to provide implementations for things you reasonably expect your users might want to serialize and deserialize. Standard guidance includes telling you to name a feature flag (if you want one for this) serde and not something else so as to reduce extra work for your users.
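The convention looks roughly like this in a library's Cargo.toml (a hypothetical crate; the feature is named `serde`, per the guidance above, so downstream users can enable it uniformly):

```toml
# Cargo.toml of a hypothetical library crate: serde support is
# opt-in, and the feature is named `serde` by ecosystem convention.
[dependencies]
serde = { version = "1", optional = true, features = ["derive"] }

[features]
serde = ["dep:serde"]
```

The library's types then gate their derives on the feature, e.g. `#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]`, so users who don't need serialization don't pay for the dependency.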
Because Rust's package ecosystem is more robust it's less anxious about the strict line between things everybody must have (in the standard library) and things most people want (maybe or maybe not in the standard library). In C++ there's a powerful urge to land everything you might need in the stdlib, so that it's available.
For example the FreeBSD base system includes C++. They're not keen on adding to their base system, so for example they seem disinclined to take Rust, but when each C++ ISO standard bolts in whatever new random nonsense well that's part of C++ so it's in the base system for free. Weird data structure a game dev wants? An entire linear algebra system from Fortran? Comprehensive SI unit systems? It's not up to the FreeBSD gatekeepers, a WG21 vote gets all of those huge requirements into FreeBSD anyway.
"Applying Traits to the Smalltalk Collection Classes", 2003
https://rmod-files.lille.inria.fr/Team/Texts/Papers/Blac03a-...
Traits, as a CS concept, are part of the OOP paradigm.
Traits in Rust are more a variant of Haskell typeclasses than of Smalltalk traits.
The whole FP vs OOP distinction makes little sense these days, as it has mostly been shown that each concept from the one can neatly fit within the other and vice versa.
Traits, as a CS concept, are part of the FP paradigm.
Reverse Uno!
Yes, and someone called Simon Peyton Jones happens to have a talk on how Haskell type classes and classical OOP interfaces interrelate.
Yes, and someone called Gabriella Gonzalez happens to have a blog post on how objects are like comonads:
https://www.haskellforall.com/2013/02/you-could-have-invente...
And someone called Samuel the Bloggy Badger happens to have another blog post on how comonads are really more like neighborhoods:
https://gelisam.blogspot.com/2013/07/comonads-are-neighbourh...
...so it might all just be a scam!
The traits concept mentioned in your link looks very different from Rust traits. It describes something more akin to Java interfaces.
Java interfaces are based on Objective-C protocols.
The only big difference is how implementation is mapped into the trait specification.
And that's the problem isn't it? Rust traits are based on GHC type classes, not at all from either Java or Objective-C or Smalltalk.
Thankfully this fellow Simon Peyton Jones has a talk about how they map onto the OOP paradigm.
"Classes, Jim, But Not as We Know Them — Type Classes in Haskell: What, Why, and Whither"
https://www.microsoft.com/en-us/research/publication/classes...
"Adventures with Types in Haskell"
https://www.youtube.com/watch?v=6COvD8oynmI
https://www.youtube.com/watch?v=brE_dyedGm0
In the first lecture he discusses how Haskell relates to OOP with regard to subtyping and generic polymorphism, and how, although different on the surface, they share those CS concepts in their own ways.
No. Did you read the contents of the links you shared? The name of the slides in your first link is "Classes, Jim, but not as we know them". And let me quote from the slides in your first link:
From slide 40:
> So the links to intensional polymorphism are closer than the links to OOP.
From the first bullet of slide 43:
> No problem with multiple constraints
> f :: (Num a, Show a) => a -> ...
From the second bullet:
> Existing types can retroactively be made instances of new type classes (e.g. introduce new Wibble class, make existing types an instance of it):
> class Wibble a where
> wib :: a -> Bool
> instance Wibble Int where
> wib n = n+1
From slide 46:
> In Haskell you must anticipate the need to act on arguments of various type
> f :: Tree -> Int
> vs
> f’ :: Treelike a => a -> Int
> (in OO you can retroactively sub-class Tree)
From slide 50:
> In Java (ish):
> inc :: Numable -> Numable
> from any sub-type of Numable to any super-type of Numable
> In Haskell:
> inc :: Num a => a -> a
> Result has precisely same type as argument
I appreciate you sharing informative links even though they prove you wrong. I haven't seen this set of slides before but I find it a very good concise explanation of why Haskell classes are not traditional OOP classes or interfaces.
I didn't say they were exactly 100% the same thing, and from those videos starting at 1:01:00, I advise the section "Two approaches to polymorphism", including the overlapped set of features.
There are shades of OOP, and while you're technically correct I think the meaning of my post is clear.
> Rust is basically a bag of sensible choices.
Mostly yes. In C/C++, the defaults are usually in the less safe direction for historical reasons.
It's not about less safe, the C++ defaults are usually just wrong. It's so well known that Phil Nash had to make clear whether he was giving the same talk about how all the defaults are wrong at CppCon or a different talk, otherwise who knows.
For some cases you can make an argument that the right default would have been safer. For mutability, for avoiding deductions, these are both sometimes footguns. But in other cases the right default isn't so much safer as just plain better, the single argument constructors should default to explicit for example, all the functions which qualify as constexpr might as well be constexpr by default, there's no benefit remaining for the contrary.
My favourite wrong default is the memory ordering. The default memory ordering in C++ is Sequentially Consistent. This default doesn't seem obviously wrong, what would have been better? Surely we don't want Relaxed? And we can't always mean Release, or Acquire, and in some cases the combination Acquire-Release means nothing, so that's bad too. Thus, how can Sequentially Consistent be the wrong default? Easy - having a default was wrong. All the options were a mistake, the moment the committee voted they'd already fucked up.
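Rust's atomics are an example of making the call the commenter wants: there is no default ordering at all, so the choice is spelled out at every call site. A minimal sketch:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let counter = AtomicUsize::new(0);
    // There is no `counter.fetch_add(1)`: the ordering argument is
    // mandatory, so Relaxed vs Acquire/Release vs SeqCst is always
    // an explicit, visible decision.
    counter.fetch_add(1, Ordering::Relaxed);
    let v = counter.load(Ordering::SeqCst);
    println!("{v}");
}
```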
> Move by default. If you came from c++, I think this makes a lot of sense.
> Immutable by default.
In C++, these two fight each other. You can't (for the most part) move from something that's immutable.
How does Rust handle this? I assume it drops immutability upon the move, and that doesn't affect optimizations because the variable is unused thereafter?
In Rust, when you move out of a variable, that variable is now effectively out-of-scope; trying to access it will result in a compile error.
Mutability in Rust is an attribute of a location; not a value, so you can indeed move a value from an immutable location into a mutable one, thus "dropping immutability". (But you can only move out of a location that you have exclusive access to -- you can't move out of an & reference, for example -- so the effect is purely local.)
You can't refer to any old location so there is no observable mutation. For example you can't move if a reference exists.
Rust moves aren't quite the same as C++ moves, you can think of them more like a memcpy where the destructor (if there is one) doesn't get run on the original location. This means you can move an immutable object, the object itself doesn't have to do anything to be moved.
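A minimal sketch of "dropping immutability" via a move (the strings are arbitrary):

```rust
fn demo() -> String {
    // `s` is an immutable binding...
    let s = String::from("hello");
    // ...but moving its value into the mutable binding `t` is fine:
    // mutability is a property of the location, not the value, and
    // no heap data is copied.
    let mut t = s;
    t.push_str(", world");
    // println!("{s}"); // compile error: borrow of moved value `s`
    t
}

fn main() {
    println!("{}", demo());
}
```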
So when are we going to get a proper application (not systems) programming language with all these nice things about Rust?
> Testing is part of the code, doesn't seem tacked on like it does in c++.
Or most languages! Many could easily imitate it too. I'd love a pytest mode or similar framework for Python that looked for doc tests and had a 'ModTest' class or something.
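For reference, the Rust version of "testing is part of the code" looks like this; `cargo test` picks it up with no external framework:

```rust
// The test lives in the same file as the code, behind #[cfg(test)]
// so it's compiled out of normal builds. Doc comments can also hold
// runnable examples, which `cargo test` executes as doc tests.
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn adds() {
        assert_eq!(add(2, 3), 5);
    }
}

fn main() {
    println!("{}", add(2, 2));
}
```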
> There's a few large firms that don't use exceptions in c++
Google: https://google.github.io/styleguide/cppguide.html#Exceptions
Just make sure you read the whole darn thing:
> Given that Google's existing code is not exception-tolerant, the costs of using exceptions are somewhat greater than the costs in a new project.
> ...Things would probably be different if we had to do it all over again from scratch.
It's quite ironic to cite the Google C++ Style Guide as somehow supporting the case against exceptions. It's saying the opposite: we would probably use exceptions, but it's too late now, and we can't.
Somehow people miss this...
I can't remember the last time I worked on a C++ code base at any company that used exceptions. This is for good reason; making some types of systems-y code exception-safe can be tricky to get right and comes with a performance cost. For many companies the juice is not worth the squeeze.
> This is for good reason; making some types of systems-y code exception-safe can be tricky to get right and comes with a performance cost. For many companies the juice is not worth the squeeze.
Those types of systems-y code can avoid exceptions if they want. Nobody said exceptions are a panacea. The alternative error models have their own performance and other problems, and those can manifest differently to other types of codebases.
Exceptions in C++ are a footgun. Even the top C++ gurus/leaders know this and are trying to find some new solution.
Thanks for the 1-hour video. Could you link to the timestamp of the strongest argument(s) you see in the video that are relevant in the current discussion (i.e. the existing error models we're talking about in Rust and C++, rather than a hypothetical future one)?
Just from a quick glance: I see he's talking about things like stack overflows and std::bad_alloc. In a discussion like this, those two are probably the worst examples of exceptions. They're the most severe exceptions, and the one the fewest people care to actually catch, and the ones that error codes are possibly the worst at handling anyway. (Do you really want an error returned from push_back?) The most common stuff is I/O errors, permission errors, format errors, etc. which aren't well represented by resource exhaustion at all, much less memory exhaustion.
P.S. W.r.t. "the top C++ gurus/leaders" - Herb is certainly talented, but I should note that the folks who wrote Google's style guide are... not amateurs. They have been involved in the language development and standardization process too. And they're just as well aware of the benefits and footguns as anyone.
The general problem cited with exceptions is that they're un-obvious control flow. The impact it has is clearer in Rust, because of the higher bar it sets for safety/correctness.
As a specific example, and this is something that's been a problem in the std lib before. When you code something that needs to maintain an invariant, e.g. a length field for an unsafe operation, that invariant has to be upheld on every path out of your function.
In the absence of exceptions, you just need to make sure your length is correct on returns from your function.
With exceptions, exits from your function are now any function call that could raise an exception; this is way harder to deal with in the general case. You can add one exception handler to your function, but it needs to deal with fixing up your invariant wherever the exception occurred (e.g. if the fix-up operation that needs to happen is different based on where in your function the exception occurred).
To avoid that you can wrap every call that can cause an exception so you can do the specific cleanup that needs to happen at that point in the function... But at that point what's the benefit of exceptions?
> With exceptions, exits from your function are now any function call that could raise an exception; this is way harder to deal with in the general case. You can add one exception handler to your function [...] To avoid that you can wrap every call [...]
That's the wrong way to handle this though. The correct way (in most cases) is with RAII. See scope guards (std::experimental::scope_exit, absl::Cleanup, etc.) if you need helpers. Those are not "way harder" to deal with, and whether the control flow out of the function is obvious or not is completely irrelevant to them -- in fact, that's kind of their point.
In fact, they're better than both exception handling and error codes in at least one respect: they actually put the cleanup code next to the setup code, making it harder for them to go out of sync.
None of those are easier than not needing to do it at all though; if your function's exits are only where you specify, you can clean up only once on those paths.
> None of those are easier than not needing to do it at all though; if your function's exits are only where you specify, you can clean up only once on those paths.
Huh? I don't get it. This:
stack.push_back(k);
absl::Cleanup _ = [&] { assert(stack.back() == k); stack.pop_back(); };
if (foo()) {
printf("foo()\n");
return 1;
}
if (bar()) {
printf("bar()\n");
return 2;
}
baz();
return 3;
is both easier, more readable, and more robust than:

stack.push_back(k);
if (foo()) {
printf("foo()\n");
assert(stack.back() == k);
stack.pop_back();
return 1;
}
if (bar()) {
printf("bar()\n");
assert(stack.back() == k);
stack.pop_back();
return 2;
}
baz();
assert(stack.back() == k);
stack.pop_back();
return 3;
as well as:

stack.push_back(k);
auto pop_stack = [&] { assert(stack.back() == k); stack.pop_back(); };
if (foo()) {
printf("foo()\n");
pop_stack();
return 1;
}
if (bar()) {
printf("bar()\n");
pop_stack();
return 2;
}
baz();
pop_stack();
return 3;
and unlike the others, it avoids repeating the same code three times.

(Ironically, I missed the manual cleanups before the final returns in the last two examples right as I posted this comment. Edited to fix now, but that itself should say something about which approach is actually more bug-prone...)