Zig – io_uring and Grand Central Dispatch std.Io implementations landed
ziglang.org
364 points by Retro_Dev 2 days ago
It's interesting to see this land while Rust support for io_uring in a mainstream library is lagging. And not for lack of trying; it's just difficult to design a safe, zero-cost, idiomatic Rust abstraction over io_uring's completion-based IO.
I don't want to be the negative guy, but this is news about two unfinished implementations. There is a lot of work needed for this to be considered done. For example, no networking in the GCD version yet. And as these are being implemented, the interface stops being an interface, the vtable keeps growing, and it's just the current snapshot of what's needed by the std implementations.
They acknowledge that at the beginning of the post?
> They are now available to tinker with, by constructing one’s application using std.Io.Evented. They should be considered experimental because there is important followup work to be done before they can be used reliably and robustly:
And then they proceed to list six important pieces of pending work.
I think another way of thinking about this interface is: it’s kind of like an abstraction over Linux system calls and ntdll. It’s naturally gonna have a kind of subset of all the useful calls, with some wrapping.
I don’t see anything wrong with this; it’s kind of how Windows forces developers to use DLLs to access syscalls (the syscall numbers can change), which IMO is a good architectural decision.
There's a relevant open issue[1] here about stack memory optimization. It would be nice to be able to use a [500]u8 in one block and another [500]u8 in another block, and have that contribute only 500 bytes to the stack frame, but Zig can't currently do this.
(The green threads coro stack stuff makes this more important.)
[1]: https://github.com/ziglang/zig/issues/23475#issuecomment-279...
Contrary to the neggies, I'm positive about Zig's effort to iterate & improve.
Right now there is no language that is good at io_uring. There are OK offerings, but nothing really has modern async joy that works with uring.
Whoever hammers out a good solution here is going to have a massive leg up. Rust is amazing in so many ways, but it has been quite a brutal road trying to support io_uring well, and efforts are still a bit primitive, shall we say. If Zig can nail this down, that would be fantastic!!
I would way rather Zig keep learning and keep changing, keep making things new and better, than have it try to appease those who are too conservative for the project, unwilling to accept change and improvement, people focused on stability. It takes a lot of learning to make really good systems, to play with fit and finish. Zig is doing the good work. IMO we ought to be thankful.
It’s surprising to me how much people seem to want async in low level languages. Async is very nice in Go, but the reason I reach for a language like Zig is to explicitly control those things. I’m happily writing a Zig project right now using libxev as my io_uring abstraction.
Using async in low-level languages goes all the way back to the 1960s; it became common in systems languages like Solo Pascal and Modula-2, with Dr. Dobb's Journal and The C/C++ Users Journal running plenty of articles on C extensions for similar purposes.
Hardly anything radical.
When I look at historical cases, it seems different from a case today. If I’m a programmer in the 60s wanting async in my “low level language,” what I actually want is to make some of the highest level languages available at the time even more high level in their IO abstractions. As I understand it, C was a high-level language when it was invented, as opposed to assembly with macros. People wanting to add async were extending the state of the art for high level abstraction.
A language doing it today is doing it in the context of an ecosystem where even higher level languages exist and they have made the choice to target a lower level of abstraction.
There are two things I think a low-level language should have:
1. Standardise on a sync/async agnostic IO interface (or something similar) so you don’t get fragmentation in the ecosystem.
2. Stackless coroutines. These should give the most efficient async IO code, and efficient code is one of the reasons I want to use a low-level language.
I am also positive, but when is the language going to hit a stable, long-term-supported version that won't be touched for a long time?
If you want to compete with C, you can't do so without understanding that its stability, and its developers' ability to focus on mastering its practices, design, limitations, and tooling, have been among its major successes.
> when is the language going to hit a stable very LTS version that won't be touched for a long time?
Is there any reason to be rushing it? Zig isn't languishing without activity. Big things are happening, and it's better in my opinion for them to get the big important stuff right early than it is to get something stable that is harder to change and improve later.
"Competing with C" means innovating, not striving to meet feature parity so it can be frozen in time. It's not as though C has anything terribly exciting going on with it. Let them cook.
There are so many other options available. If that is a concern, zig is not the answer now. Rushing to "LTS" would go completely against the ethos of constant experimentation and improvement that is and has been making zig great. C is 50 years old. Maybe give it a little time...
FWIW, C++ has quite a few async IO libraries that support io_uring. For example, ASIO has had an io_uring backend since 1.21 (late 2021).
I like that Zig takes the freestanding target seriously. And it seems like 0.16 will be even better for freestanding code reusability.
Haven’t looked into macOS internals for a while; happy to see they stuck with GCD, a great middle ground for parallelisation.
I'm not a zig fan myself, but I'm glad to see a substantial project with momentum and vision moving ahead. It's not languishing. It's trying interesting new things. It's striving for incremental gains consistently over time.
There's a lot of hate in these comments. Nobody is forcing you to use Zig, and it's not trying to be "done" right now. In fact, if the only thing they were focusing on was putting a bow on the project to call it "1.0", it probably wouldn't achieve any of its long-term goals of being a mainstream systems programming language. If it takes another five years or fifteen, as long as the project moves forward with the same energy, it's going to be fine.
For a fairly small project that's largely one dude, this is far more than most of us have or could hope to ever achieve ourselves. Give the people putting in the work credit where credit is due.
Every once in a while I build a game engine with a tcp & udp multiplayer server to learn a new language. I started doing this with zig a couple months ago.
It might be because I've done it a few times now, and/or because of the existence of LLMs, but this is the most fun I've had doing it, the most productive I've been, and the engine absolutely rips performance-wise.
Zig makes it very easy to do this kind of lowish-level data-oriented programming, and tbh, I'm hooked. I was using rust for my performance critical services but dancing around the strictness and verbosity of memory management in rust gives me nothing in comparison and just gets in my way. This is partially a skill issue, but life is short and I just want to make fast, well organized software that works.
I feel like it's worthless to keep up with Zig until they reach 1.0.
That thing, right here, is probably going to be rewritten 5 times and what not.
If you are actively using Zig (for some reason?), I guess it's great news, but for the grand majority of the devs in here, it's like an announcement that it's raining in Kuldīga...
So m'yeah. I was following Zig for a while, but I just don't think I am going to see a 1.0 release in my lifetime.
IME Zig's breaking changes are quite manageable for a lot of application types, since most of the breakage these days happens in the stdlib and not in the language. And if you just want to read and write files, the high-level file IO interfaces are nearly identical; they just moved to a different namespace and now require a std.Io pointer to be passed in.
And tbh, I take a 'living' language any day over a language that's ossified because of strict backward compatibility requirements. When updating a 3rd-party dependency to a new major version it's also expected that the code needs to be fixed (except in Zig those breaking changes are in the minor versions, but for 0.x that's also expected).
I actually hope that even after 1.x, Zig will have a strategy to keep the stdlib lean by aggressively removing deprecated interfaces (maybe via separate stdlib interface versions, e.g. `const std = @import("std/v1");`; those versions could be slim compatibility wrappers around a single core stdlib implementation).
> I take a 'living' language any day over a language that's ossified because of strict backward compatibility requirements
Maybe you would, but >95% of serious projects wouldn't. The typical lifetime of a codebase intended for a lasting application is over 15 or 20 years (in industrial control or aerospace, where low-level languages are commonly used, codebases typically last for over 30 years), and while such changes are manageable early on, they become less so over time.
You say "strict" as if it were out of some kind of stubborn principle, when in fact backward compatibility is one of the things people who write "serious" software want most. Backward compatibility is so popular that at some point it's hard to find any feature that is in high-enough demand to justify breaking it. Even in established languages there's always a group of people who want something badly enough that they don't mind breaking compatibility for it, but they're almost always a rather small minority. Furthermore, a good record of preserving compatibility in the past makes a language more attractive even for greenfield projects written by people who care about backward compatibility, who, in "serious" software, make up the majority. When you pick a language for such a project, the expectation of how the language will evolve over the next 20 years is a major concern on day one (a startup might not care, but most such software is not written by startups).
> The typical lifetime of a codebase intended for a lasting application is over 15 or 20 years (in industrial control or aerospace).
Either those applications are actively maintained, or they aren't. Part of the active maintenance is to decide whether to upgrade to a new compiler toolchain version (e.g. when in doubt, "never change a running system"), old compiler toolchains won't suddenly stop working.
FWIW, trying to build a 20 or 30 year old C or C++ application in a modern compiler also isn't exactly trivial, depending on the complexity of the code base (especially when there's UB lurking in the code, or the code depends on specific compiler bugs to be present - e.g. changing anything in a project setup always comes with risks attached).
> Part of the active maintenance is to decide whether to upgrade to a new compiler toolchain version
Of course, but you want to make that as easy as you can. Compatibility is never binary (which is why I hate semantic versioning), but you should strive for the greatest compatibility for the greatest portion of users.
> FWIW, trying to build a 20 or 30 year old C or C++ application in a modern compiler also isn't exactly trivial
I know that well (especially for C++; in C the situation is somewhat different), and the backward compatibility of C++ compilers leaves much to be desired.
You could fix versions, and probably should. However, willful disregard of prior interfaces encourages developers' code to follow suit.
It’s not like Clojure or Common Lisp, where decades-old software still runs today, mostly unmodified, any changes mainly coming from code written for a different environment or even compiler implementation. This is largely because they take breaking user code way more seriously. A lot of code written in these languages seems to have a similar timelessness too. Software can be “done”.
I would also add that Rust manages this very well. Editions let you do breaking changes without actually breaking any code, since any package (crate) needs to specify the edition it uses. So when in 30 years you're writing code in Rust 2055, you can still import a crate that hasn't been updated since 2015 :)
Unfortunately, editions don't allow breaking changes in the standard library, because Rust code written in different editions must be allowed to interoperate freely even within a single build. The resulting constraint is roughly similar to that of never breaking ABI in C++.
> The resulting constraint is roughly similar to that of never ever breaking ABI in C++.
No, not even remotely. ABI-stability in C++ means that C++ is stuck with suboptimal implementations of stdlib functions, whereas Rust only stabilizes the exposed interface without stabilizing implementation details.
> Unfortunately editions don't allow breaking changes in the standard library
Surprisingly, this isn't true in practice either. The only thing that Rust needs to guarantee here is that once a specific symbol is exported from the stdlib, that symbol needs to be exported forever. But this still gives an immense amount of flexibility. For example, a new edition could "remove" a deprecated function by completely disallowing any use of a given symbol, while still allowing code on an older edition to access that symbol. Likewise, it's possible to "swap out" a deprecated item for a new item by atomically moving the deprecated item to a new namespace and making the existing item an alias to that new location, then in the new edition you can change the alias to point to the new item instead while leaving the old item accessible (people are exploring this possibility for making non-poisoning mutexes the default in the next edition).
Only because Rust is a source only language for distribution.
One business domain that Rust currently doesn't have an answer for is selling commercial SDKs with binary libraries, which is exactly the kind of customer that gets pissed off when C and C++ compilers break ABIs.
Microsoft mentions this among the adoption issues they are having with Rust (see talks from Victor Ciura), and while they can work around it with DLLs and COM/WinRT, it isn't optimal; after all, Rust's safety gets reduced to the OS ABI for DLLs and COM.
I'm not expecting to convince you of this position, but I find it to be a feature, not a bug, that Rust is inherently hostile to companies whose business models rely on tossing closed-source proprietary blobs over the wall. I'm fairly certain that Andrew Kelley would say the same thing about Zig. Give me the source or GTFO.
In the end it is a matter of which industries the Rust community sees as relevant to gain adoption, and which ones the community is happy that Rust will never take off.
Do you know one industry that likes very much tossing closed-source proprietary blobs over the wall?
Game studios, and everyone that works in the games industry providing tooling for AAA studios.