Windows: Prefer the Native API over Win32
codeberg.org | 59 points by nikbackm 4 days ago
> Comparing the comprehensive Win32 API reference against the incidentally documented Native APIs, it's clear which one Microsoft would prefer you use. The native API is treated as an implementation detail, whilst core parts of Windows' backwards compatibility strategy are implemented in the Windows subsystem.
> A general-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.
Zig clearly doesn't actually care that much about building robust and reusable software if they're going to forgo Microsoft's decades-long backwards compatibility functionality for the dubious gains of using bare-metal APIs.
They dismiss all the very real issues caused by their approach as problems for others to solve (Wine, antiviruses, users, even Microsoft). That's such a weird level of hubris.
I think the only place where avoiding win32 is desirable is to write drivers, but zig already has support for some level of bare-metal development and I'm sure a package can provide shims to all ntdll utilities for that use-case.
I think it's pretty clear that they're doing it because it's a more fun challenge. As a low-level developer myself, I agree that using the lowest-level API possible is fun, especially if it's poorly documented and you have to try to preemptively mitigate breakage! But this is no mentality to have when you're writing a language ecosystem...
The Zig maintainers clearly think that keeping up with the undocumented native API is less headache than using the documented but notoriously inelegant win32 API.
This might very well be a good idea. Microsoft is not going to change a vital piece of their OS just on a whim. I would wager that even if they wanted to, they would not be able to do so that easily. A large organization maintaining a large software with a billion users just does not move that fast.
lately it feels like zig is attempting to speed run irrelevance. which is a shame.
I don't know, everyone here seems plenty okay to tolerate worse levels of instability from Linux binaries. :)
im too much of a mac user to understand what this references :p
Take a random Linux binary which does anything non-trivial (has a GUI, does system monitoring, etc.), try running it on a different distribution from 3 years earlier without a packaging system, and tell me how it goes.
Zig is proposing the opposite problem: future versions of Windows won't run even trivial Zig programs from today.
I can tell you that old Linux binaries run just fine on current distros.
Looking at how many times you repeated your misunderstanding in this thread it's clear that, not only do you not understand the solution, you don't understand the problem either.
> I can tell you that old Linux binaries run just fine on current distros.
Simple: I don't believe you. Grab this copy of Firefox from 2022, https://www.firefox.com/en-US/firefox/96.0/releasenotes/, and run it on a modern distribution in 2026. If you fail, my point is made.
Does Debian Stable count?
The page didn't include a download link, so I found it here: https://ftp.mozilla.org/pub/firefox/releases/96.0/linux-x86_...
And yes, it runs perfectly fine on Debian 13.
I dunno about Firefox (newer crypto libs won't work), but I ran a GIMP compiled in 2018 on a 2025 distro.
Why specifically FF? Software without TLS does run just fine 5 years later on 5 year newer distros.
> > Won't this get flagged by anti-virus scanners as suspicious?
> Unfortunately, yes. We consider this a problem for the anti-virus scanners to solve.
I don't think the anti-virus scanners consider Zig important enough, or even know about it. They will not be the ones experiencing problems. Having executables quarantined, and similar problems, will fall on Zig developers and users of their software. That seems like a major drawback of using Zig.
Yup. This sentiment expresses quite clearly how Zig has no significant understanding or interest in being a language used for widely distributed applications, like video games.
There's no way I can ship a binary that flags the scanners. This wouldn't be the first language I've avoided because it has this unfortunate behaviour.
And expecting virus scanner developers to relax their rules for Zig is a bit arrogant. Some virus scanners started flagging software built with Nim simply because Nim became popular with virus authors as a means to thwart scanners!
Yeah, I had this problem when shipping go binaries on Windows. Antivirus vendors really do not care that your program regularly shows up as a false positive due to their crappy heuristics, even if you have millions of users.
Have you tried code-signing with an EV certificate? If so, did it help? Asking for a friend.
Statistically notable improvement, but it didn't help a whole lot.
I always upload a copy to https://www.virustotal.com to help combat the false positives.
It was really bad a couple of years ago because anything wrapped in Inno Setup kept being flagged. Now maybe one or two vendors flag it; Bkav Pro and CrowdStrike Falcon are always the dominant culprits.
>> Unfortunately, yes. We consider this a problem for the anti-virus scanners to solve.
In reality it will be a problem for the developers to solve, and the solution will be to use a different language lol
> the worst case scenario is really mild: A new version of windows comes out, breaking ntdll compatibility. Zig project adds a fix to the std lib. Application developer recompiles their zig project from source, and ships an update to their users.
The ~only good thing that programmers have achieved in the past ~60 years has been Windows stability.
Creating a popular programming language, and then making programs written in it not run on newer versions of Windows, is just something else. I so hate this.
Was "robust, optimal and reusable" always "run an older Windows on your newer Windows to run Zig software"?
... this is just Linux binaries. It's humorous to me that we literally do exactly this, for Linux, with even less stability, but heaven forbid we do something approaching that on Windows despite the snobbery against Windows.
>> Microsoft are free to change the Native API at will, and you will be left holding both pieces when things break.
> [...] the worst case scenario is really mild: A new version of windows comes out, breaking ntdll compatibility. Zig project adds a fix to the std lib. Application developer recompiles their zig project from source, and ships an update to their users.
That assumes the application developer will continue to maintain it until the end of time.
Also, "the fix" would mean developers wanting to support earlier Windows versions would need to use an older std library? Or is the library going to have runtime checks to see what Windows build it's running on?
> Microsoft are free to change the Native API at will,...
But they won't, because if there is one thing Microsoft has always been extremely good at and cared about, it is backward compatibility. And changing the Native API would break a ton of existing software, because even though it is undocumented, it is very widely used.
you are confusing the ntdll interface (which is undocumented and subject to change), and win32 (which is stable, mostly)
they tell you not to use ntdll, and say they will change it whenever they want
and they have in the past
(they have had to moderate this policy with "containers", but it's still what they say)
Actually they do change the native API quite a bit. Not so much in minor releases, but in major ones.
Sounds like the iOS model: your app only exists as long as you are alive and able to pay $99/year. This mentality is a nightmare for software preservation.
The one thing that really benefits from using the NT Native API over Win32 is listing files in a directory. You get to use a 64KB buffer to receive directory listing results, while Win32's FindNextFile hands them back one at a time. The 64KB buffer size means fewer system calls.
(Reading the MFT is still faster. Yes, you need admin for that)
I'm really struggling to see the pros of this:
> Performance - using the native API bypasses the standard Windows API, thus removing a software layer, speeding things up.
But the article cites no benchmarks
> Power - some capabilities are not provided by the standard Windows API, but are available with the native API.
Makes sense when you are doing something that needs that power, but that makes more sense as an exception to preferring win32 than as a general reason to prefer native.
> Dependencies - using the native API removes dependencies on subsystem DLLs, creating potentially smaller, leaner executables.
Linking win32 is a minuscule cost. (Unless you have a benchmark to show me...)
> Flexibility - in the early stages of Windows boot, native applications (those dependent on NtDll.dll only) can execute, while others cannot.
Is Zig being used for such applications? If so, why are the calls that the document says will be kept on win32 not an issue?
Anyone who has some experience with native APIs knows that a standard library should never rely on unstable APIs. Ntdll is not "stable", in the sense that Microsoft can change it at any time, since they expect everyone to use kernel32. It's questionable that they referenced a random book on this topic claiming that ntdll is more performant than kernel32, which is doubtful. There are some specific cases where this is true (the NTFS stuff), but in general it's not, at least not to a significant degree. A standard library should never do this; it might break binaries for no reason other than making a cool blog post. I, as a developer, can choose to use ntdll, but a standard library never should.
https://news.ycombinator.com/item?id=25997506 https://github.com/golang/go/issues/68678
Is there an official stance on whether ntdll is stable? Obviously they're not going to change things arbitrarily since applications depend on it, but I'm wondering if there is a guarantee like the linux syscall interface or how you can run a win32 application compiled in 2004 on Win11.
It's partially stable.
Basically any thing documented on msdn in the API docs is considered stable.
Such as: https://learn.microsoft.com/en-us/windows/win32/api/winternl...
Indeed. Anything documented has a function wrapper. `NtCreateFile` is a function wrapper for the syscall number, so any user-mode code that has `NtCreateFile` instead of directly loading the syscall number 0x55 will be stable. The latter might not. In fact, it is not; the number has increased by 3 since Windows XP[1].
One could probably produce some sort of function pointer loader library with these tables, but at that point... Why not just use the documented APIs?
[1]: https://github.com/j00ru/windows-syscalls/blob/8a6806ac91486...
Interesting, some functions explicitly mention:
> [NtQuerySystemTime may be altered or unavailable in future versions of Windows. Applications should use the GetSystemTimeAsFileTime function.] [0]
So it does seem like a bad idea for a standard library.
[0]: https://learn.microsoft.com/en-us/windows/win32/api/winternl...
Honestly, this sounds like a future headache that would otherwise go unnoticed unless the programmer is dealing with porting or binding over source code meant for older Windows systems to Zig (or supporting older systems in general). Eventually it might result in a bunch of people typing out blogposts venting their frustrations, and the creation of tutorials and shims for hooking to Win32 instead of the Zig standard library with varying results. Which is fine, I suppose. Legacy compiler targets are a thing.
This is already a problem with Linux binaries for systems that don't have a recent enough Glibc (unless the binaries themselves don't link to it and do syscalls directly).
Go famously tried to bypass macOS's libc and directly use the underlying syscall ABI, which is unstable, and then a macOS update came out and broke everything, which taught them the error of their ways (https://github.com/golang/go/issues/17490). I wonder if this will happen to Zig too.
I don't know enough about Windows programming to give an opinion, but judging by the sentiment here, maybe they should let the developer choose based on some comptime flag or something, and maintain two versions
For people not familiar with Windows development, another name for the NT native API is "the API that pretty much every document on Windows programming tells you not to use". It's like coding to the Linux syscall interface instead of libc.
Linux syscall interface is actually stable and can easily be targeted. It’s BSDs (and Mac OS) that force everyone to link to only libc.
> It's like ...
Considering the level of the API, yes. But it is the total opposite when you compare a bit deeper: Linux has a famous rule, "WE DO NOT BREAK USERSPACE!", e.g. [1].
One thing that is amusing about the prevalence of advanced anti-cheat in Windows gaming is that it's actually causing said APIs/ABIs to undergo ossification. A good data point is the invention of Syscall User Dispatch^1 on Linux, which basically lets a program install a handler for syscalls originating from certain regions of memory. I do not know how usable this is in practice, admittedly, but I think the fact it was contributed at all speaks to the growing need.
^1 https://docs.kernel.org/admin-guide/syscall-user-dispatch.ht...
With the crucial difference that Linux places high value on syscall interface binary compatibility, while the NT native API is not guaranteed to be stable in any way.
A bit more comparable is OpenBSD where applications are very much expected to only use libc wrappers, which threw a wrench into the works for the Go runtime.
"Every document" notwithstanding, Native API is very widely used in practice and generally considered stable.
If in doubt, try and find examples of its breakage, semantic changes, etc.
Yeah, I know go has had issues because they subvert libc themselves in similar fashion. I wonder how this will turn out.
Go backed out of their strategy on MacOS and started using libc (libsystem?), because when Apple says something is internal and may change without notice, they really mean it. It may be a better risk with Microsoft, but it’s still a risk.
I think they had to revert back to libc on macOS/iOS because those have syscall interfaces that truly are not stable (and golang found that out the hard way). I wonder if they had to do the same on BSDs because of syscall filtering.
Indeed, OpenBSD recently added hardening measures and started restricting the generic syscall interface to libc.
Except that, unlike with the Linux syscall interface and like with almost every other OS out there, ABI compatibility is an accident, not a guarantee.
> It's like coding to the Linux syscall interface instead of libc.
The right thing to do? I don't see why I would want to use libc.
Nope, in UNIX proper syscalls and libc overlap, that is how C and UNIX eventually evolved side by side, in a way one could argue UNIX is C's runtime, and hence why most C deployments also expect some level of compatibility with UNIX/POSIX.
Linux is the exception offering its guts to userspace with guarantees of stability.
On Windows, the stability guarantees are opposite to that of Linux. The kernel ABI is not guaranteed to be stable, whereas the Win32 ABI is.
And frankly, the Windows way is better. On Linux, the 'ABI' for nearly all user-mode programs is not the kernel's ABI but rather glibc's (plus the variety of third-party libraries, because Win32 has a massive surface area and is an all-in-one API). Now, glibc's ABI constantly changes, so linking against a newer glibc (almost certainly the 'host' glibc, because it is almost impossible to supply a different 'target' glibc without Docker) will result in a program that doesn't run on older glibc. So much for Torvalds' 'don't break userspace'.
Not so for a program compiled for 'newer' Win32; all that matters are API compatibilities. If one only uses old-hat interfaces that are documented to be present on Windows 2000, one can write and compile one's code on Windows 11, and the executable will run on the former with no issues. And vice versa, actually.
The Win32 ABI is also just a wrapper on the native API, which is only stable in practice, but not officially according to any Microsoft documentation.
Glibc is userspace seen from the perspective of the Linux kernel.
A lot of the native API is considered stable these days. The actual signals aren't, but the wrappers in ntdll are.
> I don't see why I would want to use libc
To make your code portable? Linux-only software is even worse than Windows-only
Let me narrow down the scope here. I am a Rust developer, developing software that will run on my Linux server. Why would I want to use libc? Why does Rust standard library use libc? Zig, for example, doesn't.
Yikes. Are they going to rename the language to "cyg" also? Does not inspire confidence.
Why not use both DLLs? Prefer win32 wherever possible and use the lower level APIs only if absolutely necessary. Benchmark after you have figured this out. Performance is probably not a thing at this level of abstraction.
What makes you think they haven't benchmarked?
Here's one fun example from following development on Zulip: advapi32.dll loads bcrypt.dll, which loads bcryptprimitives.dll. bcryptprimitives.dll runs an internal test suite every time it's loaded into any process. So if you can avoid loading advapi32.dll, your process will start faster.
Are you talking about the cipher tests that are run when any cipher library is loaded?
There's a reason they do that and it's not for shits and giggles. You could find yourself with broken ciphers and not know it.
Skipping the cipher (or hash, not sure now) tests seems like a good way to get exploited.
Is there a source for this? My Google- and GitHub-fu turns up nothing.
He might be talking about the cipher tests that respected cryptography libs run on initialisation to verify integrity.
Skipping those seems like a really bad idea.
Fool's errand. Apps built with this will have to be maintained forever (vs the apps from Win 9x which still work in Windows 11).
This is a terrible idea! _Maybe_, _maybe_ using only the documented APIs with only the documented parameters.
Unfortunately it makes too many false assumptions about interoperability between Win32 and the underlying native API that aren't true.
For example (and the Go runtime does this, much to my chagrin), querying the OS version via the native API always gives you "accurate" version information without needing to link a manifest into your application. Unfortunately that lack of manifest will still cause many Win32 APIs above the native layer to drop into a compatibility mode, creating a fundamental inconsistency between what the application thinks the OS capabilities are versus which Win32 subsystem behaviours the OS thinks it should be offering.
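For context, the manifest being discussed is a small XML resource embedded in (or shipped next to) the executable; its supportedOS entries are what stop Win32 from dropping into a compatibility mode. A minimal sketch, where the GUID shown is Microsoft's published identifier for Windows 10/11 support:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
    <application>
      <!-- Declares Windows 10/11 support. Without a matching entry,
           version-reporting Win32 APIs lie about the OS version. -->
      <supportedOS Id="{8e0f7a12-bfb3-4fe8-b9a5-48fd50a15a9a}"/>
    </application>
  </compatibility>
</assembly>
```

An application that skips the manifest but queries the version through the native API sees the real version while Win32 behaves as if on an older OS, which is exactly the inconsistency described above.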
For me this is too much. I wish Zig all the best, but decisions like this make me want to jump off this sinking ship.
> While this can happen, we have not (yet) been affected by any changes in the Win32 -> Native layers.
Frankly this is dumb. Zig hasn't been around long enough to have even seen any changes, so using this as a reason is just plain dumb.
The view that, if windows ever changes, the code must be recompiled is a naive view one would expect from a child, not from a group of experienced devs.