Apple announces Foundation Models and Containerization frameworks, etc
apple.com | 855 points by thm 8 days ago
There's a different thread if you want to wax about Liquid Glass etc. [1], but there are some really interesting new improvements here for Apple developers in Xcode 26.
The new Foundation Models framework for generative language models looks very Swift-y and nice for Apple developers. And it's local and on-device. In the Platforms State of the Union they showed some really interesting sample apps using it to generate different itineraries in a travel app.
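For flavor, here's a minimal sketch of what guided generation looks like, going off the session videos - the names (FoundationModels, @Generable, @Guide, LanguageModelSession) are from the WWDC material, but treat the exact shapes as approximate:

    import FoundationModels

    // Sketch based on the WWDC 2025 sessions; exact API shapes may differ.
    @Generable
    struct Itinerary {
        @Guide(description: "A short, catchy title for the trip")
        var title: String
        var days: [String]
    }

    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Plan a 3-day itinerary for Kyoto",
        generating: Itinerary.self
    )
    print(response.content.title)

The nice part is that you get a typed Swift struct back instead of having to parse JSON out of raw model output.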
The other big thing is vibe coding coming natively to Xcode through ChatGPT (and other) model integration. Some things that make this look like a nice quality-of-life improvement for Apple developers are the way it tracks iterative changes with the model so you can roll back easily, and the way it gives the model context on your codebase. It seems to be a big improvement over the previous, very limited GPT integration with Xcode, and the first time Apple developers have a native version of some of the more popular vibe-coding tools.
Their 'drag a napkin sketch into Xcode and get a functional prototype' is pretty wild for someone who grew up writing [myObject retain] in Objective-C.
Are these completely ground-breaking features? I think it's more what Apple has historically done, which is not to be first into a space but to really nail the UX. At least, that's the promise - we'll have to see how these tools perform!
> And it's local and on device.
Does that explain why you don't have to worry about token usage? The models run locally?
> You don’t have to worry about the exact tokens that Foundation Models operates with, the API nicely abstracts that away for you [1]
I have the same question. Their Deep dive into the Foundation Models framework video is nice for seeing code using the new `FoundationModels` library but for a "deep dive", I would like to learn more about tokenization. Hopefully these details are eventually disclosed unless someone else here already knows?
[1] https://developer.apple.com/videos/play/wwdc2025/301/?time=1...
I guess I'd say "mu": from a dev perspective, you shouldn't ever care about tokens - if your inference framework isn't abstracting that away for you, your first task would be to patch it so it does.
To the parent: yes, this is for local models, so insofar as worrying about tokens implies financial cost, yes.
Ish - it always depends how deep in the weeds you need to get. Tokenisation impacts performance, both speed and results, so details can be important.
I maintain a llama.cpp wrapper on everything from web to Android, and I can't quite see what more you'd learn by getting individual token IDs from the API, beyond what you'd get from wall-clock time and checking their vocab.
I don’t really see a need for token IDs alone, but you absolutely need per-token logprob vectors if you’re trying to do constrained decoding
Interesting point - my first reaction was "why do you need logprobs? We use constrained decoding for tool calls and don't need them"... which is actually false! We need to throw out the disallowed tokens' logprobs and then find the highest-logprob token that meets the constraints.
Haha yeah. I've seen you mention the llama.cpp wrapper elsewhere; it sounds cool! I've worked enough with vLLM and sglang to get angry at xgrammar, which I believe has some common ancestry with the GGML stack (GBNF, if I'm not mistaken, which I may be). The constrained decoding part is as simple as you'd expect: it applies a bitmask to the logprobs during the "logit processing" and continues as normal.
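To make that concrete, here's a rough sketch of the masking step (hypothetical helper, greedy decoding for simplicity - real engines mask the whole logit tensor and then sample as usual):

    // Hypothetical sketch: pick the highest-logit token the grammar still
    // allows, which is equivalent to masking disallowed logits to -inf
    // and taking the argmax.
    func sampleConstrained(logits: [Float], allowed: Set<Int>) -> Int? {
        var best: Int? = nil
        var bestLogit = -Float.infinity
        for (tokenID, logit) in logits.enumerated() where allowed.contains(tokenID) {
            if logit > bestLogit {
                bestLogit = logit
                best = tokenID
            }
        }
        return best
    }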
Do we have the vocab? That's part of the point here. Does it take images? How are they tokenised?
The direction software engineering is going in with this whole "vibe coding" thing is so depressing to me.
I went into this industry because I grew up fascinated by computers. When I learned how to code, it was about learning how to control these incredible machines. The joy of figuring something out by experimenting is quickly being replaced by just slamming it into some "generative" tool.
I have no idea where things go from here but hopefully there will still be a world where the craft of hand writing code is still valued. I for one will resist the "vibe coding" train for as long as I possibly can.
To be meta about it, I would argue that thinking "generatively" is a craft in and of itself. You are setting the conditions for work to grow rather than having top-down control over the entire problem space.
Where it gets interesting is being pushed into directions that you wouldn't have considered anyway rather than expediting the work you would have already done.
I can't speak for engineers, but that's how we've been positioning it in our org. It's worth noting that we're finding GenAI less practical in design-land for pushing code or prototyping, but insanely helpful for research and discovery work.
We've been experimenting with more esoteric prompts to really challenge the models and ourselves.
Here's a tangible example: Imagine you have an enormous dataset of user-research, both qual and quant, and you have a few ideas of how to synthesize the overall narrative, but are still hitting a wall.
You can use a prompt like this to really get the team thinking:
"What empty spaces or absences are crucial here? Amplify these voids until they become the primary focus, not the surrounding substance. Describe how centering nothingness might transform your understanding of everything else. What does the emptiness tell you?"
or
"Buildings reveal their true nature when sliced open. That perfect line that exposes all layers at once - from foundation to roof, from public to private, from structure to skin.
What stories hide between your floors? Cut through your challenge vertically, ruthlessly. Watch how each layer speaks to the others. Notice the hidden chambers, the unexpected connections, the places where different systems touch.
What would a clean slice through your problem expose?"
LLMs have completely changed our approach to research and, I would argue, reinvigorated an alternate craftsmanship in the ways we study our products and learn from our users.
Of course the onus is on us to pick apart the responses for any interesting directions that are contextually relevant to the problem we're attempting to solve, but we are still in control of the work.
Happy to write more about this if folks are interested.
Personally I still love the craft of software. But there are times where boilerplate really kills the fun of setting something up, to take one example.
Or like this week I was sick and didn't have the energy to work in my normal way and it was fun to just tell ChatGPT to build a prototype I had in mind.
We live in a world of IKEA furniture - yet people still desire handmade furniture, and people still enjoy and take deep satisfaction in making them.
All this to say I don't blame you for being dismayed. These are fairly earth shattering developments we're living through and if it doesn't cause people to occasionally feel uneasy or even nostalgia for simpler times, then they're not paying attention.
I share your frustration. But for better or worse, computer language will eventually be replaced by human language. It's inevitable :(
This sounds like a boomer trying to resist using Google in favor of encyclopedias.
Vibe coding can be whatever you want to make of it. If you want to be prescriptive about your instructions and use it as a glorified autocomplete, then do it. You can also go at it from a high-level point of view. Either way, you still need to code review the AI code as if it was a PR.
Is any AI assisted coding === Vibe Coding now?
Coding with an AI can be whatever one can achieve; however, I don't see how vibe coding relates to autocomplete: with autocomplete you type a bit of code that a program (AI or not) completes. In vibe coding you almost don't interact with the editor, perhaps only for copy/paste or some corrections. I'm not even sure about the manual "corrections" part if we take Simon Willison's definition [0], which you're not forced to, obviously; if there are contradictory views, I'll be glad to read them.
0 > If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book—that's using an LLM as a typing assistant
https://arstechnica.com/ai/2025/03/is-vibe-coding-with-ai-gn...
(You may also consider revising your first paragraph to HN standards, because while the content is pertinent, the form sounds like a youngster trying to demo iKungFu on his iPad to Jackie Chan.)
Vibe coding is pretty broad and is a spectrum
> Vibe coding (or vibecoding) is an approach to producing software by using artificial intelligence (AI), where a person describes a problem in a few natural language sentences as a prompt to a large language model (LLM) tuned for coding. The LLM generates software based on the description, shifting the programmer's role from manual coding to guiding, testing, and refining the AI-generated source code.[1][2][3]
This sounds like someone who doesn't actually know how to code, doesn't enjoy the craft, and probably only got into the industry because it pays well and not because they actually enjoy it.
I enjoy it, but I enjoy what the product enables me to do more than the process; it's a means to an end for me, and the process is great, but it gets tedious after more than a decade of it.
I also like cooking, but I like eating more than the actual cooking. It's a means to an end, and I don't need to always enjoy the cooking process.
No, that's what separates vibe coding from the glorified autocomplete. As originally defined, vibe coding doesn't include a final review of the generated code - just a quick spot check, and then moving on to the next prompt.
The definition is broad and can include testing. Refining requires you to review the code for iterations.
> Vibe coding (or vibecoding) is an approach to producing software by using artificial intelligence (AI), where a person describes a problem in a few natural language sentences as a prompt to a large language model (LLM) tuned for coding. The LLM generates software based on the description, shifting the programmer's role from manual coding to guiding, testing, and refining the AI-generated source code.[1][2][3]
Karpathy's definition of vibe coding as I understood it was just verbally directing an agent based on vibes you got from the running app without actually seeing the code.
https://en.wikipedia.org/wiki/Vibe_coding
> Vibe coding (or vibecoding) is an approach to producing software by using artificial intelligence (AI), where a person describes a problem in a few natural language sentences as a prompt to a large language model (LLM) tuned for coding. The LLM generates software based on the description, shifting the programmer's role from manual coding to guiding, testing, and refining the AI-generated source code.[1][2][3]
You can take an augmented approach, a sort of capability fusion, or you can spam regenerate until it works.
No, this sounds like an IC resisting becoming a manager.
Not sure if this is supposed to be an insult... Should I probably lean into management at some point? Sure. But do I still enjoy coding and am I still quite capable (without AI assistance)? Yup.
So as long as I can, and as long as I still enjoy it, you'll find me writing code. Lucky to get paid to do this.
Oh, it's not. I'm an IC totally unwilling to become a manager. Some people just enjoy coding.
I might be wrong, but I guess this will only work on iPhone 16 devices and the iPhone 15 Pro - which drastically limits your user base, so you would still have to use an online API for most apps. I was hoping they'd provide a free AI API on their private cloud for other devices, even if it also ran small models.
If you start writing an app now, by the time it's polished enough to release, the iPhone 16 will already be a year-old phone, and there will be plenty of potential customers.
If your app is worthwhile, and gets popular in a few years, by that time iPhone 16 will be an old phone and a reasonable minimum target.
Skate to where the puck is going...
Developers could be adding a feature utilizing LLMs to their existing app that already has a large user base. That could be a matter of a few weeks from idea to shipping the feature. While competitors use API calls to just "get things done", you're trying to figure out how to serve both iPhone 16 and older users, and potentially Android/web users if your product is also available elsewhere. I don't see how an iPhone-16-only feature helps anyone's product development, especially when the quality remains to be seen.
Basically this - network effects are huge. People will definitely buy hardware if it solves a problem for them - so many people bought BlackBerrys just for BBM.
Exactly - it can take at least a couple of years for big/important apps to adopt new iOS/macOS features. By then the iPhone 16 will be quite common.
Drastically limits your user base for like 3 years.
Phones still get replaced often, and the people who don’t replace them are the type of people who won’t spend a lot of money on your app.
If the new foundation models are on device, does that mean they’re limited to information they were trained on up to that point?
Or do have the ability to reach out to the internet for up to the moment information?
In addition to the context you provide, the API lets you programmatically declare tools the model can call, which is how it can reach out for live information.
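For example, a tool declaration looks roughly like this, based on the WWDC sample code (treat the names as approximate; the call body is where you'd hit the network, MapKit, a database, etc.):

    import FoundationModels

    // Rough shape of a tool, per the WWDC sessions; names are approximate.
    struct FindPointsOfInterest: Tool {
        let name = "findPointsOfInterest"
        let description = "Finds points of interest near a landmark"

        @Generable
        struct Arguments {
            @Guide(description: "The landmark to search near")
            var landmark: String
        }

        func call(arguments: Arguments) async throws -> ToolOutput {
            // Fetch live data here; the model decides when to invoke this.
            ToolOutput("Cafes near \(arguments.landmark): ...")
        }
    }

    let session = LanguageModelSession(tools: [FindPointsOfInterest()])

So the model itself is frozen at its training cutoff, but a tool can fetch up-to-the-moment information.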
I hoped for a moment that "Containerization Framework" meant that macOS itself would be getting containers. Running Linux containers and VMs on macOS via virtualization is already pretty easy and has many good options. If you're willing to use proprietary applications to do this, OrbStack is the slickest, but Lima/Colima is fine, and Podman Desktop and Rancher Desktop work well, too.
The thing macOS really painfully lacks is not ergonomic ways to run Linux VMs, but actual, native containers-- macOS containers. And third parties can't really implement this well without Apple's cooperation. There have been some efforts to do this, but the most notable one is now defunct, judging by its busted/empty website[1] and deleted GitHub organization[2]. It required disabling SIP to work, back when it at least sort-of worked. There's one newer effort that seems to be alive, but it's also afflicted with significant limitations for want of macOS features[3].
That would be super useful and fill a real gap, meeting needs that third-party software can't. Instead, as wmf has noted elsewhere in these comments, it seems they've simply "Sherlock'd" OrbStack.
--
1: https://macoscontainers.org/
> The thing macOS really painfully lacks is not ergonomic ways to run Linux VMs, but actual, native containers-- macOS containers
Linux container processes run on the host kernel with extra sandboxing. The container image is an easily sharable and runnable bundle.
macOS .app bundles are kind of like container images.
You can sign them to ensure they are not modified, and put them into the “registry” (App Store).
The Swift ABI ensures it will likely run against future macOS versions, like the Linux system APIs.
There is a sandbox system to restrict file and network access. Any started processes inherit the sandbox, like containers.
One thing missing is fine grained network rules though - I think the sandbox can just define “allow outbound/inbound”.
Obviously “.app”s are not exactly like container images , but they do cover many of the same features.
You're kind of right. But at the same time they're nowhere close. The beauty of Linux containerization is that processes can be wholly ignorant of the fact that they aren't actually running as root. The containers get what appears, to them, to be the whole OS to themselves.
You don't get that in macOS. It's more of a jail than a sandbox. For example, as an app you can't, as far as I know, shell out and install homebrew and then invoke homebrew and install, say, postgres, and run it, all without affecting the user's environment. I think that's what people mean when they say macOS lacks native containers.
Good point, apps are missing the docker layered file system to isolate container file writes.
It's not that macoscontainers is dead - the site is now https://darwin-containers.github.io
Read more about it here - https://github.com/darwin-containers
The developer is very responsive.
One of Apple's biggest value props to other platforms is environment integrity. This is why their containerization / automation story is worse than e.g. Android.
Hard same. I wonder if this does anything different from the existing projects that would let one use the WSL2 approach, where containerd runs in the Linux micro-VM. A key component is the RPC framework - that seems to be how OrbStack's `macctl` command does it. I see mention of gRPC, sandboxes, and containers in the binfmt_misc handling code, which is promising:
https://github.com/apple/containerization/blob/d1a8fae1aff6f...
What would these be useful for?
Providing isolated environments for CI machines and other build environments!
If the sandboxing features a native containerization system relied on were also exposed via public APIs, those could also potentially be leveraged by developer tools that want to have/use better sandboxing on macOS. Docker and BuildKit have native support for Windows containers, for instance. If they could also support macOS the same way, that would be cool for facilitating isolated macOS builds without full-fat VMs. Tools like Dagger could then support more reproducible build pipelines on macOS hosts.
It could also potentially provide better experiences for tools like devcontainers on macOS as well, since sharing portions of your filesystem to a VM is usually trickier and slower than just sharing those files with a container that runs under your same kernel.
For many of these use cases, Nix serves very well, giving "just enough" isolation for development tasks, but not too much. (I use devenv for this at work and at home.) But Nix implementations themselves could also benefit from this! Nix internally uses a sandbox to help ensure reproducible builds, but the implementation on macOS is quirky and incomplete compared to the one on Linux. (For reasons I've since forgotten, I keep it turned off on macOS.)
Clean build environments for CICD workflows, especially if you're building/deploying many separate projects and repos. Managing Macs as standalone build machines is still a huge headache in 2025.
What's wrong with Cirrus CLI and Tart built on Apple's Virtualization.framework?
Tart is great! This is probably the best thing available for now, though it runs into some limitations that Apple imposes on VMs. (Those limitations perhaps hint at why Apple hasn't implemented this - it seems they don't really want people to be able to rent out many slices of Macs.)
One clever and cool thing Tart actually does that sort of relates to this discussion is that it uses the OCI format for distributing OS images!
(It's also worth noting that Tart is proprietary. Some users might prefer something that's either open-source, built-in, or both.)
Same thing containers/jails are useful for on Linux and *BSD, without needing to spin up an entirely separate kernel to run in a VM to handle it.
MacOS apps can already be sandboxed. In fact it's a requirement to publish them to the Mac App Store. I agree it'd be nice to see this extended to userland binaries though.
You can't really sandbox development dependencies in any meaningful way. I want to throw everything and the kitchen sink into one container per project, not install a specific version of Python, Node, Perl or what have you globally/namespaced/whatever. Currently there's no good solution to that problem, save perhaps for a VM.
uv doesn't provide strong isolation; a package you install using uv can attempt to delete random files in your home folder when you import it, for example.
People use containers server side in Linux land mostly... Some desktop apps (flatpak is basically a container runtime) but the real draw is server code.
Do you think people would be developing and/or distributing end user apps via macOS containers?
I might misunderstand the project, but I wish there was a secure way for me to execute github projects. Recently, the OS has provided some controls to limit access to files, etc. but I'd really like a "safe boot" version that doesn't allow the program to access the disk or network.
The firewall tools are too clunky (and IMHO unreliable).
Orchestrating macOS only software, like Xcode, and software that benefits from Environment integrity, like browsers.
i.e., you want to build a binary for macOS from your Linux machine. Right now it's possible, but you still need a macOS license and have to jump through hoops. If you could containerize macOS, you'd create a container and then compile your program inside it.
No, that's not at all how that would work. You're not building a macOS binary natively under a Linux kernel.
Okay, the AI stuff is cool, but that "Containerization framework" mention is kinda huge, right? I mean, native Linux container support on Mac could be a game-changer for my whole workflow, maybe even making Docker less of a headache.
FWIW, here are the repos for the CLI tool [1] and backend [2]. Looks like it is indeed VM-based container support (as opposed to WSLv1-style syscall translation or whatever):
Containerization provides APIs to:
[...]
- Create an optimized Linux kernel for fast boot times.
- Spawn lightweight virtual machines.
- Manage the runtime environment of virtual machines.
[1] https://github.com/apple/container
[2] https://github.com/apple/containerization

I'm kinda ignorant about the current state of Linux VMs, but my biggest gripe with VMs is that OS kernels kind of assume they have access to all the RAM the hardware has - unlike the reserve/commit scheme processes use for memory.
Is there a VM technology that can make Linux aware that it's running in a VM, and be able to hand back the memory it uses to the host OS?
Or maybe could Apple patch the kernel to do exactly this?
Running Docker in a VM always has been quite painful on Mac due to the excess amount of memory it uses, and Macs not really having a lot of RAM.
It's still a problem for containers-in-VMs. You can in theory do something with either memory ballooning or (more modern) memory hotplugging, but the dance between the OS and the hypervisor takes a relatively long time to complete, and Linux just doesn't handle it well (eg. it inevitably places unmovable pages into newly reserved memory, meaning it can never be unplugged). We never found a good way to make applications running inside the VM able to transparently allocate memory. You can overprovision memory, and hypervisors won't actually allocate it on the host, and that's the best you can do, but this also has problems since Linux tends to allocate a bunch of fixed data structures proportional to the size of memory it thinks it has available.
That's called memory ballooning and is supported by KVM on Linux. Proxmox, for example, can do it. It does need support on both the host and the guest.
> Is there a VM technology that can make Linux aware that it's running in a VM, and be able to hand back the memory it uses to the host OS?
Isn't this an issue of the hypervisor? The guest OS is just told it has X amount of memory available, whether this memory exists or not (hence why you can overallocate memory for VMs), whether the hypervisor will allocate the entire amount or just what the guest OS is actually using should depend on the hypervisor itself.
> or just what the guest OS is actually using should depend on the hypervisor itself.
How can the hypervisor know which memory the guest OS is actually using? It might have used some memory in the past and now no longer needs it, but from the POV of the hypervisor it might as well be used.
This is a communication problem between hypervisor and guest OS, because the hypervisor manages the physical memory but only the guest OS known how much memory should actually be used.
A generic VMM cannot, but these are specific VMMs, so they can likely load dedicated kernel-mode drivers into the well-known guest to get that information back out.
The driver would still be part of the guest.
If you control both the VMM and the guest through a driver you have an essentially infinite latitude to set up communications between the two: virtual devices, iommu, interrupts, ...
Just looked it up - the answer is "balloon drivers": special drivers loaded by the guest OS that can request and return unused pages to the host hypervisor.
Apparently Docker for Mac and Windows uses these, but in practice Docker containers tend to grow quite large in terms of memory, so I'm not sure how well it works; it certainly overallocates compared to running Docker natively on a Linux host.
The short answer is yes, Linux can be informed to some extent but often you still want a memory balloon driver so that the host can “allocate” memory out of the VM so the host OS can reclaim that memory. It’s not entirely trivial but the tools exist, and it’s usually not too bad on vz these days when properly configured.
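For the Apple case specifically, the balloon device is exposed directly in Virtualization.framework. A minimal sketch (these are real class names, but the sizes are arbitrary and the rest of the VM configuration is omitted):

    import Virtualization

    let config = VZVirtualMachineConfiguration()
    config.memorySize = 8 * 1024 * 1024 * 1024  // 8 GiB ceiling for the guest
    config.memoryBalloonDevices =
        [VZVirtioTraditionalMemoryBalloonDeviceConfiguration()]
    // ... cpuCount, bootLoader, storage, validate() omitted ...

    let vm = VZVirtualMachine(configuration: config)
    // At runtime, inflating the balloon asks the guest to hand pages back:
    if let balloon = vm.memoryBalloonDevices.first
        as? VZVirtioTraditionalMemoryBalloonDevice {
        balloon.targetVirtualMachineMemorySize = 4 * 1024 * 1024 * 1024
    }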
It's one reason I don't like WSL2. When you compile something that needs 30 GB of RAM, the only thing you can do is terminate the WSL2 VM to get that RAM back.
Since late 2023, WSL2 has supported "autoMemoryReclaim", nominally still experimental, but works fine for me.
add:

    [experimental]
    autoMemoryReclaim=gradual

to your .wslconfig
See: https://learn.microsoft.com/en-us/windows/wsl/wsl-config
I just noticed the addition of the container cask when I ran "brew update".
I chased the package’s source and indeed it’s pointing to this repo.
You can install and use it now on the latest macOS (not 26). I just ran “container run nginx” and it worked alright it seems. Haven’t looked deeper yet.
There’s some problem with networking: if you try to run multiple containers, they won’t see each other. Could probably be solved by running a local VPN or something.
WSLv1 never supported a native docker (AFAIK, perhaps I'm wrong?)
That said, I'd think Apple would actually be much better positioned to try the WSL1 approach. I'd assume macOS is a lot closer to Linux than Windows is.
This doesn't look like WSL1. They're not running Linux syscalls to the macOS kernel, but running Linux in a VM, more like the WSL2[0] approach.
[0] https://devblogs.microsoft.com/commandline/announcing-wsl-2/...
In the end they'd probably run into the same issues that killed WSL1 for Microsoft - the Linux kernel has an enormous surface area and lots of pretty subtle behaviour, particularly around the stuff that's most critical for containers, like cgroups and user namespaces. There isn't an externally usable test suite that could be used to validate Microsoft's implementation of all these interfaces, because... well, why would there be?
Maintaining a working duplicate of the kernel-userspace interface is a monumental and thankless task, and especially hard to justify when the work has already been done many times over to implement the hardware-kernel interface, and there's literally Hyper-V already built into the OS.
Yeah, it probably would be feasible to dust off the FreeBSD Linux compatibility layer[1] and turn that into native support for Linux apps on Mac.
I think Apple’s main hesitation would be that the Linux userland is all GPL.
If they built it as a kernel extension, it would probably be okay with the GPL.
There’s a huge opportunity for Apple to make kernel development for xnu way better.
Tooling right now is a disaster — very difficult to build a kernel and test it (eg in UTM, etc.).
If they made this better and took more of an OSS openness posture like Microsoft, a lot of incredible things could be built for macOS.
I’ll bet a lot of folks would even port massive parts of the kernel to rust for them for free.
My impression is they’re basically trying to end third party kernel development; macOS has been making it progressively more difficult to use kexts and has been providing alternate toolkits for doing things that used to require drivers.
It's impossible to have "native" support for Linux containers on macOS, since the technology inherently relies on Linux kernel features. So I'm guessing this is Apple rolling out their own Linux virtualization layer (same as WSL). Probably still an improvement over the current mess, but if they just support LXC and not Docker then most devs will still need to install Docker Desktop like they do today.
Apple has had a native hypervisor for some time now. This is probably a baked in clone of something like https://mac.getutm.app/ which provides the stuff on top of the hypervisor.
In case you're wondering, the Hypervisor.framework C API is really neat and straightforward:

1. Creating and configuring a virtual machine:

    hv_vm_create(HV_VM_DEFAULT);

2. Allocating guest memory:

    void *memory = mmap(...);
    hv_vm_map(memory, guest_physical_address, size,
              HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);

3. Creating virtual CPUs:

    hv_vcpu_create(&vcpu, HV_VCPU_DEFAULT);

4. Setting registers:

    hv_vcpu_write_register(vcpu, HV_X86_RIP, 0x1000); // instruction pointer
    hv_vcpu_write_register(vcpu, HV_X86_RSP, 0x8000); // stack pointer

5. Running guest code:

    hv_vcpu_run(vcpu);

6. Handling VM exits (on x86 the exit reason is read from the VMCS):

    uint64_t exit_reason;
    hv_vmx_vcpu_read_vmcs(vcpu, VMCS_RO_EXIT_REASON, &exit_reason);
One of the reasons OrbStack is so great is because they implement their own hypervisor: https://orbstack.dev/
Apple’s stack gives you low-level access to ARM virtualization, and from there Apple has high-level convenience frameworks on top. OrbStack implements all of the high-level code themselves.
How does it compare to apple’s hv?
Better filesystem support (https://orbstack.dev/blog/fast-filesystem) and memory utilization (https://orbstack.dev/blog/dynamic-memory)
Using a hypervisor means just running a Linux VM, like WSL2 does on Windows. There is nothing native about it.
Native Linux (and Docker) support would be something like WSL1, where Windows kernel implemented Linux syscalls.
Hyper-V is a type 1 hypervisor, so Linux and Windows are both running as virtual machines but they have direct access to hardware resources.
It's possible that Apple has implemented a similar hypervisor here.
Surely if the Windows kernel can be taught to respond to those syscalls, XNU can be taught even more easily. But AIUI the Windows kernel already had a concept of "personalities" from back when they were trying to integrate OS/2, so the zero-to-one for XNU could be the huge lift - not the syscalls part specifically.
XNU similarly has a concept of "flavors" and uses FreeBSD code to provide the BSD flavor. Theoretically, either Linux code or a compatibility layer could be implemented in the kernel in a similar way. The former won't happen due to licensing.
> the Windows kernel already had a concept of "personalities" from back when they were trying to integrate OS/2 so that zero-to-one for XNU could be a huge lift, not the syscalls part specifically
XNU is modular, with its BSD servers on top of Mach. I don’t see this as being a strong advantage of NT.
Exactly. So it wouldn't necessarily be easier. NT is almost a microkernel.
Yep. People consistently underestimate the great piece of technology NT is, it really was ahead of its time. And a shame what Microsoft is doing with it now.
Was it ahead? I am not sure. There was lots of research on microkernels at the time, and NT was a good compromise between a monolithic kernel and a microkernel. It was an engineering product of its age - a considerably good one. It is still the best popular kernel today, not because it is the best possible with today's resources, but because nobody else cares about core OS design anymore.
I think it is the Unix side that decided to bury its head in the sand. We got Linux. It is free (of charge and of licensing). It supported files, basic drivers, and sockets. It got commercial support for servers. It was all Silicon Valley needed for startups. Anything else is a cost. So nobody cared. Most of the open source microkernel research slowly died after Linux. There is still some in the L4 family.
Now we are overengineering our stacks to get closer to the microkernel capabilities that Linux lacks, using containers. I don't want to say it is ripe for disruption, because it is hard and, again, nobody cares (except for some network and security equipment, but that's a tiny fraction).
> Was it ahead? I am not sure.
You say this, but then proceed to state that it had a very good design back then informed by research, and still is today. Doesn't that qualify? :-)
NT brought a HAL, proper multi-user ACLs, subsystems in user mode (that alone is amazing, even though they sadly never really gained momentum), preemptive multitasking. And then there's NTFS, with journaling, alternate streams, and shadow copies, and heaps more. A lot of it was very much ahead of UNIX at the time.
> nobody else cares about core OS design anymore.
Agree with you on that one.