Anthropic acquires Bun
bun.com | 1335 points by ryanvogel 7 hours ago
A lot of people seem confused about this acquisition because they think of Bun as a Node.js-compatible bundler/runtime and just compare it to Deno or npm. But I think it's a really smart move if you consider where Bun has been pushing lately, which is a kind of cloud-native, self-contained runtime (S3 API, SQL, streaming, etc.). For an agent like Claude Code this trajectory is really interesting: you're creating a runtime where your agent can work inside cloud services as fluently as it currently does with a local filesystem. Claude will be able to leverage these capabilities to extend its reach across the cloud and add more value in enterprise use cases.
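The "cloud-native runtime" part is easy to miss if you haven't followed Bun lately. Going from memory of the docs (so treat the exact names as approximate; the object key and table below are made up), the built-in S3 and Postgres APIs look roughly like this:

    import { s3, sql } from "bun";

    // Treat an S3 object like a lazy local file; credentials come from the
    // usual AWS environment variables. Nothing is fetched until you read it.
    const obj = s3.file("reports/latest.json");
    const report = await obj.json();
    await obj.write(JSON.stringify({ ...report, checkedAt: Date.now() }));

    // Built-in Postgres client, tagged-template style.
    const rows = await sql`SELECT id, total FROM orders WHERE total > ${100}`;
    console.log(rows.length);

No AWS SDK, no pg driver, no bundler config, which is exactly the kind of surface an agent can use without dragging in dependencies.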
Yea, they just posted this a few days ago:
https://www.anthropic.com/engineering/advanced-tool-use
They discussed how running generated code is better for context management in many cases. The AI can generate code to retrieve, process, and filter the data it needs rather than doing it in-context, thus reducing context needs. Furthermore, if you can run the code right next to the server where the data is, it's all that much faster.
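A minimal sketch of the idea: instead of pulling thousands of rows into the conversation, the model emits a small script, the harness runs it, and only the summary string goes back into context (the endpoint and field names below are made up for illustration):

    // Hypothetical model-generated code; only the console.log output
    // returns to the model's context.
    const res = await fetch("https://internal.example.com/api/orders");
    const orders: { id: string; total: number; status: string }[] = await res.json();

    const refunded = orders.filter((o) => o.status === "refunded");
    console.log(
      JSON.stringify({
        refundCount: refunded.length,
        refundTotal: refunded.reduce((sum, o) => sum + o.total, 0),
      })
    );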
I see Bun like a Skynet: if it can run anywhere, the AI can run anywhere.
Java can run anywhere too
It’s relevant enough that I feel I can roll out this bash.org classic…
<Alanna> Saying that Java is nice because it works on all OS's is like saying that anal sex is nice because it works on all genders
EDIT: someone has (much to my joy) made an archive of bash.org so here is a link[1], but I must say I’m quite jealous of today’s potential 1/10,000[2] who will discover bash.org from my comment!
I didn't discover it from scratch, but I was a big fan when it was alive and kicking. I went there from time to time for a mood booster, so I was very sad when I found that the original was gone. Thanks a lot for sharing that bash-org-archive.com exists; what great fun going down this memory lane.
That's hilarious. My comment is mostly a joke, but also trying to say that "runs everywhere" isn't that impressive anymore.
wait - how do you search the quotes??
I don’t think there is a search function, I got the exact wording from a web search (I think “bash Java anal”, arguably a dangerous search!) and then after submitting I wondered if there is an archive of the quotes.
Java is not for sale.
Java can be depended on without buying anything.
Java's price is your time, which you'll need tons of since Java is highly verbose. The ultimate enterprise language.
Not in the browser, and no – webassembly doesn't count, otherwise you can say the same about Go and others.
Java did run in the browser once... it was embedded directly in the browser, and there was also NPAPI.
you could also run java with js if you are brave enough https://kreijstal.github.io/java-tools/
May I ask, what is this obsession with targeting the browser? I've also noticed a hatred of k8s here, and while I truly understand it, I'd take the complication of managing infrastructure over frontend fads any day.
HN has a hatred of K8s? That’s new to me
This is a site for startups. They have no business running k8s, in fact, many of the lessons learned get passed on from graybeards to the younger generation along those lines. Perhaps I'm wrong! I'd love to talk shop somewhere.
Why doesn’t wasm count?
Compile step makes things more complicated.
run code anywhere hamstrung by 90s syntax and hidden code indirections
Haven’t checked in on Java in a while?
From what I gather everyone is still stuck on Java 8 so no need to check?
This is absolutely untrue. Code from JDK 8 runs fine on JDK 25 (just released LTS). It is true that if you did something silly that locks you into certain dependency versions, you may be stuck, but this is not the majority of applications.
Yea - if you want a paranoidly-sandboxed, instant-start, high-concurrency environment, not just on beefy servers but on resource-constrained/client devices as well, you need experts in V8 integration shenanigans.
Cloudflare Workers had Kenton Varda, who had been looking at lightweight serverless architecture at Sandstorm years ago. Anthropic needs this too, for all the reasons above. Makes all the sense in the world.
Bun isn't based on V8, it's JavaScriptCore, but your point still stands.
you left out the best part...what happened to Kenton? He looked at lightweight serverless architecture..and then what?
Isn't what you're describing just a set of APIs with native bindings that the LLM can call?
I'm not sure I understand why it's necessary to even couple this to a runtime, let alone own the runtime?
Can't you just do it as a library and train/instruct the LLM to prefer using that library?
It's fine, but why is JS a good language for agents? I mean, sure, it's faster than Python, but wouldn't something that compiles to native be much better?
JS has the fastest, most robust and widely deployed sandboxing engines (V8, followed closely by JavaScriptCore which is what Bun uses). It also has TypeScript which pairs well with agentic coding loops, and compiles to the aforementioned JavaScript which can run pretty much anywhere.
Note that "sandboxing" in this case is strictly runtime sandboxing - it's basically like having a separate process per event loop (as if you ran separate Node processes). It does not sandbox the machine context in which it runs (i.e. it's not VM-level containment).
When you say runtime sandboxing, are you referring to JavaScript agents? I haven't worked all that much with JavaScript execution environments outside of the browser so I'm not sure about what sandboxing mechanics are available.
https://nodejs.org/api/vm.html
Bun claims this feature is for running untrusted code (https://bun.com/reference/node/vm), while Node says "The node:vm module is not a security mechanism. Do not use it to run untrusted code." I'm not sure whom to believe.
It's interesting to see the difference in how both treat the module. It feels similar to a realm, which makes me lean by default toward not trusting it for untrusted code execution.
It looks like Bun also supports Shadow Realms which from my understanding was more intended for sandboxing (although I have no idea how resources are shared between a host environment and Shadow Realms, and how that might potentially differ from the node VM module).
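For what it's worth, the ShadowRealm proposal only lets primitives and callables cross the boundary. Roughly how sharing works per the TC39 proposal (I haven't checked how Bun implements it):

    // Requires a runtime that ships the ShadowRealm proposal.
    const realm = new ShadowRealm();

    // Primitives come back as-is.
    const three = realm.evaluate("1 + 2"); // 3

    // Functions come back as wrapped callables; calling one runs code inside
    // the realm, and only primitives/callables can cross back out.
    const double = realm.evaluate("(n) => n * 2");
    console.log(three, double(21)); // 3 42

    // realm.evaluate("({})") would throw: plain objects can't cross the boundary.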
Doesn't Bun use JavaScriptCore though? Perhaps their emulation, or rather their implementation, leans more towards security.
The reference docs are auto generated from node’s TypeScript types. node:vm is better than using the same global object to run untrusted code, but it’s not really a sandbox
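Right, and the classic demonstration of why the Node docs call it "not a security mechanism" is the constructor-chain escape that shows up in every vm/vm2 writeup:

    import { createContext, runInContext } from "node:vm";

    const sandbox = {};
    createContext(sandbox);

    // The sandbox object was created in the host realm, so its prototype chain
    // leads back to the host's Function constructor, and from there to the
    // real `process` object.
    const hostProcess = runInContext(
      "this.constructor.constructor('return process')()",
      sandbox
    );
    console.log(hostProcess === process); // true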
Running it in a chroot or a scoped-down namespace is all you need most of the time anyway.
> It also has TypeScript which pairs well with agentic coding loops
The language syntax has nothing to do with it pairing well with agentic coding loops.
Considering how close TypeScript and C# are syntactically, C#'s speed advantage over JS (among many other things) would make C# the main language for building agents. It is not, and that's because the early SDKs were JS and Python.
TypeScript is probably generally a good LLM language because:
- static types
- tons and tons of training data
Kind of tangent but I used to think static types were a must-have for LLM generated code. But the most magical and impressively awesome thing I’ve seen for LLM code generation is “calva backseat driver”, a vscode extension that lets copilot evaluate clojure expressions and generally do REPL stuff.
It can write MUCH cleaner and more capable code, using all sorts of libraries that it’s unfamiliar with, because it can mess around and try stuff just like a human would. It’s mind blowingly cool!!
> C#'s speed advantage over JS among many other things would make C# the main language
Nobody cares about this; JS is plenty fast for LLM needs. If maximum performance were necessary, you'd be better off using Go because of its fast compiler and better performance.
> Nobody cares about this
And that was my point. The choice of using JS/TS for LLM stuff was made for us based on initial wave of SDK availabilities. Nothing to do with language merits.
It's widespread and good enough. The language just doesn't matter that much in most cases
This is one of those "in theory, there's no difference between theory and practice; in practice, there is" issues.
In theory, quality software can be written in any programming language.
In practice, folks who use Python or JavaScript as their application programming language start from a position of just not caring very much about correctness or performance. Folks who use languages like Java or C# do. And you can see the downstream effects of this in the difference in the production-grade developer experience and the quality of packages on offer in PyPI and npm versus Maven and NuGet.
As a developer who switches between Java, Python, and TypeScript every day, I think this is a fairly myopic opinion. Being siloed into one language for long enough tends to bring out our tribalistic tendencies; tread carefully.
I've seen codebases of varying quality in nearly every language, "enterprise" and otherwise. I've worked at a C# shop and it was no better or worse than the java/kotlin/typescript ones I've worked at.
You can blame the "average" developer in a language for "not caring", but more likely than not you're just observing the friction imposed by older packaging systems. Modern languages are usually coupled with package managers that make it trivial to publish language artifacts to package hubs, whereas Gradle, for example, is its own brand of hell just to get your code to build.
That's not a fair comparison. In your example, you're talking about the average of developers in a language. In this situation, it's specific developers choosing between languages. Having the developers you already have choose language A or B makes no difference to their code quality (assuming they're proficient with both)
> In practice, folks who use Python or JavaScript as their application programming language start from a position of just not caring very much about correctness or performance. Folks who use languages like Java or C# do.
Nonsense. The average Java/C# dev is an enterprise monkey who barely knows anything outside of their grotesque codebase.
> production-grade developer experience
Please, Maven and Gradle are crimes against humanity. There's a special place reserved for Gradle creators in hell for sure.
The "production-grade" developers should ditch their piece of shit, ancient "tooling" and just copy uv/go/dart/rust tooling.
Chill out buddy. You're going to pop a vein here.
A typical backend developer using C#/Java is likely solving more complicated problems and has all the concerns of an enterprise system to worry about and maintain.
Dismissing a dev or a system because it is enterprisy is a weak argument to make against a language. A language being used a lot in an enterprise to carry the weight of the business is a sign the language is actually great and reliable enough.
> Nonsense. The average Java/C# dev is an enterprise monkey who barely knows anything outside of their grotesque codebase.
Netflix is Java. Amazon is mostly Java. Some of the biggest open source projects in the world are Java. Unity and Godot both use C# for scripting. I don't know where you're getting the impression that Java and C# are somehow only for "enterprise monkeys who barely know anything outside of their grotesque codebase".
Exactly! In the Java ecosystem, your intelligence is measured by how elaborate an interface hell you can conjure just to do CRUD.
Could also be a way to expand the customer base for Claude Code from coding assistant to vibe coding, à la Replit creating a hosted app. CC working more closely with Bun could make all that happen much faster:
> Our default answer was always some version of "we'll eventually build a cloud hosting product.", vertically integrated with Bun’s runtime & bundler.
>Claude will be able to leverage these capabilities to extend its reach across the cloud and add more value in enterprise use cases
100%. Even more robust if paired with an overlay network which provides identity-based S3 access (rather than IP-address/network-based). Otherwise the server may not have access to the S3/cloud resource, at least for many enterprises with S3 behind a VPN/Direct Connect.
Ditto for cases where you want the agent/client side to hit S3 directly, bypassing the server, and the agent/client may not have a permitted IP in the firewall ACL, or be on the VPN/WAN.
That's a really cool use case and seems super helpful. Working cloud-native is a chore sometimes, having to fiddle with internal APIs and ACL/permissions issues.
The writeup makes it sound like an acquihire, especially the "what changes" part.
ChatGPT is feeling the pressure of Gemini [0]. So it's a bit strange for Anthropic to be focusing hard on its javascript game. Perhaps they see that as part of their advantage right now.
[0] https://timesofindia.indiatimes.com/technology/tech-news/goo...
What the ? I am either too old, or stupid, or both, to understand this. I'd expect this bullshit from Consultants.
This matches some previous comments around LLMs driving adoption of programming languages or frameworks. If you ask Claude to write a web app, why not have it use your own framework, that it was trained on, by default?
Users are far more likely to ask it about shadcn, or material, than about node/deno/bun. So, what is this about?
Currently Claude etc. can interact with services (including AWS) via MCPs.
What the user you're replying to is saying is that the Bun acquisition looks silly if you see Bun as just a dev tool for Node. However, if you look at their binding work for services like S3[0], the LLM will be able to interact with cloud services directly (lower latency, tighter integration, simplified deployment).
That doesn't make sense either. Agents already have access to MCPs and Tools. Your example is solved by having an S3 wrapper as a set of tools.
An AI company scoops up frontend tech. Do you really think it was because of s3?
As a commandline end user who prefers to retrieve data from the www as text-only, I see Deno and Bun as potential replacements (for me, not necessarily for anyone else) for the so-called "modern" browser in those rare cases where I need to interpret Javascript^1
At present the browser monstrosity is used to (automatically, indiscriminately) download into memory and run Javascripts from around the web. At least with a commandline web-capable JS runtime monstrosity the user could in theory exercise more control over what scripts are downloaded and if and when to run them. Perhaps more user control over permissions to access system resources as well (cf. corporate control)
1. One can already see an approach something like this being used in the case of
https://github.com/yt-dlp/yt-dlp/wiki/EJS
where a commandline JS runtime is used without the need for any graphics layer (advertising display layer)
> At the time of writing, Bun's monthly downloads grew 25% last month (October, 2025), passing 7.2 million monthly downloads. We had over 4 years of runway to figure out monetization. We didn't have to join Anthropic.
I believe this completely. They didn't have to join, which means they got a solid valuation.
> Instead of putting our users & community through "Bun, the VC-backed startups tries to figure out monetization" – thanks to Anthropic, we can skip that chapter entirely and focus on building the best JavaScript tooling.
I believe this a bit less. It'll be nice to not have some weird monetization shoved into bun, but their focus will likely shift a bit.
> They didn't have to join, which means they got a solid valuation.
Did they? I see a $7MM seed round in 2022. Now to be clear that's a great seed round and it looks like they had plenty of traction. But it's unclear to me how they were going to monetize enough to justify their $7MM investment. If they continued with the consultancy model, they would need to pay back investors from contracts they negotiate with other companies, but this is a fraught way to get early cashflow going.
Though if I'm not mistaken, Confluent did the same thing?
They had a second round that was $19m in late 2023. I don't doubt for a second that they had a long runway given the small team.
I don't like all of the decisions they made for the runtime, or some of the ways they communicate over social media / their company culture, but I do admire how well-run the operation seems to have been from the outside. They've done a lot with (relatively) little, which is refreshing in our industry. I don't doubt they had a long runway either.
Thanks I scrolled past that in the announcement page.
With more runway comes more investor expectations too though. Some of the concern with VC backed companies is whether the valuation remains worthwhile. $26mm in funding is plenty for 14 people, but again the question is whether they can justify their valuation.
Regardless happy for the Oven folks and Bun has been a great experience (especially for someone who got on the JS ecosystem quite late.) I'm curious what the structure of the acquisition deal was like.
> They didn't have to join, which means they got a solid valuation.
This isn't really true. It's more about who wanted them to join. Maybe it was Anthropic who really wanted to take over Bun/hire Jarred, or it was Jarred who got sick of Bun and wanted to work on AI. I don't really know any details about this acquisition, and I assume it's the former, but acquihires are also done for other reasons than "it was the only way".
Yeah, now they are part of Anthropic, who haven't figured out monetization themselves. Yikes!
I'm a user of Bun and an Anthropic customer. Claude Code is great and it's definitely where their models shine. Outside of that, Anthropic sucks: their apps and web are complete crap, borderline unusable, and the models are just meh. I get it, CC's head probably got to make a power play here given that his department is towing the company, and his secret sauce, according to marketing from Oven, was Bun. In fact VSCode's Claude backend is distributed as a bun-compiled binary exe, and the guy has been featured on the front page of the Bun website for at least a week or so. So they bought the kid the toy he asked for.
Anthropic needs urgently, instead, to acquire a good team behind a good chatbot and make something minimally decent. Then make their models work for everything else as well as they do with code.
Anthropic is still a new company and so far they seem "friendly". That being said, I still feel this can go either way.
> I believe this a bit less.
They weren't acquired and paid just to keep building tooling as before while completely ignoring monetization until the end of time.
Maybe they were though. Maybe Anthropic just wanted to bring a key piece of the stack in-house.
Given the worries about LLM focused companies reaching profitability I have concerns that Bun's runway will be hijacked... I'd hate for them to go down with the ship when the bubble pops.
This is my fear. It's one thing to lose a major sponsor. It's another to get cut due to a focus on profitability later down the line.
I work on Bun.
Happy to answer any questions
I'm sort of surprised to see that you used Claude Code so much. I had a vague idea that "Zig people" were generally "Software You Can Love" or "Handmade Software Movement" types, about small programs, exquisitely hand-written, etc, etc. And I know Bun started with an extreme attention to detail around performance.
I would have thought LLM-generated code would run a bit counter to both of those. I had sort of carved the world into "vibe coders" who care about the eventual product but don't care so much about the "craft" of code, and people who get joy out of the actual process of coding and designing beautiful abstractions and data structures and all that, which I didn't really think worked with LLM code.
But I guess not, and this definitely causes me to update my understanding of what LLM-generated code can look like (in my day to day, I mostly see what I would consider as not very good code when it comes from an LLM).
Would you say your usage of Claude Code was more "around the edges", doing things like writing tests and documentation and such? Or did it actually help in real, crunchy problems in the depths of low level Zig code?
I am not your target with this question (I don't write Zig) but there is a spectrum of LLM usage for coding. It is possible to use LLMs extensively but almost never ship LLM generated code, except for tiny trivial functions. One can use them for ideation, quick research, or prototypes/starting places, and then build on that. That is how I use them, anyway
Culturally I see pure vibe coders as intersecting more with entrepreneurfluencer types who are non-technical but trying to extend their capabilities. Most technical folks I know are fairly disillusioned with pure vibe coding, but that's my corner of the world, YMMV
> Culturally I see pure vibe coders as intersecting more with entrepreneurfluencer types who are non-technical but trying to extend their capabilities. Most technical folks I know are fairly disillusioned with pure vibe coding, but that's my corner of the world, YMMV
Anyone who has spent time working with LLMs knows that the LinkedIn-style vibecoding where someone writes prompts and hits enter until they ship an app doesn't work.
I've had some fun trying to coax different LLMs into writing usable small throwaway apps. It's hilarious in a way to see the contrast between what an experienced developer sees coming out of LLMs and what the LinkedIn and Twitter influencers are saying. If you know what you're doing and you have enough patience you really can get an LLM to do a lot of the things you want, but it can require a lot of handholding, rejecting bad ideas, and reviewing.
In my experience, the people pushing "vibecoding" content are influencers trying to ride the trend. They use the trend to gain more followers, sell courses, get the attention of a class of investors desperate to deploy cash, and other groups who want to believe vibecoding is magic.
I also consider them a vocal minority, because I don't think they represent the majority of LLM users.
FWIW, Copilot's licence only explicitly permits using its suggestions the way you say.
That puts everyone using the generated outputs into a sort of unofficial grey market, even when using first-party tools. Which is weird.
I'll give you a basic example where it saved me a ton of time to vibe code instead of doing it myself, and I believe it would hold true for anyone.
Creating ~50 different types of calculators in JavaScript. Gemini can bang out in seconds what would take me far longer (and it's reasonable at basic Tailwind-style front-end design to boot). A large amount of work smashed down to a couple of days of cumulative instruction + testing in my spare time. It takes far longer to think of how I want something to function in this example than it does for Gemini to successfully produce it. This is a use case where something like Gemini 3 is exceptionally capable, and far exceeds the capability requirements needed to produce a decent outcome.
Do I want my next operating system vibe coded by Gemini 3? Of course not. Can it knock out front-end JavaScript tasks trivially? Yes, and far faster than any human could ever do it. A classic situation of using a tool for the things it's particularly well suited to.
Here's another one. An SM-24 Geophone + Raspberry PI 5 + ADC board. Hey Gemini / GPT, I need to build bin files from the raw voltage figures + timestamps, then using flask I need a web viewer + conversion on the geophone velocity figures for displacement and acceleration. Properly instructed, they'll create a highly functional version of that with some adjustments/iteration in 15-30 minutes. I basically had them recreate REW RTA mode for my geophone velocity data, and there's no way a person could do it nearly as fast. It requires some checking and iteration, and that's assumed in the comparison.
> I had a vague idea that "Zig people" were generally "Software You Can Love" or "Handmade Software Movement" types, about small programs, exquisitely hand-written, etc, etc.
I feel like an important step for a language is when people outside of the mainline language culture start using it in anger. In that respect, Zig has very much "made it."
That said, if I were to put on my cynical hat, I do wonder how much of that Anthropic money will be donated to the Zig Software Foundation itself. After all, throwing money at maintaining and promoting the language that powers a critical part of their infrastructure seems like a mutually beneficial arrangement.
Handmade Cities founder here.
We never associated with Bun other than extending an invitation to rent a job booth at a conference: this was years ago when I had a Twitter account, so it's fair if Jarred doesn't remember.
If Handmade Cities had the opportunity to collaborate with Bun today, we would not take it, even prior to this acquisition. HMC wants to level up systems while remaining performant, snappy and buttery smooth. Notable examples include File Pilot [0] or my own Terminal Click (still early days) [1], both coming from bootstrapped indie devs.
I'll finish with a quote from a blog post [2]:
> Serious Handmade projects, like my own Terminal Click, don’t gain from AI. It does help at the margins: I’ve delegated website work since last year, and I enjoy seamless CI/CD for my builds. This is meaningful. However, it fails at novel problems and isn’t practical for my systems programming work.
All that said, I congratulate Bun even as we disagree on philosophy. I imagine it's no small feat getting acquired!
Finding this comment interesting; the parent comment didn't suggest any past association, but it seemingly uses the project reference as a pivot point to do various out-group counter-signaling / neg Bun?
I understand the concern, but really? I found this quote enough to offer proper comments:
> had a vague idea that "Zig people" were generally "Software You Can Love" or "Handmade Software Movement" types
Folks at Bun are "Zig people" for obvious reasons, and a link was made with Handmade software. This has happened multiple times before with Bun specifically, so my response is not a "pivot" of any kind. I've highlighted and contrasted our differences to prevent further associations inside a viral HN thread. That's not unreasonable.
I also explicitly congratulated them for the acquisition.
> I had a vague idea that "Zig people" were generally "Software You Can Love" or "Handmade Software Movement" types, about small programs, exquisitely hand-written, etc, etc.
In my experience, the extreme anti-LLM people and extreme pro-vibecoding people are a vocal online minority.
If you get away from the internet yelling match, the typical use case for LLMs is in the middle. Experienced developers use them for some small tasks and also write their own code. They know when to switch between modes and how to make the most of LLMs without deferring completely to their output.
Most of all: they don't go around yelling about their LLM use (or anti-use) because they're not interested in the online LLM wars. They just want to build things with the tools available.
More people should have such a healthy approach, not only to LLMs but to life in general. Same reason I partake less and less in online discourse: it's so tribal and filled with anger that it's just not worth it to contribute anymore. Learning how to be in the middle did wonders for me as a programmer, and I think as a person as well.
I'm not sure about exquisite and small.
Bun genuinely made me doubt my understanding of what good software engineering is. Just take a look at their code, here are a few examples:
- this hand-rolled JS parser of 24k dense, memory-unsafe lines: https://github.com/oven-sh/bun/blob/c42539b0bf5c067e3d085646... (this is a version from quite a while ago to exclude LLM impact)
- hand-rolled re-implementation of S3 directory listing that includes "parsing" XML via hard-coded substrings https://github.com/oven-sh/bun/blob/main/src/s3/list_objects...
- MIME parsing https://github.com/oven-sh/bun/blob/main/src/http/MimeType.z...
It goes completely contrary to a lot of what I think is good software engineering. There is very little reuse, everything is ad-hoc, NIH-heavy, verbose, seemingly fragile (there's a lot of memory manipulation interwoven with business logic!), with relatively few tests or assurances.
And yet it works on many levels: as a piece of software, as a project, as a business. Therefore, how can it be anything but good engineering? It fulfils its purpose.
I can also see why it's a very good fit for LLM-heavy workflows.
"exquisitely hand-written"
This sounds so cringe. We are talking about computer code here lol
Thanks, Jarred. Seeing what you built with Bun has been a real inspiration, the way one focused engineer can shift an entire ecosystem. It pushed me back into caring about the lower-level side of things again, and I’m grateful for that spark. Congrats on the acquisition, and excited to see what’s next
Hi Jarred. Congratulations on the acquisition! Did (or will) your investors make any profit on what they put into Bun?
Is this acquihiring?
No. Anthropic needs Bun to be healthy because they use it for Claude Code.
Isn't that still "acqui-hiring" according to common usage of the term?
Sometimes people use the term to mean that the buyer only wants some/all of the employees and will abandon or shut down the acquired company's product, which presumably isn't the case here.
But more often I see "acqui-hire" used to refer to any acquisition where the expertise of the acquired company are the main reason to the acquisition (rather than, say, an existing revenue stream), and the buyer intends to keep the existing team dynamics.
Acquihiring usually means that the product the team are working on will be ended and the team members will be set to work on other aspects of the existing company.
That is part of the definition given in the first paragraph of the Wikipedia article, but I think it’s a blurry line when the acquired company is essentially synonymous with a single open source project and the buyer wants the team of experts to continue developing that open source project.
I've never personally used Bun. I use node.js I guess. What makes Bun fundamentally better at AI than, say, bundling a node.js app that can run anywhere?
If the answer is performance, how does Bun achieve things quicker than Node?
On Bun's website, the runtime section features HTTP, networking, storage -- all very web-focused. Any plans to start expanding into native ML support? (e.g. GPUs, RDMA-type networking, cluster management, NFS)
Probably not. When we add new APIs in Bun, we generally base the interface off of popular existing packages. The bar is very high for a runtime to include libraries because the expectation is to support those APIs ~forever. And I can’t think of popular existing JS libraries for these things.
Congrats on the payday :)
Do you think Anthropic might request you implement private APIs?
You said elsewhere that there were many suitors. What is the single most important thing about Anthropic that leads you to believe they will be dominant in the coming years?
No idea about his feelings but believing that they will be dominant wouldn't have to be the reason he chose them. I could easily imagine that someone would decide based on (1) they offered enough money and (2) values alignment.
Does this acquisition preclude implementing an s3 style integration for AWS bedrock? Also is IMDSv2 auth on the roadmap?
How much of your day-to-day is spent contributing code to the Bun codebase and do you expect it to decrease as Anthropic assigns more people to work on Bun?
Hi Jarred,
I contributed to Bun one time for SQLite. I've a question about the licensing. Will each contributor continue to retain their copyright, or will a CLA be introduced?
Thanks
With Bun's existing OSS license and contribution model, all contributors retain their copyright and Bun retains the license to use those contributions. An acquisition of this kind cannot change the terms under which prior contributions were made without explicit agreement from all contributors. If Bun did switch to a CLA in the future, just like with any OSS project, that would only impact future contributions made after that CLA went into effect and it depends entirely on the terms established in that hypothetical CLA.
Any chance there will be some kind of updating mechanism for 'compiled' bun executables?
I have a PR that’s been sitting for awhile that exposes the extra options from the renameat2 and renameatx_np syscalls which is a good way to implement self-updaters that work even when multiple processes are updating the same path on disk at the same time. These syscalls are supported on Linux & macOS but I don’t think there’s an equivalent on Windows. We use these syscalls internally for `bun install` to make adding packages into the global install cache work when multiple `bun install` processes are running simultaneously
No high-level self updater api is planned right now, but yes for at least the low level parts needed to make a good one
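For context, the usual pattern a self-updater needs (independent of those specific syscalls) is: stage the new binary next to the old one, then atomically rename it over the old path. A rough sketch of that pattern, not Bun's actual implementation:

    import { rename, writeFile } from "node:fs/promises";

    // Stage the new binary in the same directory (so it's on the same
    // filesystem), then swap it in with a single rename, which is atomic on
    // POSIX: readers see either the old file or the new one, never a mix.
    async function replaceExecutable(targetPath: string, newBinary: Uint8Array) {
      const staging = `${targetPath}.new-${process.pid}`;
      await writeFile(staging, newBinary, { mode: 0o755 });
      await rename(staging, targetPath);
    }

The renameat2/renameatx_np flags mentioned above add no-replace/exchange semantics on top of this, which is what matters when several processes race to update the same path.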
Hi Jarred, thanks for all your work on Bun.
I know that one thing you guys are working on or are at least aware of is the size of single-file executables. From a technical perspective, is there a path forward on this?
I'm not familiar with Bun's internals, but in order to get the size down, it seems like you'd have to somehow split up/modularize Bun itself and potentially JavaScriptCore as well (not sure how big the latter is). That way only the things that are actually being used by the bundled code are included in the executable.
Is this even possible? Is the difficulty on the Bun/Zig side of things, or JSC, or something else? Seems like a very interesting (and very difficult) technical problem.
One more thing I hope doesn't change is the fun release videos :-) I really enjoy them. They're very Apple-y, and for just a programming tool.
Yeah why are you not out on a boat somewhere enjoying this moment? Go have fun please.
Acquisitions typically have additional stipulations you have to follow - they probably have new deadlines and some temporary stress for the next few months.
I wonder if this is a sign of AI companies trying to pivot?
> Bun will ship faster.
That'll last until FY 2027. This is an old lie that acquirers encourage the old owner to say because they have no power to enforce it, and they didn't actually say it so they're not on the hook. It's practically a cheesy pickup line, and given the context, it kinda is.
From the comments here it sounds like most people think the amount Anthropic paid for the company was probably not much more than the VC funding which Bun raised.
How would the payout split work? It wouldn't seem fair to the investors if the founder profited X million while the investors only got their original money returned. I understand VC has the expectation that 99 out of 100 investments will net them no money. But what happens in the cases where money is made, it just isn't profitable enough for the VC firm?
What’s to stop everyone from doing this? Besides integrity, why shouldn’t every founder just cash out when the payout is life-changing?
Is there usually some clause in the agreements like “if you do not return X% profit, the founder forfeits his or her equity back to the shareholders”?
All VCs have preferred shares, meaning that in the case of a liquidation like now, they get their investment back, and then the remainder gets shared.
Additionally, depending on the round, they may also have multiples, like 2x, meaning they get at least 2x their investment before anyone else gets anything.
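Concretely, with totally made-up numbers (a 1x participating preference as described above, no multiple, a 25% investor stake, and a hypothetical $100M sale price), the waterfall works out like this:

    // Toy liquidation waterfall; every figure here is an assumption.
    const invested = 26_000_000;      // total preferred investment (roughly Bun's raise)
    const investorOwnership = 0.25;   // assumed investor stake
    const salePrice = 100_000_000;    // hypothetical acquisition price

    const preference = Math.min(invested, salePrice); // investors get their money back first
    const remainder = salePrice - preference;          // what's left to split pro rata
    const investorTotal = preference + remainder * investorOwnership;
    const foundersAndEmployees = remainder * (1 - investorOwnership);

    console.log({ investorTotal, foundersAndEmployees });
    // => roughly $44.5M to investors, $55.5M to founders/employees

With a 2x multiple the preference line would be $52M instead of $26M, which is why those terms matter so much at modest exit prices.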
Probably not much more than their valuation, which is the key difference since the investor will still get a net return.
Anthropic has been trying to win the developer marketshare, and has been quite successful with Claude Code. While I understand the argument that this acquisition is to protect their usage in CC or even just to acquire the team, I do hope that part of their goal is to use this to strengthen their brand. Being good stewards of open source projects is a huge part of how positively I view a company.
> Being good stewards of open source projects is a huge part of how positively I view a company.
Maybe an easier first step would be to open source Claude Code...?
I wonder what this means for Deno.
Will this make it more or less likely for people to use Bun vs Deno?
And now that Bun doesn't need to run a profitable cloud company will they move faster and get ahead of Deno?
I think Deno's management have been somewhat distracted by their ongoing lawsuits with Oracle over the release of the Javascript trademark.
I started out with Deno and when I discovered Bun, I pivoted. Personally I don't need the Node.js/npm compatibility. Wish there was a Bun-lite which was freed of the backward compatibility.
Ironically, this was early Deno - but then adoption required backwards compatibility.
I'm in a similar position.
I use Hono, Zod, and Drizzle which AFAIK don't need Node compat.
IIRC I've only used Node compat once to delete a folder recursively with rm.
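(That one call is just node:fs, which Bun covers through its Node compat layer:)

    import { rm } from "node:fs/promises";

    // Recursively delete a directory; force ignores "not found" errors.
    await rm("./some-folder", { recursive: true, force: true });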
> Will this make it more or less likely for people to use Bun vs Deno?
I'm not sure it will make much of a difference in the short term.
For those who were drawn to Bun by hype and/or some concerns around speed, they will continue to use Bun.
For me personally, I will continue to use Node for legacy projects and will continue using Deno for current projects.
I'm not interested in Bun for its hype (since hype is fleeting). I have a reserved interest in Bun's approach to speed, but I don't see it being a significant factor since most JS speed concerns come from downloading dependencies (which is a once-off operation) and terrible JS framework practices (which aren't resolved by changing engines anyway).
----------------------------
The two largest problems I see in JS are:
1. Terrible security practices
2. A lack of a standard library which pushes people into dependency hell
Deno fixes both of those problems with a proper permission model and a standard library.
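To make the permission-model point concrete: the script below can read ./data and nothing else, because each capability has to be granted on the command line (e.g. `deno run --allow-read=./data report.ts`; the file and path names here are made up):

    // Fails at runtime unless --allow-read=./data was granted; any network
    // access would additionally require --allow-net.
    const text = await Deno.readTextFile("./data/report.csv");
    console.log(text.split("\n").length, "rows");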
----------------------------
> And now that Bun doesn't need to run a profitable cloud company will they move faster and get ahead of Deno?
I think any predictions between 1-10 years are going to be a little too chaotic. It all depends on how the AI bubble goes away.
But after 10 years, I can see runtimes switching from their current engines to one based on Boa, Kiesel or something similar.
I’ll be honest, while I have my doubts about the match of interests and cohesion between an AI company and a JS runtime company I have to say this is the single best acquisition announcement blog post I’ve seen in 20 years or so.
Very direct, very plain and detailed. They cover all the bases about the why, the how, and what to expect. I really appreciate it.
Best of luck to the team and hopefully the new home will support them well.
But how is another company that is also VC backed and losing money providing stability for Bun?
How long before we hear about “Our Amazing Journey”?
On the other hand, I would rather see someone like Bun have a successful exit, where the founders seem to have started out with a passion project, got funding, built something they were excited about, and then exited, than yet another AI company from non-technical founders, built with the sole purpose of getting funding and then exiting.
Anthropic may be losing money, but a company with $7bn revenue run rate (https://www.anthropic.com/news/statement-dario-amodei-americ...) is a whole lot healthier than a company with a revenue of 0.
If I had the cash, I could sell dollar bills for 50 cents and do a $7b run rate :)
If that was genuinely happening here - Anthropic were selling inference for less than the power and data center costs needed to serve those tokens - it would indeed be a very bad sign for their health.
I don't think they're doing that.
Estimates I've seen have their inference margin at ~60% - there's one from Morgan Stanley in this article, for example: https://www.businessinsider.com/amazon-anthropic-billions-cl...
>The bank's analysts then assumed Anthropic gross profit margins of 60%, and estimated that 75% of related costs are spent on AWS cloud services.
Not estimate, assumption.
Those are estimates. Notice they didn’t assume 0% or a million %. They chose numbers that are a plausible approximation of the true unknown values, also known as an estimate.
If Morgan Stanley are willing to stake their credibility on an assumption I'm going to take that assumption seriously.
This is pretty silly thing to say. Investment banks suffer zero reputational damage when their analysts get this sort of thing wrong. They don’t even have to care about accuracy because there will never be a way to even check this number, if anyone even wanted to go back and rate their assumptions, which also never happens.
Fair enough. I was looking for a shortcut way of saying "I find this guess credible", see also: https://news.ycombinator.com/item?id=46126597
Calling this unmotivated assumption an "estimate" is just plain lying though, regardless of the faith you have in the source of the assumption.
I've seen a bunch of other estimates / claims of a 50-60% margin for Anthropic on serving. This was just the first one I found with a credible-looking link I could drop into this discussion.
The best one is from the Information, but they're behind a paywall so not useful to link to. https://www.theinformation.com/articles/anthropic-projects-7...
They had pretty drastic price cuts on Opus 4.5. It's possible they're now selling inference at a loss to gain market share, or at least that their margins are much lower. Dario claims that all their previous models were profitable (even after accounting for research costs), but it's unclear that there's a path to keeping their previous margins and expanding revenue as fast or faster than their costs (each model has been substantially more expensive than the previous model).
It wouldn't surprise me if they found ways to reduce the cost of serving Opus 4.5. All of the model vendors have been consistently finding new optimizations over the last few years.
I sure hope serving Opus 4.5 at the current cost is sustainable. It’s the first model I can actually use for serious work.
I've been wondering about this generally... Are the per-request API prices I'm paying at a profit or a loss? My billing would suggest they are not making a profit on the monthly fees (unless there are a bunch of enterprise accounts in group deals not being used, I am one of those I think)
But those AI/ML researchers, aka LLM optimization staff, are not cheap. Their salaries have skyrocketed, and some are being fought for like top-tier soccer stars and actors/actresses.
The leaders of Anthropic, OpenAI and DeepMind all hope to create models that are much more powerful than the ones they have now.
A large portion of the many tens of billions of dollars they have at their disposal (OpenAI alone raised 40 billion in April) is probably going toward this ambition—basically a huge science experiment. For example, when an AI lab offers an individual researcher a $250 million pay package, it can only be because they hope that the researcher can help them with something very ambitious: there's no need to pay that much for a single employee to help them reduce the costs of serving the paying customers they have now.
The point is that you can be right that Anthropic is making money on the marginal new user of Claude, but Anthropic's investors might still get soaked if the huge science experiment does not bear fruit.
> their investors might still take a bath if the very-ambitious aspect of their operations do not bear fruit
Not really. If the technology stalls where it is, AI still captures a sizable chunk of the dollars previously paid to coders, transcribers, translators and the like.
Surely you understand the bet Anthropic is making, and why it's a bit different than selling dollars at a discount
Because discounted dollar bills are still a tangible asset, but churning language models are intangible?
Maybe for those of us not-too-clever ones, what is the bet? Why is it different? Would be pretty great to have like a clear articulation of this!
The bet, (I would have thought) obviously, is that AI will be a huge part of humanity’s future, and that Anthropic will be able to get a big piece of that pie.
This is (I would have thought) obviously different from selling dollars for $0.50, which is a plan with zero probability of profit.
Edit: perhaps the question was meant to be about how Bun fits in? But the context of this sub-thread has veered to achieving a $7 billion revenue.
The question is/was about how they intend to obtain that big piece of pie, what that looks like.
You are saying that you can raise $7b of debt at a double-digit interest rate. I am doubtful. While $7b is not a big number, the Madoff scam was only ~$70b in total over many years.
> the Madoff scam is only ~$70b in total
Incorrect - that was the fraudulent NAV.
An estimate for true cash inflow that was lost is about $20 billion (which is still an enormous number!)
No, I'm scamming myself. Halving my fortune because I believe karma will somehow repay me ten fold some time later.
Somehow? I've been keeping an eye on my inbox, waiting to get a karma vesting plan from HN, for ages. What's this talk of somehow?
you have anthropic confused with something like lovable.
anthropic's unit margins are fine, many lovable-like businesses are not.
Or I'm just saying revenue numbers alone don't prove anything useful when you have deep pockets.
I am fairly skeptical about many AI companies, but as someone else pointed out, Anthropic has 10x'ed their revenue over the past 3 years: 100m -> 1b -> 10b. While past performance is no predictor of future results, their product is solid and to me it looks like they have found PMF.
Idk, I'm no business expert by any means, but I'm a hell of a lot more _scared_ of a company burning so much that it's still losing money on $7b of revenue.
It often happens that VCs buy out companies from a friendly fund because the selling fund wants to show performance to their investors until "the big one", or to move cash from one wealthy pocket to another.
"You buy me this, next time I save you on that", etc...
"Raised $19 million Series A led by Khosla Ventures + $7 million"
"Today, Bun makes $0 in revenue."
Everything is almost public domain (MIT) and can be forked without paying a single dollar.
Questionable to claim that the technology is the real reason this was bought.
It's an acquihire. If Anthropic is spending significant resources, or see that they will have to, to improve Bun internally already it makes a lot of sense. No nefarious undertones required.
An analogous example off the top of my head is Shopify hired Rafael Franca to work on Rails full-time.
If it was an acquihire, it's still a lot less slimy than just offering the employees they care about a large compensation package and leaving the company behind as a husk, like Amazon, Google and Microsoft have done recently.
Is it? What's wrong with hiring talent for a higher salary?
You have no responsibility for an unrelated company's operations; if that was important to them they could have paid their talent more.
From the acquirer’s perspective, you’re right. (Bonus: it diminishes your own employees’ ability to leave and fundraise to compete with you.)
From an ecosystem perspective, acquihires trash the funding landscape. And from the employees’ perspective, as an investor, I’d see them being on an early founding team as a risk going forward. But that isn’t relevant if the individual pay-off is big.
> And from the employees’ perspective, as an investor, I’d see them being on an early founding team as a risk going forward.
Every employee is a flight risk if you don't pay them a competitive salary; that's just FUD from VC bros who are getting their playbook (sell the company to the highest bidder and let early employees get screwed) used against them.
> Every employee is a flight risk if you don't pay them a competitive salary
Not relevant to acquihires, who typically aren’t hired away with promises of a salary but instead large signing bonuses, et cetera, and aren’t typically hired individually but as teams. (You can’t solve key man problems with compensation alone, despite what every CEO compensation committee will lead one to think.)
> that's just FUD
What does FUD mean in this context? I’m precisely relaying a personal anecdote.
> aren’t hired away with promises of a salary but instead large signing bonuses
Now you're being nitpicky. Take the vesting period of the sign on bonus, divide the bonus amount by that and add it to the regular salary and you get the effective salary.
> aren’t typically hired individually but as teams.
So? VC bros seem to forget the labor market is also a free market as soon as it hurts their cashout opportunity.
> What does FUD mean in this context? I’m precisely relaying a personal anecdote.
Fear, Uncertainty and Doubt. Your anecdote is little more than a scare story. It can be summarized as: if you don't let us cash out this time, we'll hold this against you at some undefined point in the future.