Rewrite Bun in Rust has been merged
github.com | 511 points by Chaoses 19 hours ago
When announcements say the rewrite took one week, I wonder how much time went into preparing this file of very detailed instructions on mapping Zig to Rust idioms: https://github.com/oven-sh/bun/commit/46d3bc29f270fa881dd573...
On top of that, if you look at the 'Pointers & ownership' and 'Collections' sections, the Bun codebase was already prepared: it uses internal smart pointer types that map 1-to-1 to Rust equivalents, and a `bun_collections` Rust crate already exists.
This gives the impression that the rewrite was prepared a long time ago and was the Bun team's proposition to Anthropic during the acquisition deal.
Yeah I don’t know what’s true when reading about LLMs. Same with comments here on hacker news. So much money on the line it’s clear they would seed communities with marketing shills (and some people are just tribal).
Same since they own Bun, they have every incentive to make this seem easier than it was.
This is a huge problem with AI specifically. Tech is becoming very adversarial for workers, since the lines between marketing and technical information are blurring more and more.
Influencers are getting paid tens of thousands of dollars to promote AI. This is one of the reasons social media has been swamped with it lately.
Yes, some of the latest campaigns:
https://www.wired.com/story/super-pac-backed-by-openai-and-p...
Anthropic's own talking point guide:
https://news.ycombinator.com/item?id=47945021
There were earlier initiatives from the industry. This is just what is in the open and does not even include automated LLM "influencers".
> Tech is becoming very adversarial as a worker, since marketing and technical information are blurring lines
Since one of LLMs' largest markets (with product fit) is us developers, we are experiencing what the crypto bros did to others.
I'm not sure it matters what anyone claims. It's easy to use and experience its abilities and limitations.
Ignoring things like whether the Rust that was output could be deemed qualitatively good, whether the resulting line count is appropriate, how much the codebase was ready or primed for this kind of exercise going in, and so on, is it fair to say that a 622 line artefact created up front is a relatively small cost for a potential increase in consistency or quality of output when the output is ~1M LoC? It seems like there's a multiplicative power here given how much output there is. Or is that missing a lot of nuance?
I'd also be interested generally in how much tacit knowledge was needed to come up with these rules and how much iteration on this file was needed, for example how many of the rules here came from a failure case hit as part of iterating on the translation.
This is effectively a very expensive and resource-intensive machine translation. As such, there is no increase in consistency or quality of output.
The translation is a starting point to enable follow-on work to take advantage of Rust's features.
How would you have achieved this “machine translation” without an LLM?
It seems to me it would have been highly likely to be more expensive and more resource intensive - if realistically possible at all, short of implementing a general Zig to Rust translator first.
I would guess it was a for-each loop. They likely wrote a bunch of skills. The loop went through each file and generated a complementary file, then had another process integrate/validate.
I doubt the entire process was a single week, just whatever harness they specially prepared for the work.
> I doubt the entire process was a single week, just whatever harness they specially prepared for the work.
It wasn't. Probably quite a lot of preparation, I would think. And it's very much a first pass, far from idiomatic Rust and far from memory safe. Still impressive, though, for what it is.
https://x.com/jarredsumner/status/2053588764774269292 https://x.com/jarredsumner/status/2054984043708740093
> using internal smart pointer types that map 1-to-1 to Rust equivalents
Smart pointers weren't invented by Rust. If you write code in other languages with pointers you mentally model the same types already.
> and `bun_collections` Rust crate already exists.
This is wrong. It's part of the PR in the codebase. It did not previously exist.
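For illustration, these are the standard Rust ownership types such internal smart pointers presumably map onto (a hedged sketch of the general correspondence; Bun's actual internal types are not reproduced here):

```rust
use std::rc::Rc;
use std::sync::Arc;

// The ownership models any pointer-heavy codebase already distinguishes
// mentally, whatever the language, and their standard Rust spellings:
fn main() {
    // Unique ownership: freed when the single owner is dropped.
    let owned: Box<i32> = Box::new(1);

    // Shared ownership via non-atomic reference counting (single thread).
    let shared: Rc<i32> = Rc::new(2);
    let shared2 = Rc::clone(&shared);
    assert_eq!(Rc::strong_count(&shared), 2);

    // Shared ownership across threads (atomic reference count).
    let threaded: Arc<i32> = Arc::new(3);

    assert_eq!(*owned + *shared2 + *threaded, 6);
}
```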
It's the same thing with their gcc stunt.
It would be _so_ easy to alleviate any doubt from this and hype up the IPO even more. They just need to start a separate repo with all the hidden work they needed to do to prod the AI along, and let everyone replicate the results. After all, isn't that what all their customers are trying to achieve? A million lines of usable code in "7" days? Never mind that it would also boost Anthropic's usage metrics as everyone tries to replicate it in their own workflows.
If it was beautiful, they would've started with a blog post about this with links and instructions. Perhaps I will still be proven wrong and a blog post is being written as I type this.
Seems like Zig Bun had 3 pointer types that map neatly to existing Rust pointer types. The other 7-8 needed types to be created.
Is that the conspiracy?
bun_collections doesn't look much older than the porting guide.
> +1009257 -4024
Bun is now over 1M lines of Rust code.
This is approaching the size of the Rust compiler itself, except that Bun is mostly a JavaScript interpreter wrapper plus a reimplementation of the Node.js library (a Rust-std-style wrapper).
I think BunJS is becoming the canary for software complexity management in the LLM era.
> mostly a JavaScript interpreter wrapper
Not accurate. Bun is a batteries-included JavaScript & CSS transpiler (parser), minifier, bundler, npm-like package manager, Jest-like test runner, as well as runtime APIs like a builtin Postgres, MySQL and Redis client. This is naturally a ton of code.
Now that Bun can leverage Rust do you think some of this code will get disaggregated? Eg, Bun could use swc crates
It wouldn't have been that hard to do that from Zig if they'd wanted to. They don't, because they want to do everything themselves so that it works exactly the way they want (except the core JS engine for which this is infeasible—though even that has custom patches). After all, there are already plenty of libraries on npm for those other parts of the stack and they do work in Bun.
Bun is not a JavaScript interpreter, it's "only" a reimplementation of the NodeJS library + various other libraries. Bun uses JavaScriptCore as its JS engine. So Bun itself does (or at least should do) no JavaScript parsing, interpreting or JITing.
EDIT: I misread, sorry! You said "JavaScript interpreter wrapper", which is correct.
Bun is now almost twice the size of JavaScriptCore, too, by linecount after this.
This is the 'world class' engineering that Jarred claims he can't hire Americans to do, by the way https://x.com/jarredsumner/status/1969751721737077247. This company is parasitic to its literal (javascript) core.
No, it does parsing and a bunch more. The Bun founder says it best in this comment:
"Bun is a batteries-included JavaScript & CSS transpiler (parser), minifier, bundler, npm-like package manager, Jest-like test runner, as well as runtime APIs like a builtin Postgres, MySQL and Redis client. This is naturally a ton of code."
That’s what they said - “JavaScript interpreter wrapper”.
I'm not sure if it's just the leading '+' or if there are other factors for phone number detection on iOS, but on mobile the line count changes are underlined and I can tap it to start a call, which, if it is because of the diff size, is something I find pretty amusing.
Apple has had a feature called Apple Data Detectors since the 90's that looks for different patterns in text and allows you to perform actions on them.
So if the text includes a phone number, email address, flight number, package tracking number, street address or other pattern in the data it is underlined and allows you to perform one or more actions.
The patterns it looks for and actions it takes are extensible by developers.
If you don't care for it, you can turn it off.
> +1009257 -4024
+1 (009) 257-4024
I think it just lines up with the typical size of a phone number, and the '-' is interpreted as a separator. Just a simple regex, probably.

The leading “+” is not needed. Numbers with seven digits are automatically hyperlinked (possibly depending on locale):
123456
1234567
12345678
Interestingly, the entire line gets formatted once it reaches seven digits, +lines and -lines both, so I guess the -lines is just interpreted as a dash. But your eight digit string doesn't. Perhaps it's not interesting, though I've never really given it a second thought before.
There’s certainly some regex or similar involved that tries to recognize phone numbers, and then hyperlinks the whole thing. My point was that it’s not solely the plus sign that is triggering it.
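As a toy illustration of the behavior observed above (this is purely a guess at the heuristic; Apple's real data detectors are far more elaborate and locale-aware):

```rust
// Naive sketch: flag a string as "phone-number-like" if it contains a
// maximal run of exactly seven ASCII digits, matching the observations
// above (seven digits get linked, six and eight do not).
fn has_seven_digit_run(s: &str) -> bool {
    s.split(|c: char| !c.is_ascii_digit())
        .any(|run| run.len() == 7)
}

fn main() {
    assert!(has_seven_digit_run("+1009257 -4024")); // the diff stat links
    assert!(!has_seven_digit_run("123456"));        // six digits: no link
    assert!(has_seven_digit_run("1234567"));        // seven digits: linked
    assert!(!has_seven_digit_run("12345678"));      // eight digits: no link
}
```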
The Bun codebase had a similar number of lines of code before the rewrite.
There's nothing unusual about a rewrite coming in with a similar LOC number.
I think the unusual thing is that it was written in a week. I highly doubt that they read and understood all 1M lines. But if it works and people use it, what does that mean for software? Should we still care about the code that’s written? Should we even look? I’ve always thought so, but maybe I’m just biased.
I think we should care way more about what the validation story is of code. The obvious question does it all work? I'm happy to not look at any code if we have good ways to validate what is there. The other thing I care about is the architectural structure of the code. Given its a port I don't think that would have changed.
I was going to comment this same thing.
I don't know enough about what Bun does... but Rust is so insanely complicated, it's hard for me to wrap my head around how Bun is equally complicated.
Complicated things can often be expressed very succinctly - the hard part is in understanding why the short program does what it is supposed to.
Simple things often take a lot of space, simply because there's a lot of similar but different simple things that each need to be written down.
Lines of code just isn't a good measure of "complicated".
They are complicated in different ways. The rust compiler doesn’t include redis, Postgres, and S3 clients for instance.
If anything, it's a little surprising that the Rust code isn't significantly larger because I tend to think of Rust as requiring somewhat more boilerplate than JS.
The code was using Zig before, not JS.
Ah fair point. I don't have a sense of which of those are more verbose.
Zig is, typically. And yet here, the rust rewrite is around 60% more lines of code.
Not to mention how trigger happy LLMs can be when it comes to being overly verbose and adding unnecessary bits even with explicit direction not to do so.
1MLOC for a JavaScriptCore wrapper is a great example of what agents are capable of.
Code is cheap. Only the quality and maintenance is interesting. Those will be seen later on.
I would not be surprised if the next major step for them is to audit the code and trim the fat.
> I think BunJS is becoming the canary for software complexity management in the LLM era.
Yeah, Cursor did the same thing, bragging about how many lines of code they managed to produce for a semi-working browser, completely missing the idea where less code is better, not the other way around.
I think their point was that the project is complex, with the implicit assumption that the complexity is to a large degree inherent.
Even if it's mostly accidental, and the code is overengineered slop (which it is), the system being able to decompose a problem and deliver something is impressive in terms of stability: it wasn't sucked into rewriting everything from scratch every time it would run into issues, it didn't have infinite subagent recursion with a one-agent-per-line type workflow, etc.
you can easy fix this by MAKE NO MISTAKES, DO NOT HALLUCINATE under your zig2rust.md skill agent flow /s
$ rg 'unsafe [{]' src/ | wc -l
10428
$ rg 'unsafe [{]' src/ -l | wc -l
736
Language      Files    Lines     Code  Comments  Blanks
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Rust           1443   929213   732281    116293   80639
Zig            1298   711112   574563     59118   77431
TypeScript     2604   654684   510464     82254   61966
JavaScript     4370   364928   293211     36108   35609
C               111   305123   205875     79077   20171
C++             586   262475   217111     19004   26360
C Header        779   100979    57715     29459   13805

Cool, you can just search specifically for potentially unsafe code in Rust. How do you search for unsafe code in Zig? Or do you just have to assume it's everywhere?
If half of your code is unsafe then unless you exercise tremendous discipline (Claude basically doesn't) you will just end up with a big ball of unsafe, peppered with hallucinations in whatever random documentary comments Claude decided to make. I doubt they enforced the confinement of unsafe to a specific architectural layer or anything like that.
Aren't the Rust unsafes a reflection of the Zig it was ported from? However now that you're working with Rust, you're in a position to continue improving and eliminating the unsafes.
if half of your files in a million line codebase are unsafe that doesn't tell you much any more. Presumably the point of a Rust rewrite is that you actually make use of Rust's safety features in a coherent way.
But given the whole "let AI rewrite this for me" stunt nature of this project that was not going to happen because that would require well, actual thinking and a re-design. So now you have Zig disguised as Rust and a line-by-line port because the semantics of idiomatic Rust don't map on the semantics of Zig.
>if half of your files in a million line codebase are unsafe that doesn't tell you much any more.
If half of your files in the first pass of a million line rewrite are unsafe then that's completely fine. Do you understand what the tag actually is? It doesn't even mean that the code is actually unsafe, just that the compiler can't guarantee its safety, which can happen for a number of reasons, some benign.
Who rewrites a 700K-line codebase trying to be idiomatic from the get-go? That's setting yourself up for failure, whether you're a human or a machine.
And? This is absolutely the correct and standardized way to do mechanical rewrites: you do a rewrite that maps directly to the original source so you can rely on the original correctness guarantees and bug-for-bug compatibility and log issues, and then you go into the next phase where you begin to use idiomatic constructs.
This is the same in COBOL-to-Java ports that have been done in banking and insurance for the past 20 years.
>This is the same in COBOL-to-Java ports
It isn't, because those guys didn't think a naive 1-to-1 machine translation would give them the benefits of Java, which somehow the people involved in this Rust rewrite seem to think they've already gained despite the virtually identical code.
If the whole point genuinely would have been to do a purely mechanical translation they could and should have written a transpiler, which would have had significantly higher correctness guarantees than this given that it'd be deterministic, but of course that would have defeated the PR purpose of this whole thing, which just looks like a marketing for Anthropic frankly
> If the whole point genuinely would have been to do a purely mechanical translation they could and should have written a transpiler, which would have had significantly higher correctness guarantees than this given that it'd be deterministic, but of course that would have defeated the PR purpose of this whole thing, which just looks like a marketing for Anthropic frankly
If it were just a marketing stunt you wouldn't have the test suite passing with only a fraction of a percent failing, with the remaining bugs being realistically very fixable, and this is between languages whose type systems give far more guarantees than anything possible in COBOL.
You're being extremely negative about this whole endeavour without looking at the evidence that this effort is going far more smoothly than expected, and maps with many people's experience with using LLMs for tasks like these.
>You're being extremely negative about this whole endeavour without looking at the evidence that this effort is going far more smoothly than expected
No, I'm being negative because, as I just said, if you want to do a purely syntactic translation you don't even need an LLM; that's called transpilation, and we've been doing it programmatically for decades.
This is the kind of thing that looks great to people who can't program and think this is some new superpower unlocked by the mystery magic of LLMs, and that is exactly the kind of impression Claude wants to sell.
Transpilation won't get you passing 99.8% of a comprehensive test suite of a 700K+ codebase in a week (and maybe none at all) and that's assuming transpilation is practical for the pair in question. So if you remotely want these kinds of results, then you most certainly do need an LLM.
Half of the files contain the 'unsafe' keyword? That doesn't seem like a good rewrite. What is the point of rewriting into Rust if ~half of your code is still unsafe?
Bun is fundamentally a boundary-heavy system and it also rolls its own version of a lot of things that people typically use via libraries, where unsafe is hidden. (no async, memory arenas, etc). It also uses FFI heavily which requires unsafe.
It also looks like the top 2 maintainers are currently actively working on getting the amount of unsafe down and it's going down quickly.
1. Rewrite from zig to rust in as close to zig as you can.
2. Turn into idiomatic rust.
1. Get hired into a company where you have a solid bet on making multi-century lasting generational wealth (>$50,000,000).
2. Every waking moment do everything in your power to boost the company that might give you the ability to define the direction of technology for the rest of your life.
3. Use the only thing you have (bun) to help push you in this direction and do things to help boost LLM marketing (a technology that already deeply struggles to find customers and has to rely on welfare (lucrative government contracts) to make sales).
---
Honestly think this generation of tech workers in SF are more evil than those that worked at Google + Facebook in the early 10s.
> a technology that already deeply struggles to find customers
As far as I know it's the opposite, Anthropic struggles to satisfy demand, they have tons of paying customers and their customer base is growing fast.
What does that have to do with rewriting from zig to rust??? This thread is what's pushing LLM marketing, not the rewrite itself.
If the rewrite is just a stunt and it will crash and burn it will do that whether we spend our free (or work) time writing comments. If there is any hype around this particular topic, it's happening here not in the GitHub repo.
I’m honestly confused. What is it that you think makes these workers “more evil” than Google and Facebook workers from the early 2010s?
Google and Facebook workers just made a lot of cash and mostly made everyone's life harder with Leetcode and bad interview processes; they didn't threaten and actively work to put millions of SEs on the street.
> they didn't threaten and actively work to put millions of SE on the street
Programmers in the 90s weren't less evil or had a stronger moral compass. They simply didn't have the opportunity to reduce the need for their fellow developers on a massive scale. They (we) would have, had we had the chance.
They (we) did it to tons of other industries. And we collectively patted ourselves on the back, saying that automation is a good thing and we're the good guys for doing it and people who lost their jobs will adapt and maybe they should just learn to code.
Now it's happening to (some of) us and suddenly it's evil?
No. The point is: programmers are whores. We like to act all righteous on forums, but very very few of us care enough about the consequences of our code to do something about it.
We either don't think about it ("what could go wrong?"), don't care about it (eh), justify it ("I need to eat!!!", "I'm just following orders"), or actively embrace it ("It's the future!").
> Programmers in the 90s weren't less evil or had a stronger moral compass. They simply didn't have the opportunity to reduce the need for their fellow developers on a massive scale. They (we) would have, had we had the chance.
Nah. The fact that such opportunity wasn't available attracted a different sort of person.
What is it with tech bros and ridiculous asocial agenda? You have some guilt complex or whatever shit?
> No. The point is: programmers are whores. We like to act all righteous on forums, but very very few of us care enough about the consequences of our code to do something about it.
What the fuck are you even saying?
> What is the point of rewrite
To win a news cycle.
For the foreseeable future, the AI market competition is not about which product can provide the most valuable utility to users. It's about which product can hold the protective aura of the social media and investment zeitgeist while competitors buckle under the strain of unfulfilled hype and over-leveraging.
Utility, engineering, efficiency... these are all menial details for the winners to reluctantly iron out in 2035.
unsafe just means that you take responsibility for the safety of the code contained within. Calling into non-Rust libraries has to be wrapped in unsafe. Making syscalls has to be wrapped in unsafe.
Bun needs to interact with FFI code. This gets wrapped in unsafe blocks.
There are many places where a JavaScript interpreter and library would need to make unsafe calls and operations.
It doesn't literally mean the code is unsafe. It means the code contained within is not something that can be checked by the compiler, so the writer takes responsibility for it.
There are many low-level data-munging and other benign operations that a human can demonstrate are safe, but which need to be wrapped in unsafe because they do things outside of what the compiler can check.
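A minimal illustration of that last point (toy code, not from Bun): the operation is provably in bounds, but the proof lives in a human-auditable comment rather than in the compiler:

```rust
// `unsafe` marks a block whose invariants the compiler cannot verify;
// it does not mean the code is actually wrong.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: the slice is non-empty (checked above), so index 0 is in
    // bounds. `get_unchecked` skips the bounds check the compiler would
    // otherwise insert, hence the unsafe block and this comment.
    Some(unsafe { *bytes.get_unchecked(0) })
}

fn main() {
    assert_eq!(first_byte(b"hi"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
}
```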
There's actually a good example of this in the rewrite [1], in `PathString::slice`. They are doing an unsafe operation to return a slice that could be a use-after-free, if the caller had not already guaranteed that an invariant will remain true. Following proper rust idiomatic practices, claude has added a SAFETY comment to the unsafe block to explain why it's safe: "caller guarantees the borrowed memory outlives this".
Now, normally, you'd communicate this contract to your API users by marking the type's constructor (PathString::init) as "unsafe", and including the contract in its documentation. Unfortunately in this case, this invariant does not exist - it appears to have been fabricated out of thin air by the LLM [2]. So, not only does this particular codebase have UB problems caused by unsafe code, the SAFETY blocks for the unsafe code are also, well, lies.
[1] https://github.com/oven-sh/bun/blob/63035b3e37/src/bun_core/...
[2] https://github.com/oven-sh/bun/blob/63035b3e37/src/bun_core/...
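For contrast, here is a sketch of how such a borrowed view can be made compiler-checked instead of comment-checked (a hypothetical type, not Bun's actual `PathString`): carry the source's lifetime in the type, so a use-after-free becomes a borrow error rather than UB.

```rust
use std::marker::PhantomData;

// Hypothetical borrowed-bytes view: raw pointer + length into memory it
// does not own, in the style of a Zig slice.
struct BorrowedBytes<'a> {
    ptr: *const u8,
    len: usize,
    // Zero-sized marker tying this view to the source's lifetime 'a.
    _source: PhantomData<&'a [u8]>,
}

impl<'a> BorrowedBytes<'a> {
    fn new(source: &'a [u8]) -> Self {
        BorrowedBytes {
            ptr: source.as_ptr(),
            len: source.len(),
            _source: PhantomData,
        }
    }

    fn slice(&self) -> &'a [u8] {
        // SAFETY: `ptr`/`len` were taken from a live &'a [u8] in `new`,
        // and PhantomData prevents the view from outliving that borrow,
        // so the borrow checker (not a comment) enforces the invariant.
        unsafe { std::slice::from_raw_parts(self.ptr, self.len) }
    }
}

fn main() {
    let data = b"hello".to_vec();
    let view = BorrowedBytes::new(&data);
    assert_eq!(view.slice(), b"hello");
}
```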
`PathString` worked the exact same way in our Zig code, with less visibility from the compiler & type system. And yes, it will be refactored heavily (or deleted overall) in the next week or so.
One potential way to solve this in a principled manner is to turn at least some "unsafe" annotations into ghost capability tokens that are explicitly threaded through the code and consistently checked by the compiler. Manufacturing the capability could itself be left as an unsafe operation, or require a runtime check of some kind.
You already see this in some cases, for example the NonZero<T> generic type can be viewed as a T endowed with a capability or token that just says "this particular value of type T is nonzero, so the zero value is available for niche purposes". But this could be expanded a lot, especially with some AI assistance.
This already happens all the time in rust, including in the standard library. The typical pattern is to define your CheckedType to be
pub struct CheckedType(UncheckedType);
e.g. where its inner field is private. Then, you only present safe constructors that check your invariant, and only provide methods that maintain the invariant.
For a concrete example, String in rust is a Vec<u8> with the guarantee that the underlying bytes correspond to valid UTF8. Concretely, it is defined as
#[derive(PartialEq, PartialOrd, Eq, Ord)]
#[stable(feature = "rust1", since = "1.0.0")]
#[lang = "String"]
pub struct String {
    vec: Vec<u8>,
}
You can construct a string from a vec of bytes via
fn from_utf8(vec: Vec<u8>) -> Result<String, _>;
as well as the unsafe method
unsafe fn from_utf8_unchecked(vec: Vec<u8>) -> String;
Note here that there isn't a separate capability/token, though. That is typically viewed as bad practice in Rust, as you can always ignore checking a capability/token. See for example Rust's mutexes, Mutex<T>, which carry the data (T) you want access to themselves: to get access to the data, you must call .lock(). There is a similar philosophy behind Rust's `Result` type: to get the data underlying it, you must handle the possibility of an error somehow (which can include panicking upon detecting the error, of course).
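The same pattern in miniature, with an invented invariant (nothing here is from the standard library except the naming convention):

```rust
// A newtype whose private field can only be produced through constructors
// that uphold the invariant: the value is always even.
pub struct EvenU32(u32);

impl EvenU32 {
    // Safe constructor: checks the invariant, returns the reject on failure.
    pub fn new(n: u32) -> Result<EvenU32, u32> {
        if n % 2 == 0 { Ok(EvenU32(n)) } else { Err(n) }
    }

    // Unchecked constructor, mirroring `from_utf8_unchecked`: the caller
    // promises the invariant holds; `unsafe` makes that promise greppable.
    //
    // SAFETY (for callers): `n` must be even.
    pub unsafe fn new_unchecked(n: u32) -> EvenU32 {
        EvenU32(n)
    }

    // Every method may rely on the invariant without rechecking it.
    pub fn half(&self) -> u32 {
        self.0 / 2
    }
}

fn main() {
    assert!(EvenU32::new(3).is_err());
    assert_eq!(EvenU32::new(8).unwrap().half(), 4);
}
```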
Yes, or you could review the code.
Even before AI, deterministic checks by compilers are almost always better than "review the code"
"review the code" as a solution will eventually fail and cause a problem, even pre-AI.
The entire point of unsafe blocks and SAFETY comments is that they are easy for humans to find and audit, but not compiler checkable. If it can be compiler-checked by some clever token system, then ... it's just plain safe rust, and you don't need to document any special safety invariants in the first place
even when you can review the code, it's good to have the compiler check for you. This is for similar reasons why it's better to have CI check correctness on each code change, vs testing the code thoroughly one time, and then being careful going forward.
> unsafe just means that you take responsibility for the safety of the code contained within.
In this case it means you delegated the responsibility to a notably flaky heuristic.
> a JavaScript interpreter
Bun is not a Javascript interpreter. But I do see the point.
That sounds like a starting point and an honest translation. If it was originally unsafe and suddenly became safe immediately after the rewrite, it would mean they broke existing behaviors.
Someone correct me if I'm wrong, but it's unlikely they wrote this first initial version of Rust and will leave it unchanged as-is. What's there now is a step in a long process, not the final destination.
Rust has a ton of other features besides safe. Like exhaustive checking of enum variants and the ability to avoid using null with option and result.
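A small sketch of both features named above (toy types, nothing from Bun):

```rust
// Exhaustive matching: the compiler rejects a `match` that misses a
// variant. Option: "no value" is a case the caller must handle, not a
// null pointer to forget.
enum ConnState {
    Idle,
    Connecting,
    Ready { fd: i32 },
}

fn describe(s: &ConnState) -> &'static str {
    // Removing any arm here is a compile error, not a runtime surprise.
    match s {
        ConnState::Idle => "idle",
        ConnState::Connecting => "connecting",
        ConnState::Ready { .. } => "ready",
    }
}

fn fd_of(s: &ConnState) -> Option<i32> {
    match s {
        ConnState::Ready { fd } => Some(*fd),
        _ => None,
    }
}

fn main() {
    assert_eq!(describe(&ConnState::Connecting), "connecting");
    assert_eq!(fd_of(&ConnState::Ready { fd: 3 }), Some(3));
    assert_eq!(fd_of(&ConnState::Idle), None);
}
```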
Zig has these modern language features too fwiw.
I think the goal was to do a massive rewrite for Anthropic (they acquired bun) and show that rewriting projects from lang -> lang with Claude can reduce security vulnerabilities to help with the hype for an IPO.
I don’t use/know Rust so I can’t comment on the quality, but there was a public security review that found issues with the new Rust code: https://x.com/SwivalAgent/status/2054468328119279923
This is an interesting experiment but I’m skeptical of any claims of success by Jarred/Anthropic due to the incentive to hype agents. There’s probably a trillion dollars at stake with the IPO. And Anthropic seems to be developing this part of their business with Mythos and the super review features.
But I’d like to see the same experiment done on a project without so much relying on the story being success.
There's a reasonable request to run the same analysis for the Zig version of the code as a comparison.
In lieu of that, it seems the Swivel devs ran an analysis on Tigerbeetle, one of the other major Zig projects, and found only 7 medium/low priority issues:
To clarify, those are things an LLM considers to be issues, and LLMs can make mistakes.
Some of those are clear false positives, others I need to revisit tomorrow to say one way or another.
Sure hope Mythos is as world beating as they claim, they’re gonna need it now.
Still writing the blog post about this. Will share more details.
For where this is coming from, skim the bugfixes in the Bun v1.3.14 and earlier release notes. Rust won’t catch all of these - leaks from holding references too long and anything that re-enters across the JS boundary are still on us. But a large % of that list is use-after-free, double-free, and forgot-to-free-on-error-path, which become compile errors or automatic cleanup.
You, nine days ago[0]:
> I work on Bun and this is my branch
> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
Maybe... it wasn't such an overreaction?
I'm really out the loop here so maybe you can help answer me a question - why is HN unhappy about this rewrite? why are people writing here almost as if they feel betrayed by Bun being rewritten from Zig into Rust?
I genuinely don't get it. I've been following this Bun stuff a bit but I don't understand where the HN sentiment is coming from.
The unhappiness is primarily stemming from Bun’s ownership by Anthropic - HN sees this as Anthropic using an OSS project for reckless marketing stunts.
For the record I don’t believe it’s a stunt, it’s ridiculous to me - everyone’s just seeing what they want to see out of sheer hate for anything Anthropic does.
In any case if the rewrite is really as reckless as many in this thread claim, we will see Bun collapse in on itself with a 1M LOC codebase the core team doesn’t understand, or rollback to Zig. So we don’t need to have a flamewar over it, time will answer the question.
Vibe coding a Rust rewrite of a widely used tool is basically catnip for the HN crowd.
My read is it's less the rewrite and more the messaging around the rewrite. Nine days between "you're over-reacting" and merge is surprising, to say the least. Sure will be interesting to see that blog post!
posting my read (since it differs so much from the others')- there's a 'holy war' being waged by people that think LLMs shouldn't do full rewrites of software. There are various reasons people think this (think LLMs are parrots that make slop and are incapable of writing good code, have environmental concerns, or are angry that software licenses can be circumvented). I call it a 'holy war' because I think most see our current trajectory as a bit inevitable and have a strong urge to proselytize their views and chide maintainers that use LLMs in ways they don't like.
Very similar angry comments happened with the discussions of the Chardet rewrite, next.js/vinext, and JSONata/gnata if you want to look at this in context.
You're not alone in voicing this, another (now dead) comment did it earlier too with a bit more of an emotional response (https://news.ycombinator.com/item?id=48134229).
Still, do you folks never try something to see how you feel about it, then choose to go one way or another? I'm not sure why it's so hard to see that it was an overreaction at the time, because it was an experiment; then at some point it stopped being an experiment, and now they've chosen to actually run with it.
Is this not a common occurrence for other people? Personally I change my mind all the time, especially based on new evidence, which usually experiments like this surface, I'm not sure I understand the whole "You said X some days ago" outrage that seems to cause people's reaction here.
Yes, sure, it's OK to change your mind. But don't you think that, in retrospect, the people Jarred accused of "overreacting" actually weren't?
No, what we knew then is still what was known then. Today is different, and seemingly they've committed to the rewrite, so now it makes sense that people have strong feelings about it, as it's no longer just an experiment.
> so now it makes sense that people have strong feelings about it, as it's no longer just an experiment.
It also makes sense to have strong feelings when you're able to pattern match well enough to predict something will happen despite others trying to convince you that your predictions are incorrect.
It's not overreacting when you correctly predict the future just because others couldn't. In the same vein, the idea that "everyone is out to get you" is not called paranoia when there are people actually out to get you. That's better called being observant.
Some of those who predicted correctly might also have overreacted, but I believe that the majority understood that to be a blanket statement about prediction as a whole vs any specific individual reaction.
Maybe the people who "were overreacting" just happened to have more foresight than you and me? Perhaps they saw where this was heading, and that led to their "overreaction"?
In what way? Foresight about what? It was an experiment before; people's reactions at the time don't make it any less of an experiment back then. I feel like I'm misunderstanding this entire conversation right now.
> It was an experiment before; people's reactions at the time don't make it any less of an experiment back then. I feel like I'm misunderstanding this entire conversation right now.
Yes - I don't think I explained my feelings well. But now I've finally understood them! So:
It was an experiment back then. Now, nine days and a million lines later, it suddenly isn't an experiment anymore? I understand there's a comprehensive test suite (yay!) but still... a million-line diff in nine days still sounds like an experiment to me.
The difference is an assumption of good faith, for the most part, and that is to some extent modulated by how reasonable people believe a large-scale LLM and/or Rust rewrite to be.
Why are you defending them so much, lol. It's no longer an underdog open source project fighting for survival, it's a freaking Anthropic subsidiary that has been bought for hundreds of millions of dollars.
“Nobody could have seen this coming…”?
Well apparently a lot of people did. Maybe Jarred didn’t, maybe you didn’t, but most people correctly predicted what was coming.
See what coming?! I really don't understand what's going on here. Correctly predicted what, that Bun was being rewritten into Rust? I'm not sure anyone doubted that, all the work they did was public???
What on earth is going on here?
> I'm not sure anyone doubted that, all the work they did was public???
https://news.ycombinator.com/item?id=48019226
> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
> What on earth is going on here?
With the nearly complete PR containing the port to Rust, a number of people predicted that it was going to happen. They were assured it was unlikely to happen, and then they were accused of overreacting over effectively nothing. When those same people, already upset about the rewrite, learned that their predictions (the same ones that had been rudely dismissed) were in fact correct, they became upset again; this time about being lied to.
Correct or not, it's reasonable to conclude they were lied to. Especially given they correctly predicted the future.
> Correct or not, it's reasonable to conclude they were lied to.
No, it's not. If we were 9 days away from a human-written version of this experiment, then yeah, it would be reasonable to conclude they were lied to, because a human-written version would progress so much more slowly and steadily that it's very unlikely you hadn't made up most of your mind a week before merge time.
But it's not human-written. It's months, perhaps years, of work compressed into a week, where the machine can go from 'nothing is working' to 'everything is working' in a few days. There is nothing reasonable about concluding you must have been lied to when such a delta in such a short time is possible. And if people fail to see that, then perhaps the initial assertions about an emotional meltdown were not so far off after all.