I am worried about Bun
wwj.dev | 469 points by remote-dev 15 hours ago
I disagree with the overall premise: Before the acquisition, Bun had to figure out how to monetize at some point.
Now, even though their parent company engages in some shitty practices with their other software (Claude Code), it's a stretch to assume this will also translate into making Bun worse: being worried makes sense, but I remain optimistic about Bun.
Especially given the different contexts of the two products: Claude Code is a gem of Anthropic, experiencing extreme growth, where any change can result in billing issues.
Bun is a JS runtime, and regardless of its growth, can focus on being the best runtime possible: It doesn't impact billing nor the bottom line of Anthropic, so they don't have to rush out patches due to abuse unlike CC.
It's unclear how it will pan out over the next few years, and it's still very early in the acquisition to see if anything will change, but I'm not concerned just yet.
It's interesting how quickly people buy the "abuse" line of thinking. We have known for a long time that the large AI labs are not profiting from subscription users who make heavy use of their subscription. That is independent of which agent/harness is used. The fair/real price for profitable use is pay-per-use token pricing.
These labs play the game of trying to kill competition in the harness space (because third-party harnesses risk commoditizing the underlying LLMs once they are all good enough), while playing a game of chicken with each other over how long they can burn money that way before they have to give up.
At some point they have to price their product fairly, and the only hope they have is to have killed all competition by then, which is of course a game that they seem to be losing. Useful models are getting smaller and cheaper to run every year, and it has hit a threshold at which we will see continued development of third-party harnesses even without the userbase of subscription users.
Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed. The secondary bet that they can lock users into their ecosystem (which requires them to subsidize their harness via unprofitable subscriptions burning their capital) and be able to monetize that later will also fail. They will have to compete on merit alone, and that is much less profitable.
It's a big leap to go from "some users may be using large quantities of tokens" to "the labs are burning money on subs in an attempt to kill the competition."
Lots of businesses have subscription programs in which a small number of users are money losers, but which in aggregate make money.
It's not even obvious that the labs are losing a lot of money on even a minority of users; Anthropic's usage caps are fairly aggressive, and a cursory analysis of the likely cost of serving tokens shows these are high-margin products at the API level, unlikely to be unprofitable within the usage constraints imposed on subscribers.
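That back-of-envelope can be made concrete. All numbers below are illustrative assumptions (not Anthropic's actual prices or costs); the point is only the shape of the argument: if the API margin is large, a subscriber has to burn a lot of tokens before the subscription runs at a loss.

```python
# Illustrative back-of-envelope: subscription vs. serving cost.
# Every figure here is an assumption for the sketch, not a real number.

api_price_per_m_tokens = 15.00  # assumed API price, $ per 1M output tokens
serving_cost_per_m = 3.00       # assumed hardware+energy cost per 1M tokens
sub_price = 100.00              # assumed monthly subscription price

# Gross margin at the API tier under these assumptions:
api_margin = 1 - serving_cost_per_m / api_price_per_m_tokens
print(f"API gross margin: {api_margin:.0%}")

# How many tokens a subscriber must consume before the sub loses money:
breakeven_m_tokens = sub_price / serving_cost_per_m
print(f"Break-even usage: {breakeven_m_tokens:.0f}M tokens/month")
```

Under these made-up numbers, a subscriber would need to consume over 33M tokens a month before the subscription itself is served at a loss, which is roughly what usage caps exist to prevent.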
I do think subscription models make commercial sense because users want predictable costs, and it's a club good in which marginal token cost is zero which helps consolidate their customers' purchasing volume to one provider. But that's a different claim than them serving it unprofitably to kill competition.
Also, they (Anthropic) are transitioning many of their enterprise customers to API consumption billing anyway.
I work in the video AI world.
We gave up on subscriptions long ago. They're rinky dink and get you a paltry amount of utilization before they run out.
The per day per seat costs can exceed $1000. This is already normal for studios, and it's already producing positive ROI.
There's simply no way to price video any other way than by usage. I suspect the same will come for everything.
> There's simply no way to price video any other way than by usage. I suspect the same will come for everything.
I don't think there's any way for all of the current AI models to work except as a usage model. The question is whether or not people are willing to pay for it that way in the long-term.
It sounds like it is producing positive ROI for your side, but I’m curious what the bean counters at the studios think of the bill when the budgets tighten.
positive ROI for customers?
AI is already in commercials, TV, and movies. Companies for the most part just don't tell you because the reaction of the general public is "eww, AI".
It's already here in a big way. You just won't be told about it until the public lightens up on the "AI hate".
I think the vagueness of statements like this is why a lot of people (myself included) are just so very skeptical. Surely some company wants to brag about their use. I don’t doubt it’s found its way into certain spaces, but by and large a lot of the “big” claims have been demonstrated to be borderline fraudulent. That Brad Pitt/Tom Cruise AI fight is fake. It is misleading. Taking existing green screen choreography and using AI to impose Brad Pitt and Tom Cruise’s faces is not what it is being sold as. Darren Aronofsky’s AI works are not good either. They can’t seem to hold a shot for more than a few seconds, why is that?
If the argument is that AI is being used in the background or for some VFX, sure, I’ll buy that. It’s just another tool, then. If it is being used to generate entire scenes, there’s no evidence of this, unless something like that atrocious holiday Coca-Cola commercial is a herald of our future.
As written, your claim is just handwavy. I get you might not be able to cite anything concrete due to NDAs or whatnot but, you also have to understand why a lot of people find this kinda unpersuasive.
I can respond directly to this, I’m a former VFX industry person and still fairly well connected.
The former you suggested: background plates and the like. The lack of actual creative-direction tools, trite visual style, lack of consistency/repeatability, and complete inability to be edited or adjusted easily make it a non-starter for most tasks. Compositors are fast; LLMs are slow at that scale. There are tools like ComfyUI that sit in the "we're running experiments / useful sometimes" category.
Loads of ML tools are in use and incredibly handy, but they fit into that tool category; actual wholesale video/image generation is not that prevalent, no.
> Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed.
I thought the prime bet was that the winning lab who reaches takeoff through recursive self improvement will make a galactic superintelligence. Not saying I believe this but the people running the labs do. Under this scenario if you are a few months behind at the pivotal time you might as well not exist at all.
only if said galactic superintelligence takes immediate steps to kill all its potential competitors, or hoover up all the world's resources, or some other aggressively zero-sum thing. otherwise I don't see what difference it makes down the line if you have the second superintelligence rather than the first.
and that's under the assumption that you can create a superintelligence that will continue to slavishly serve your agenda rather than establishing and following its own goals.
This is also assuming that AGI is even possible. So far there is no evidence that this is actually doable over anything but billions of years (and even then we have no idea how nature really managed it).
Edit: Meant to say AGI (superintelligence didn't make sense). Superintelligence is undefinable at the moment, so even considering whether it's possible is more of a philosophical/sci-fi thought experiment than anything else.
> So far there is no evidence that this is actually doable over anything but billions of years (and even then we have no idea how nature really managed it).
"The brain is so mysterious and unique, that we should abandon all attempts to even try to apply results like the general approximation theorem to it and discard all signs that some approximation is happening."
Why don't we see signs of intelligence in the universe? The simplest self-replicator requires the accidental synthesis of a sequence of 200 (or so) RNA nucleobases.
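The combinatorial point behind that claim can be sketched quickly: with 4 possible bases at each of ~200 positions, a specific sequence is one of 4^200 possibilities (the 200-base figure is the comment's own assumption, not an established number).

```python
import math

# Number of distinct RNA sequences of length 200, with 4 bases per position.
sequence_length = 200
sequences = 4 ** sequence_length

# Express the count as a power of ten for readability.
print(f"4^{sequence_length} ≈ 10^{math.log10(sequences):.0f}")  # ≈ 10^120
```

That is roughly 10^120 candidate sequences, which is the scale of improbability the argument leans on, though how nature actually searched that space is exactly the open question.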
BTW, your argument could have been applied word-for-word to powered flight in 1899. In short, argumentum ad ignorantiam.
No. To realize the possibility of powered flight one only needs to look at birds. AGI, on the other hand, is another word for God.
Just define "general" as "as general as allowed by math, physics, and practical limitations." Or use a conventional reading of AGI as a human-level intelligence (which we, naturally, have a working example of).
oh absolutely, no argument there, the case for AGI is pretty weak. I was just saying that I am even more sceptical that any of this is a "first or nothing" scenario - that is one of my biggest pet peeves about the entire tech sector.
ASI is the acronym you’re looking for. It stands for Artificial Superintelligence.
Arguably it’s already here. ChatGPT knows more than any human who has ever lived. It can carry out millions of conversations at once. And it has better working memory (“context”) than humans. And it can speak and write code much faster than humans.
Humans still have some advantages: Specialists are smarter than chatgpt in most domains. We’re better at using imagination. We understand the physical world better. But it seems like we’re watching the gap close in real time. A few years ago chatgpt could barely program. Now you can give it complex prompts and it can write large, complex programs which mostly work. If you extrapolate forward, is there any good reason to think humans will retain a lead?
ChatGPT can only respond to a prompt, and in the context of that prompt. It has no continuous awareness of anything. That isn't superintelligence. We are easily fooled because we have stupid monkey brains.
What we have is more like Artificial Superstupidity.
Ultimately our current model is extremely unlikely to perform better than the sum of current human knowledge. Godlike super-intelligence is a pipe dream with the current LLM based approaches.
Anthropic/OpenAI aren't planning to have their superintelligence take over the world, but they're still afraid that someone else will do it.
One could argue that AI has already started to hoover up all the world’s resources. AI buildout as a percent of GDP is already high and still rising.
Don't blame machines for our folly. This is just standard bubble behavior.
What if that's just the mechanism the machines take over the world?
Natural selection doesn't care why something replicated a lot.
Well, no, because no one is going to be coming in to work to build the next AI model after the Singularity.
We’ll all be bblbrvkxn46?/4!gfbxf’mgv5fhxtgcsgjcucz to buvtcibycuvinovrYdyvuctYcrzuvhxh gcuch7…:!
If OpenAI has the second superintelligence they have to merge with the first and cooperate. It's a provision in their charter.
I'm not sure anyone thinks their charter carries much weight at this point.
I don't think this race-to-superintelligence idea should be taken too seriously. It is great for headlines and gets people's imaginations going. It is mostly a marketing gag.
I look at superintelligence this way: software engineering used to be considered among the most mentally demanding jobs one can have. And in this field, more and more people give up large parts of their job and become approximately product managers, letting the machine do the engineering part. So we are about there. Who cares that there are some puzzles in some "synthetic" benchmark in which humans outsmart AIs?
One thing I don’t understand about this viewpoint (which I understand isn’t your own): why does one benefit so tremendously from getting there a month before competitors? I’m sure having a month of superintelligence with no competition would be lucrative, but do they think achieving superintelligence first will impede competitors from also achieving it a month later?
A week of superintelligence should be enough to take over the world, or at least sabotage your competitors. And even if someone else gets there a week later, they'll be permanently one week behind the curve (until the AI hits some physical limit, I suppose).
But that's all just sci-fi worldbuilding.
>they'll be permanently one week behind the curve
What if the competitor's architecture is able to produce tokens twice as fast? What if the competitor secures a one-month exclusivity deal on Nvidia's next generation?
A month with a superintelligence at your hands could be quite impactful, especially if you're willing to break the law or normal operating decorum in the pursuit of protecting what you have. A superintelligence, if wielded so, could destroy your competitors in a great many ways, ranging from the relatively benign solution of outcompeting them to exploiting them and tearing them apart from the inside.
A genuine superintelligence is a very, very scary thing to have under the control of one person or organisation.
If I interpret "a machine superintelligence" as "a classroom of 300IQ humans," I'm not really sure how this is true? You still have material and energy constraints, you can't think your way out of those.
For the concrete problem we're discussing, you can hack your competitors out of existence, replace all of your knowledge workers to shed costs, hyperoptimise your logistics, etc. It's not just intelligence, it's speed and scale.
Bostrom's Superintelligence (2014) is a bit of a dreary read, and I didn't finish it, but it pulls no punches about the leverage that a superintelligence might have in our highly-connected world.
> For the concrete problem we're discussing, you can hack your competitors out of existence, replace all of your knowledge workers to shed costs, hyperoptimise your logistics, etc. It's not just intelligence, it's speed and scale.
For the concrete problem we're discussing, that hypothetical belongs in a Marvel movie, not reality. In the real world, you can't 'hack your competitors out of existence', and you'll be going to prison very quickly for trying this sort of thing.
I did say
> especially if you're willing to break the law / normal operating decorum
in my original post. If you have a superintelligence, you have something that can find and take advantage of every exploitation vector in parallel - technical, social, bureaucratic - and use that to destroy a company from the inside. A superintelligence that is subservient to its operator is an informational superweapon.
I agree that this sounds fanciful, but you can see what existing cyberattacks can do to organisations; it does not take that much imagination to gauge how much worse it could be when the process can be automated and scaled.