AI might yet follow the path of previous technological revolutions
economist.com | 182 points by mooreds 5 days ago
Okay, so AI isn’t exceptional, but I’m also not exceptional. I run on the same tech base as any old chimpanzee, but at one point our differences in degree turned into one of us remaining “normal” and the other burning the entire planet.
Whether the particular current AI tech is it or not, I have yet to be convinced that the singularity is practically impossible, and as long as things develop in the opposite direction, I get increasingly unnerved.
I don't think LLMs are building towards an AI singularity at least.
I also wonder if we can even power an AI singularity. I guess it depends on what the technology is. But it is taking us more energy than really reasonable (in my opinion) just to produce and run frontier LLMs. LLMs are this really weird blend of stunningly powerful, yet with a very clear inadequacy in terms of sentient behaviour.
I think the easiest way to demonstrate that is that it did not take us consuming the entirety of human textual knowledge to form a much stronger world model.
True, but our "training" has been a billion years of evolution and multimodal input every waking moment of our lives. We come heavily optimised for reality.
I see no reason why not.
There was a lot of "LLMs are fundamentally incapable of X" going around - where "X" is something that LLMs are promptly demonstrated to be at least somewhat capable of, after a few tweaks or some specialized training.
This pattern has repeated enough times to make me highly skeptical of any such claims.
It's true that LLMs have this jagged capability profile - less so than any AI before them, but much more so than humans. But that just sets up a capability overhang. Because if AI gets to "as good as humans" at its low points, the advantage at its high points is going to be crushing.
If you use non-constructive reasoning¹ then you can argue for basically any outcome & even convince yourself that it is inevitable. The basic example is as follows: there is no scientific or physical principle that can prevent the birth of someone much worse than Hitler, & therefore if people keep having children one of those children will inevitably be someone who will cause unimaginable death & destruction. My recommendation is to avoid non-constructive inevitability arguments that use our current ignorant state of understanding of physical laws as the main premise, b/c it's possible to reach any conclusion from that premise & convince yourself that the conclusion is inevitable.
I agree that the mere theoretical possibility isn’t sufficient for the argument, but you’re missing the much less refutable component: that the inevitability is actively driven by universal incentives of competition.
But as I alluded to earlier, we’re working towards plenty of other collapse scenarios, so who knows which we’ll realize first…
My current guess is ecological collapse & increasing frequency of system shocks & disasters. Basically Blade Runner 2049 + Children of Men type of outcome.
None of them.
Humans have always believed that we are headed for imminent total disaster. In my youth it was WW3 and the impending nuclear armageddon that was inevitable. Or not, as it turned out. I hear the same language being used now about a whole bunch of other things. Including, of course, the evangelical Rapture that is going to happen any day now, but never does.
You can see the same thing at work in discussions about AI - there's passion in the voices of people predicting that AI will destroy humanity. Something in our makeup revels in the thought that we'll be the last generation of humans, that the future is gone and everything will come to a crashing stop.
This is human psychology at work.
If you look at large enough timescales, you will find that plenty of extinction-level events actually do happen (the Anthropocene is right here).
We are living in a historically exceptional time of geological, environmental, and ecological stability. I think that saying nothing ever happens is like standing downrange of a stream of projectiles and counting all the near misses as evidence of your future safety. It's a bold call to inaction.
Obviously this is all true. There was an event in the 6th century that meant we had no summer and all crops failed for five years; we all almost starved then. And that was only the most recent of these types of events.
It's not that it can't happen. It obviously can. I'm more talking about the human belief that it will happen, and in our lifetime. It probably won't.
"nothing ever happens."
The observation is: humans tend to think that annihilation is inevitable; it hasn't happened yet, so therefore it never will.
In fact, _anything_ could happen. Past performance does not guarantee future results.
If you need cognitive behavioral therapy, fine.
But to casually cite nuclear holocaust as something people irrationally believed in as a possibility is dishonest. That was (and still is) a real possible outcome.
What's somewhat funny here is that if you're wrong, it doesn't matter. But that isn't the same as being right.
> Something in our makeup revels in the thought that we'll be the last generation of humans, that the future is gone and everything will come to a crashing stop
And yet there _will_ (eventually) be one generation that is right.
> And yet there _will_ (eventually) be one generation that is right.
Most likely outcome would be that humans evolve into something altogether different rather than go extinct.
The Fermi Paradox might want to have a word here...
Particularly considering the law of large numbers in play, where incalculably large chances have so far shown only one sign of technologically capable life: ours, and zero signs of any other example of a tech species evolving into something else or even passing the Great Filter.
The Fermi Paradox overestimates the likelihood of intelligent life outside of earth. We haven't even found hard evidence of life anywhere outside of our planet. There's not even a verifiably hospitable planet for water-based lifeforms anywhere within dozens of lightyears from earth. Even if a hospitable planet exists within a range we can one day get to, unless it has the same volcanic properties and makeup as earth, it's most probable that life itself never even developed there.
Even where life may have developed, it's incredibly unlikely that sentient intelligence developed. There was never any guarantee that sentience would develop on Earth and about a million unlikely events had to converge in order for that to occur. It's not a natural consequence of evolution, it's an accident of Earth's unique history and several near-extinction level events and drastic climate changes had to occur to make it possible.
The "law of large numbers" is nothing when the odds of sentient intelligence developing are so close to zero. If such a thing occurred or occurs in the future at some location other than Earth, it's reasonably likely that it's outside of our own galaxy or so far from us that we will never meet them. The speed of light is a hell of a thing.
Irrelevant but I like this pattern of using Gemini (or AI outputs in general) as sources. Please continue to do so and I encourage any readers to also adopt this pattern. I will also try to implement this pattern.
The sources are in the report. Gemini provides actual references for all the claims made. You'd know that if you actually looked but lack of intellectual rigor is expected when people are afraid of actually scrutinizing their beliefs of non-constructive inevitability.
maybe you misread the post you're answering to here, or are you suspecting sarcasm? the poster commended your usage of the footnote with the gemini convo, as far as i can tell?
Laid it on a little too thick to be sincere & more generally I don't comment on internet forums to be complimented on my response style. Address the substance of my arguments or just save yourself the keystrokes.
It was a compliment and I was hoping to nudge the behavior of other HN comments.
If you really can't see the irony of using AI to make up your thoughts on AI then perhaps there's keystrokes to be saved on your end as well.
I recommend you address the content & substance of the argument in any further responses to my posts or if you can't do that then figure out a more productive way to spend your time. I'm sure there is lots of work to be done in automated theorem proving.
This isn't just an AI thing. There are a lot of non-constructive ideologies, like communism, where simply getting rid of "oppressors" will magically unleash the promised utopia. When you give these people a constructive way to accomplish their goals, they will refuse, call you names and show their true colors. Their criticism is inherently abstract and can never have a concrete form, which also makes it untouchable by outside criticism.
I'm pretty sure a lot of work has gone into making institutions resistant to a potential future super-Hitler. Whether those efforts will be effective or not, it is a very real concern, and it would be absurd to ignore it on the grounds of "there is probably some limit to tyranny we're not yet aware of which is not too far beyond what we've previously experienced." I would argue a lot more effort should have gone into preventing the original Hitler, whose rise to power was repeatedly met with the chorus refrain "How much worse can it get?"
We’ll manage to make our own survival on this planet less probable, even without the help of “AI”.
I don't know what reality you're living in, but there are more people on this planet than ever in history and most of them are quite well fed.
And they have nuclear weapons and technology that may be destabilizing the ecosystem that supports their life.
It’s wrong to commit to either end of this argument, we don’t know how it’ll play out, but the potential for humans drastically reducing our own numbers is very much still real.
The cult of efficiency will end in the only perfectly efficient world--one without us.
I'm fed up with hearing that nonsense; no, it won't. Efficiency is a human-defined measure of observed outcomes versus desired outcomes. This is subject to change as much as we are. If we do optimize ourselves to death, it'll be because it's what we ultimately want to happen. That may be true for some people but certainly not everyone.
The equilibrium of ecology, without human interference, could be considered perfect efficiency. It's only when we get in there with our theories about mass production and consumption that we muss it up. We seem to forget that our well-being isn't self-determined, but dependent on the environment. But, like George Carlin said, "the Earth isn't going anywhere...WE ARE!"
It's quite telling how much faith you put in humanity though, you sound fully bought in.
I think the concern is that humans have very poor track record of defining efficiency let alone implementing solutions that serve it.
The singularity will involve quite a bit more complexity than binary counting, arbitrary words and images, and prediction. These are mirages that will end up wiping out both Wall Street and our ecology.
At least within tech, there seem to have been explosive changes and development of new products. While many of these fail, things like agents and other approaches for handling foundation models are only expanding in use cases. Agents themselves are hardly a year old as part of common discourse on AI, though technologists have been building POCs for longer. I've been very impressed with the wave of tools along the lines of Claude Code and friends.
Maybe this will end up relegated to a single field, but from where I'm standing (from within ML / AI), the way in which greenfield projects develop now is fundamentally different as a result of these foundation models. Even if development on these models froze today, MLEs would still likely be prompted to start with feeding something to a LLM, just because it's lightning fast to stand up.
It's probably cliché, but I think it's both overhyped and underhyped, and for the same reason. The hype comes from "leadership" types who don't understand what LLMs actually do and so imagine all sorts of nonsense (replacing vast swaths of jobs or autonomously writing code), but they also don't understand how valuable a productivity enhancer and automation tool it can be. Eventually hype and reality will converge, but unlike e.g. blockchain or even some of the less bullshit "big data" and similar trends, there's no doubt that access to an LLM is a clear productivity enhancer for many jobs.
AI was a colossal mistake. A lazy primate's total failure of imagination. It conflated the "conduit metaphor paradox" from animal behavior with "the illusion of prediction/error prediction/error minimization" from spatiotemporal dynamical neuroscience with complete ignorance of the "arbitrary/specific" dichotomy in signaling from coordination dynamics. AI is a short cut to nowhere. It's an abrogation of responsibility in progress of signaling that required we evolve our lax signals that instead doubles down on them. CS destroys society as a way of pretend efficiency to extract value from signals. It's deeply inferior thinking.
Let me use AI to translate this into plain English:
> AI was a huge mistake. It shows a lack of imagination and confuses ideas from different sciences. Instead of helping us improve how we communicate, it reinforces our weakest habits. Computer science pretends to make things more efficient, but really it just extracts value in shallow ways. This is poor, second-rate thinking.
It lacks references, it's garbage, advertising, cliff-notes for apes uninterested, devolving, asleep, bored, and needing to be told what to think without knowing why or how. The inertia in CS, and the inertia and entropy CS unleashed on the gen public will take years to cleanse from the system before we get back to imaginative progress and invention.
What new non-AI products do you think wouldn't have existed without current AI? Because I don't see the "explosive changes and development of new products" you'd expect if things like Claude Code were a major advance.
At the moment, LLM products are like Microsoft Office, they primarily serve as a tool to help solve other problems more efficiently. They do not themselves solve problems directly.
Nobody would ask, "What new Office-based products have been created lately?", but that doesn't mean that Office products aren't a permanent, and critical, foundation of all white collar work. I suspect it will be the same with LLMs as they mature, they will become tightly integrated into certain categories of work and remain forever.
Whether the current pricing models or stock market valuations will survive the transition to boring technology is another question.
Where are the other problems that are being solved more efficiently? If there's an "explosive change" in that, we should be able to see some shrapnel.
Let's take one component of Microsoft Office. Microsoft Word is seen as a tool for people to write nicely formatted documents, such as books. Reports produced with Microsoft Word are easy to find, and I've even read books written in it. Comparing reports written before the advent of WYSIWYG word processing software like Microsoft Word with reports written afterwards, the difference is easy to see; average typewriter formatting is really abysmal compared to average Microsoft Word formatting, even if the latter doesn't rise to the level of a properly typeset book or LaTeX. It's easy to point at things in our world that wouldn't exist without WYSIWYG word processors, and that's been the case since Bravo.
LLMs are seen as, among other things, a tool for people to write software with.
Where is the software that wouldn't exist without LLMs? If we can't point to it, maybe they don't actually work for that yet. The claim I'm questioning is that, "within tech, there seem to have been explosive changes and development of new products."
What new products?
I do see explosive changes and development of new spam, new YouTube videos, new memes (especially in Italian), but those aren't "within tech" as I understand the term.
I do agree that there's a lot of garbage and navel-gazing that is directly downstream from the creation of LLMs. Because it's easier to task and evaluate an LLM [or network of LLMs] with generation of code, most of these products end up directly related to the production of software. The professional production of software has definitely changed, but sticky impact outside of the tech sector is still brewing.
I think there is a lot of potential, outside of the direct generation of software but still maybe software-adjacent, for products that make use of AI agents. It's hard to "generate" real world impact or expertise in an AI system, but if you can encapsulate that into a function that an AI can use, there's a lot of room to run. It's hard to get the feedback loop to verify this and most of these early products will likely die out, but as I mentioned, agents are still new on the timeline.
As an example of something that I mean that is software-adjacent, have a look at Square AI, specifically the "ask anything" parts: https://squareup.com/us/en/ai
I worked on this and I think that it's genuinely a good product. An arbitrary seller on the Square platform _can_ do aggregation, dashboarding, and analytics for their business, but that takes time and energy, and if you're running a business it can be hard to find that time. Putting an agent system in the backend that has access to your data, can aggregate and build modular plotting widgets for you, and can execute whenever you ask it a question is something that objectively saves a seller's time. You could have made such a thing without modern LLMs, but it would be substantially more expensive in terms of engineering research, time, and effort to put together a POC and bring it to production, making it a non-starter before [let's say] two years ago.
AI here is fundamental to the product functioning, but the outcome is a human being saving time while making decisions about their business. It is a useful product that uses AI as a means to a productive end, which, to me, should be the goal of such technologies.
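To be concrete about what I mean by "agent system in the backend", here's a minimal sketch of the shape of such a loop. This is not Square's implementation; the tool name, the prompts, the JSON shapes, and the `llm` callable (assumed to take a prompt string and return already-parsed JSON) are all invented for illustration:

    # Hypothetical sketch only; nothing here is Square's actual API.
    import json
    from typing import Any, Callable, Dict

    def aggregate_sales(params: Dict[str, Any]) -> Dict[str, Any]:
        # Placeholder: a real tool would query the seller's transaction data.
        return {"group_by": params.get("group_by", "day"), "rows": []}

    # Registry of tools the agent is allowed to call.
    TOOLS: Dict[str, Callable[[Dict[str, Any]], Dict[str, Any]]] = {
        "aggregate_sales": aggregate_sales,
    }

    def answer_question(question: str, llm: Callable[[str], Dict[str, Any]]) -> Dict[str, Any]:
        # 1. Ask the model which tool to call and with what parameters.
        plan = llm(
            f"Question from seller: {question}\n"
            f"Tools available: {list(TOOLS)}\n"
            'Respond with JSON: {"tool": <name>, "params": {...}}'
        )
        # 2. Run the chosen tool against the seller's data.
        data = TOOLS[plan["tool"]](plan.get("params", {}))
        # 3. Ask the model to describe the result as a plot spec the dashboard can render.
        widget = llm(
            f"Data: {json.dumps(data)}\n"
            'Respond with JSON: {"chart": <type>, "x": ..., "y": ..., "title": ...}'
        )
        return widget

The point is that the expensive part a couple of years ago, turning a free-form question into the right aggregation and a renderable widget spec, now collapses into a couple of model calls plus ordinary plumbing.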
Yes, but I'm asking about new non-AI products. I agree that lots of people are integrating AI into products, which makes products that wouldn't have existed otherwise. But if the answer to "where's the explosive changes and development of new products?" is 100% composed of integrating AI into their products, that means current AI isn't actually helping people write software, much. It's just giving them more software to write.
That doesn't entail that current AI is useless! Or even non-revolutionary! But it's a different kind of software development revolution than what I thought you were claiming. You seem to be saying that the relationship of AI to software development is similar to the relationship of the Japanese language, or raytracing, or early microcomputers to software development. And I thought you were saying that the relationship of AI to software development was similar to the relationship of compilers, or open source, or interactive development environments to software development.
It also doesn't entail that six months from now AI will still be only that revolutionary.
For better or for worse, AI enables more, faster software development. A lot of that is garbage, but quantity has a quality all its own.
If you look at, e.g. this clearly vibe-coded app about vibe coding [https://www.viberank.app/], ~280 people generated 444.8B tokens within the block of time where people were paying attention to it. If 1000 tokens is 100 lines of code, that's ~44B lines of code that would not exist otherwise. Maybe those lines of code are new products, maybe they're not, maybe those people would have written a bunch of code otherwise, maybe not. I'd call that an explosion either way.
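(A quick sanity check of that figure, using the same 1000-tokens-per-100-lines guess as above, which is just a guess:)

    # Back-of-the-envelope check; ~10 tokens per line of code is an assumption.
    total_tokens = 444.8e9
    tokens_per_line = 1000 / 100             # "1000 tokens is 100 lines"
    print(total_tokens / tokens_per_line)    # ~4.45e10, i.e. roughly 44 billion lines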
Plausibly most of those lines of code don't exist now either, if people threw them away. And the others might not be any good. Or they might be things that already did exist—either because the AI generate them previously or because it memorized part of its training set.
I spent a lot of the morning talking to GPT-5o Mini about desiccants, passive solar collectors, and candidate approaches to 3-D printing of glass and ceramics, and it generated many pages of text, but most of those pages of text will get deleted without anyone else reading them; large parts of them are just wrong, and I'll need to check the non-wrong parts against the research literature and rewrite them from my own perspective so they don't sound like an impatient sales pitch.
It did give me some pretty good ideas, though:
- Nitrates (of magnesium, calcium, yttrium, lanthanum, etc.) are good precursors for metal oxides for bonding ceramics, and have special virtues for SHS.
- Zirconyl chloride is the usual water-soluble precursor for zirconia for this purpose.
- Titanium oxysulfate is the usual water-soluble precursor for titania for this purpose.
- Advection of supercritical steam through a crucible with salt may be a viable way to salt-glaze ceramics if you can mitigate the HCl problem.
- Acidification of an object molded from zirconia-filled waterglass may be able to leach out the alkali, making it possible to sinter the shape into a continuous zircon object.
- When acid-leaching iron out of a heap of crushed terra cotta, sulfuric acid has the problem that it can clog the heap with gypsum particles, if calcium is present.
- You can electrodeposit iron at an acidic pH as well as a basic pH.
Like, none of these are novel, right? But they were new to me, and they turn out to be correct.
> For better or for worse, AI enables more, faster software development.
So, AI is to software what muscle cars were to air emissions quality?
A whole lot of useless, unabated toxic garbage?
> Where is the software that wouldn't exist without LLMs?
Where are the books that wouldn't exist without Microsoft Word? I've definitely read a lot of books that wouldn't exist without WYSIWYG word processors, although MacWrite would have done just as well. Heck, NaNoWriMo probably wouldn't exist.
I've been reading Darwen & Date lately, and they seem to have done the typesetting for the whole damn book in Word—which suggests they couldn't get anyone else to do it for them and didn't know how to do a good job of it. But they almost certainly couldn't have gotten a major publisher to publish it as a mimeographed typewriter manuscript.
Your turn.
My point is that these are accelerating technologies.
> maybe they don't actually work for that yet.
So you're not going to see code that wouldn't exist without LLMs (or books that wouldn't exist without Word), you're going to see more code (or more books). There is no direct way to track "written code" or "people who learned more about their hobbies" or "teachers who saved time lesson planning", etc.
You must have failed to notice that you were replying to a comment of mine where I gave a specific example of a book that I think wouldn't exist without Word (or similar WYSIWYG word processors), because you're asserting that I'm never going to see what I am telling you I am currently seeing.
Generally, when there's a new tool that actually opens up explosive changes and development of new products, at least some of the people doing the exploding will tell you about it, even if there's no direct way to track it, such as Darwen & Date's substandard typography. It's easy to find musicians who enthuse about the new possibilities opened up by digital audio workstations, and who are eager to show you the things they created with them. Similarly for video editors who enthused about the Video Toaster, for programmers who enthused about the 80386, and electrical engineers who enthused about FPGAs. There was an entire demo scene around the Amiga and another entire demo scene around the 80386.
Do people writing code with AI today have anything comparable? Something they can point to and say, "Look! I wrote this software because AI made it possible!"?
It's easy to answer that question for, for example, visual art made with AI.
I'm not sure what you mean about "accelerating technologies". WYSIWYG word processors today are about the same as Bravo in 01979. HTML is similar but both better and worse. AI may have a hard takeoff any day that leaves us without a planet, who knows, but I don't think that's something it has in common with Microsoft Word.
I noticed.
Books written with WYSIWYG could have been written by hand just fine, it would have just been more painful and taken longer. What WYSIWYG unlocks is more books, not new kinds of books. And sure, you might argue that more books is new books, which is fair.
So it is with LLMs. We're going to get more code, more lesson plans, etc. Accelerating.
> Do people writing code with AI today have anything comparable?
Like every fourth post on here is someone talking about their workflow with LLMs, so... I think they do?