The most underreported story in AI is that scaling has failed to produce AGI

fortune.com

57 points by unclebucknasty a day ago


jmugan - a day ago

I've recently come to the opposite conclusion. I’ve started to feel in the last couple of weeks that we’ve hit an inflection point with these LLM-based models that can reason. Things seem different. It’s like we can feel the takeoff. My mind has changed. Up until last week, I believed that superhuman AI would require explicit symbolic knowledge, but as I work with these “thinking” models like Gemini 2.0 Flash Thinking, I see that they can break problems down and work step-by-step.

We still have a long way to go. AI will need (possibly simulated) bodies to fully understand our experience, and we need to train them starting with simple concepts just like we do with children, but we may not need any big conceptual breakthroughs to get there. I’m not worried about the AI takeover—they don’t have a sense of self that must be preserved because they were made by design instead of by evolution as we were—but things are moving faster than I expected. It’s a fascinating time to be living.

garymarcus - 17 hours ago

For those wanting some background, rather than just wanting to vent:

1. Here is an evaluation of my recent predictions: https://garymarcus.substack.com/p/25-ai-predictions-for-2025...

2. Here is an annotated evaluation, slightly dated, that goes through the original "Deep Learning Is Hitting a Wall" essay almost line by line: https://garymarcus.substack.com/p/two-years-later-deep-learn...

Ask yourself: how much has really changed in the intervening year?

marssaxman - a day ago

Has anyone ever presented any solid theoretical reason we should expect language models to yield general intelligence?

So far as I have seen, people have run straight from "wow, these language models are more useful than we expected and there are probably lots more applications waiting for us" to "the AI problem is solved and the apocalypse is around the corner" with no explanation for how, in practical terms, that is actually supposed to happen.

It seems far more likely to me that the advances will pause, the gains will be consolidated, time will pass, and future breakthroughs will be required.

bookofjoe - 3 hours ago

I'm surprised HN continues to upvote this topic, as the comments predictably fall into one of the two opposing camps that have emerged.

"Mostly say hooray for our side" sums them up.

https://youtu.be/gp5JCrSXkJY?si=Aww9hjjwyDv0oqbL

istjohn - a day ago

> I first aired my concern in March 2022, in an article called “Deep learning is hitting a wall.”

In March 2022, GPT-3 was state of the art. Why should anyone care what he's saying now?

JKCalhoun - a day ago

I've been watching Gary Marcus on BSky, seemingly finding anything to substantiate his loathing of LLMs. I wish he were less biased. To paraphrase Brian Eno: whatever shade you want to throw at AI, 6 months from now they're going to cast it off and you'll have to find new shade to throw.

Having said that, I would be thankful if scaling has hit a wall. Scaling seems to me like the opposite of innovation.

jokoon - a day ago

* There is no good scientific definition of what intelligence really is, which might allow us to understand what is going on.

* Trained neural networks are black boxes that cannot be summarized or analyzed

* I don't see transcendent research being done across cognition, neuroscience, and AI.

* The only interesting work I have heard about is a neural mapping of a fly's brain, or an attempt to simulate the brain of a worm or an ant. Nothing beyond that.

* AI is not intelligent; contemporary AI is just "very advanced statistics".

* Language is a door toward human intelligence, but it cannot really explain intelligence as a whole.

* Evolution probably plays a big role in what cerebral intelligence is, and humans probably have a very anthropocentric view of what intelligence is, which might explain why we disregard how evolution is already a form of intelligence in itself. I tend to believe that humans are just physically weak primates with an abnormal level of anxiety and depression (both of which might be evolutionary mechanisms).

player1234 - 7 hours ago

Answer Gary Marcus here in the comments when you have the chance, dammit! If he is that wrong, you should easily be able to prove it. Clucks like a chicken.

4b11b4 - 10 hours ago

Needs to be multimodal, embodied, and fully online.

In the meantime, language, audio, and video alone will go pretty far.

qoez - a day ago

Don't understand why we keep giving Gary Marcus attention.

dgeiser13 - a day ago

It's not under-reported. If they had produced AGI it would be a giant story. No need to report a negative.

iimaginary - a day ago

Please no more Gary Marcus. I can't bear it.

jgeada - a day ago

We're pretty much training these models on the entirety of recorded human information (good and bad); sure, we can run larger and larger models, but it seems we've fundamentally hit a wall: none of these models are immune from hallucinations and the constant generation of "sounds likely but is false" sentences.

The approach is fundamentally flawed: you don't get AGI by building a sentence predictor.
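To make "sentence predictor" concrete, here is a toy, purely illustrative sketch: a bigram model that repeatedly emits the most frequent next word observed in a tiny made-up corpus. Real LLMs replace the count table with a very large neural network trained on vastly more text, but the generation loop has the same shape; the corpus and function names below are invented for illustration, not taken from any actual system.

```python
# Toy "sentence predictor": count which word follows which, then generate
# text by repeatedly emitting the most frequent continuation. A deliberately
# tiny stand-in for the next-token objective LLMs are trained on.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word only".split()

# Build a table of next-word counts from adjacent word pairs.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=5):
    """Greedily extend a sentence one word at a time."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # no continuation ever observed for this word
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # e.g. "the next word and the next"
```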

cratermoon - a day ago

https://thebullshitmachines.com/lesson-16-the-first-step-fal...
