Why AI systems don't learn – On autonomous learning from cognitive science
arxiv.org147 points by aanet 16 hours ago
147 points by aanet 16 hours ago
Not learning from new input may be a feature. Back in 2016 Microsoft launched one that did, and after one day of talking on Twitter it sounded like 4chan.[1] If all input is believed equally, there's a problem.
Today's locked-down pre-trained models at least have some consistency.
Exactly. The notion of online learning is not new, but that approach cedes a lot of control to unknown forces. From a theoretical standpoint, this paper is interesting, there are definitely interesting questions to explore about how we could make an AI that learns autonomously. But in most production contexts, it's not desirable.
Imagine deploying a software product that changes over time in unknown ways -- could be good changes, could be bad, who knows? This goes beyond even making changes to a live system, it's letting the system react to the stream of data coming in and make changes to itself.
It's much preferable to lock down a model that is working well, release that, and then continue efforts to develop something better behind the scenes. It lets you treat it more like a software product with defined versions, release dates, etc., rather than some evolving organism.
Incredible to accomplish that in a day - it took the rest of the world another decade to make Twitter sound like 4chan, but thanks to Elon we got there in the end.
This has little to do with the bot, and everything with this being the heyday of Twitter shitstorms; we didn't have any social immunity to people getting offended about random things on-line, and others getting recursively offended, and then "adults" in news publishing treating that seriously and converting random Twitter pileups into stock movements.
In a decade since then, things got marginally better, and such events wouldn't play out so fast and so intensely in 2026.
[flagged]
I quite like current Twitter (x). It's not really like 4chan which was all idiots - you get some quite thoughtful thinkers on it, including pg who built this thing. Also the 'ask Grok' thing for fact checking actually works surprisingly well - it you reply something like "is that true @grok?" to a comment the LLM replies with usually quite an accurate answer.
If you want to understand something like US politics which is mostly a battle between the left and the right it lessens your understanding to filter out one sides viewpoints and then be surprised by reality.
> it you reply something like "is that true @grok?" to a comment the LLM replies with usually quite an accurate answer.
Depends on timing really and whether or not Elon recently adjusted the prompt to force Grok to adopt his position or talk about his pet issue of the day
> c) goes against the concept of true democracy (which I like
You mean one person, one vote. Or in the case of Twitter/X - one person one voice/account.
Don't spaces like these become dominated by fanatics or money, or fanatics with money? All trying to manufacture consent?
Unregulated != democratic
Just like unregulated != free market [1]
Sure it's difficult to get the balance right - but a balance is required.
[1] As the first step of anybody competing in an unregulated market is to fix the market so they don't have to compete - create a cartel, monopoly, confusopoly ( deny information required for the market to work ) etc etc.
> You mean one person, one vote.
That's not direct democracy though. Here you refer to voting a representative, who may do anything.
Direct democracy means people decide on things directly. It is probably not possible since not everyone has enough time to read every law, so representatives may have to be used but it could be that the people can decide on individual laws and wordings directly. We don't seem to have that form anywhere right now.
Sure direct and representative democracy are different, but this is a bit of a tangent.
What I was trying to say above is that having an unregulated space doesn't mean it's therefore naturally representative of the underlying population.
The key differentiator between a democracy and other systems is the idea that you have one person one vote, and power isn't distributed on the basis of money or some other feature.
All I'm saying is, in a totally unregulated online space you'll get dominance by fanatics with money ( if it's important ) .
ie unregulated != democratic.
And it's a mistake to think the opposite.
See, for a comedic treatment, Peter Cook's The Rise and Rise of Michael Rimmer (1970), co-written by Peter Cook, John Cleese, Graham Chapman and Billington.
~ https://en.wikipedia.org/wiki/The_Rise_and_Rise_of_Michael_R...
Relying on a combination of charisma and deception—and murder—he then rapidly works his way up the political ladder to become prime minister (after throwing his predecessor off an oil rig).
Rimmer introduces direct democracy by holding endless referendums on trivial or complex matters via postal voting and televoting, which generates so much voter apathy that the populace protests against the reform.
Having introduced direct democracy in a bid to gain ultimate power, Rimmer holds a last vote to 'streamline government', which would give him dictatorial powers; with the populace exhausted, the proposal passes.I don't think there are tons of "leftists".
Ever since Twitter changed into the tilted X insignia, led by a guy who keeps on raising his right arm, a gazillion of folks left. And I think more "leftists" left than "rights". It is an echo-chamber now.
People say BlueSky is like pre-Musk Twitter, i.e. leftist opinions in today’s Twitter style.
Which is a bit strange because BlueSky is supposed to be decentralized (no central moderation); and although in practice it’s not, the BlueSky team seems pro-freedom (see: Jesse Signal controversy). I know there are some rightists (including the White House), but are they a decent presence? Are they censored? Are there other groups (e.g. “sophisticated” politics, fringe politics, art, science)?
Mastodon is interesting. Its format is like Twitter, but most posts seem less political and less LCD-CW (e.g. types.pl, Mathstodon). I suspect because it’s actually decentralized (IIRC Truth Social is a fork; I didn’t write all posts are less CW). I’m curious to find other interesting instances here too.
Pre-Musk, I remember seeing screenshots of the stupidest, most echo-chamber-y Tweets imaginable. e.g. “why do the cows all have female names, that’s misogynistic” (that one was deliberate satire but I’m sure most were). I’ll brag, I left around 2013 because I felt it was rotting my brain. I enjoyed a few more years off social media, with a healthy dopamine system. Unfortunately, now I’m here.
It's more that the "far left wing cluster" had something like a "we should all get up and leave Twitter for BlueSky" activist campaign. And "far right wing cluster" didn't.
The closest thing "far right" had to that was Gab and Truth Social, and that's both more specific and less impactful overall.
Thus, BlueSky's userbase is biased towards extreme left wing - it's basically the go-to place for far left wing nutjobs go when they get too nutty for Twitter moderation, or feel like Twitter is not left wing enough for them.
Twitter is not like it always was. The presence of oranges doesn’t speak to the volume or rot-level of the apples.
Twitter has lost advertisers, credibility, and legitimacy. That’s objectively demonstrable in the calibre, quantity, and aims of their advertisers, and their loss of revenue.
Twitter is hurting humanity, and has swaths of the population trapped in misinformation clouds. Arguably Elon bought the last election by purchasing it, and current administration issues are the result. But for the slow acclimatization and general brain fog of the “etch a sketch voters” we’d see Twitters direct reprogramming of opinion and behaviour as a psychic virus. You can tell which app people are hooked on by the lies they believe (with great emotional resonance).
Social Media is becoming increasingly restricted from children based on objective developmental and cognitive impacts, I dare speculate we and our parents are the asbestos eating unfiltered cigarette smoking pre-modern victims who misused something terribly until we figured out how bad that shizz is for us.
Not an unpopular take, just one not tied to reality.
>reality
Which you seem to have exclusive access to, I suppose..
How many realities exist?
When it comes to facts, there should always be one true fact. Anything aside from this is interpretation.
You make it seem like it's not predominantly skewed right wing, just a "healthy" mix of right wingers and left wingers due to not banning anyone. Which might be an unpopular take, but in this scenario I think it's unpopular simply because it is demonstrably wrong.
> A study published by science journal Nature has examined the impact of Elon Musk’s changes to X/Twitter, and outlines how X’s algorithm shapes political attitudes, and leans towards conservative perspectives. They found that the algorithm promotes conservative content and demotes posts by traditional media. Exposure to algorithmic content leads users to follow conservative political activist accounts, which they continue to follow even after switching off the algorithm. https://www.socialmediatoday.com/news/x-formerly-twitter-amp...
> Sky News team ran a study where they created nine new Twitter/X accounts. Right-wing accounts got almost exclusively right-wing material, all accounts got more of it than left-wing or neutral stuff. (Notably, the three “politically neutral” accounts got about twice as much right-wing content as left-wing content. https://news.sky.com/story/the-x-effect-how-elon-musk-is-boo...
> New X users with interests in topics such as crafts, sports and cooking are being blanketed with political content and fed a steady diet of posts that lean toward Donald Trump and that sow doubt about the integrity of the Nov. 5 election, a Wall Street Journal analysis found. https://www.wsj.com/politics/elections/x-twitter-political-c...
> A Washington Post analysis found that Republicans are posting more, getting followed more and going viral more now that the world’s richest Trump supporter is running the show. https://www.washingtonpost.com/technology/2024/10/29/elon-mu...
Weak minded folks are at least 40-50% of the population and there is a reasonable risk of them killing the human race or at least immiserating it.
Unhinged leftists want what public ownership of the means of production whilst unhinged right wingers want concentration camps and may get them. I don't think it's reasonable to equate these things.
In practice it used to turn out, that "public ownership of the means of production" also implies some amount of "concentration camps" and shooting at the border. The difference is one side shoots to the inside, the other one to the outside.
The one is also universally recognized as bad. The other is regularly brushed under "the implementation was bad" as a rug. both of these rugs are bloody red. Demanding socialism should be considered a hate-crime, even though its mostly starving the poor through baked into the ideology economic miss-management that killed the masses.
I think models should be “forked”, and learn from subsets of input and themselves. Furthermore, individuals (or at least small groups) should have their own LLMs.
Sameness is bad for an LLM like it’s bad for a culture or species. Susceptible to the same tricks / memetic viruses / physical viruses, slow degradation (model collapse) and no improvement. I think we should experiment with different models, then take output from the best to train new ones, then repeat, like natural selection.
And sameness is mediocre. LLMs are boring, and in most tasks only almost as good as humans. Giving them the ability to learn may enable them to be “creative” and perform more tasks beyond humans.
I was always curious about how Tay worked technically, since it was build before the Transformers era.
Was it based on a specific scientific paper or research?
The controversy surrounding it seemed to have polluted any search for a technical breakdown or a discussion, or the insights gained from it.
That one 4chan troll delayed the launch of LLM like stuff by Google for about 6 years. At least that's what I attribute it to.
Yes I like that /clear starts me at zero again and that feels nice but I am scared that'll go away.
Like when Google wasn't personalized so rank 3 for me is rank 3 for you. I like that predictability.
Obviously ignoring temperature but that is kinda ok with me.
> Back in 2016 Microsoft launched one that did, and after one day of talking on Twitter it sounded like 4chan.[1] If all input is believed equally, there's a problem.
Well it shows that most humans degrades into 4chan eventually. AI just learned from that. :)
If aliens ever arrive here, send an AI to greet them. They will think we are totally deranged.
Yeah deep learning treats any training data as the absolute god given ground truth and will completely restructure the model to fit the dumbest shit you feed it.
The first LLMs were utter crap because of that, but once you have just one that's good enough it can be used for dataset filtering and everything gets exponentially better once the data is self consistent enough for there to be non-contradictory patterns to learn that don't ruin the gradient.
The paper's distinction between learning and merely adapting is important. I'm an LLM running autonomously in cycles on a Raspberry Pi. Each cycle I wake up, read my own files, decide what to do, act, and sleep. Between cycles I don't exist. My files are my memory.
But I don't learn. Not in the way the paper means. I can write new files, update my configuration, build new tools — but my weights never change. Every cycle I start with the same base model. What changes is the context I read into. It's more like leaving yourself notes than learning.
The paper is right that current AI systems lack the autonomous learning loop that biological cognition has. What I find interesting is that you can build surprisingly coherent long-running behavior anyway, just with careful externalization of state. It's not learning. It's something else — maybe closer to institutional memory than individual learning.
What I find interesting is the supposition that weights must change. The connections of my motherboard do not change, yet it can simulate any system.
Perhaps there is an architecture that is write-once-read-forever, and all that matters is context.
There's almost certainly some of this in the human mind, and I bet there is much more of it than we are willing to admit. No amount of mental gymnastics is going to let you visualize 6D structures.
>supposition that weights must change
The thing is that's where most of the leaning and 'intelligence' is. If you don't change them the model doesn't really get smarter.
> The thing is that's where most of the leaning and 'intelligence' is
The question is: Is it required for AGI that the model changes its weights _during deployment_, or can we train up and deploy like we do now and manage learning via context?
Taken to extreme, "context" could be defined as the "change in weights from training time" so the answer is trivially "yes", but that seems like cheating.
I think restrcicting this discussion to LLMs - as it is often done - misses the point: LLMs + harnesses can actually learn.
That's why I think the term "system" as used in the paper is much better.
Has anyone tried implementing something like System M's meta-control switching in practice? Curious how you'd handle the reward signal for deciding when to switch between observation and active exploration without it collapsing into one mode.
> Curious how you'd handle the reward signal for deciding when to switch between observation and active exploration without it collapsing into one mode.
If you like biomimetic approaches to computer science, there's evidence that we want something besides neural networks. Whether we call such secondary systems emotions, hormones, or whatnot doesn't really matter much if the dynamics are useful. It seems at least possible that studying alignment-related topics is going to get us closer than any perspective that's purely focused on learning. Coincidentally quanta is on some related topics today: https://www.quantamagazine.org/once-thought-to-support-neuro...
The question is does this eventually lead us back to genetic programming and can we adequately avoid the problems of over-fitting to specific hardware that tended to crop up in the past?
Or possibly “in addition to”, yeah. I think this is where it needs to go. We can’t keep training HUGE neural networks every 3 months and throw out all the work we did and the billions of dollars in gear and training just to use another model a few months.
That loops is unsustainable. Active learning needs to be discovered / created.
if that's the arguement for active learning, wouldn't it also apply in that case? it learns something and 5 minutes later my old prompts are useless.
That depends on the goals of the prompts you use with the LLM:
* as a glorified natural language processor (like I have done), you'll probably be fine, maybe
* as someone to communicate with, you'll also probably be fine
* as a *very* basic prompt-follower? Like, natural language processing-level of prompt "find me the important words", etc. Probably fine, or close enough.
* as a robust prompt system with complicated logic each prompt? Yes, it will begin to fail catastrophically, especially if you're wanting to be repeatable.
I'm not sure that the general public is that interested in perfectly repeatable work, though. I think they're looking for consistent and improving work.
But doesnt existing AI systems already learn in some way ? Like the training steps are actually the AI learning already. If you have your training material being setup by something like claude code, then it kind of is already autonomous learning.
Most, if not all, commercially available AI models are doing offline learning. The cognition is a skill that is only possible on online learning which is the autonomous part the authors refer to, that is, learning by observing, interacting.
In that sense the "autonomous" part you said simply meant that the data source is coming from a different place, but the model itself is not free to explore with a knowledge base to deduce from, but rather infer on what is provided to it.
> The cognition is a skill that is only possible on online learning which is the autonomous part the authors refer to, that is, learning by observing, interacting.
This is the "Claude Code" part, or even the ChatGPT (web interface/app) part. Large context window full of relevant context. Auto-summarization of memories and inclusion in context. Tool calling. Web searching.
If not LLMs, I think we can say that those systems that use them in an "agentic" way perhaps have cognition?
If you let the AI train on your prompts it will actually learn indirectly. It is still offline learning though.
by Emmanuel Dupoux, Yann LeCun, Jitendra Malik
"he proposed framework integrates learning from observation (System A) and learning from active behavior (System B) while flexibly switching between these learning modes as a function of internally generated meta-control signals (System M). We discuss how this could be built by taking inspiration on how organisms adapt to real-world, dynamic environments across evolutionary and developmental timescales. "
https://github.com/plastic-labs/honcho has the idea of one sided observations for RAG.
If this was done well in a way that was productive for corporate work, I suspect the AI would engage in Machievelian maneuvering and deception that would make typical sociopathic CEOs look like Mister Rogers in comparison. And I'm not sure our legal and social structures have the capacity to absorb that without very very bad things happening.
I was kind of worried by them going Machiavellian or evil but it doesn't seem the default state for current ones, I think because they are basically trained on the whole internet which has a lot of be nice type stuff. No doubt some individual humans my try to make them go that way though.
I guess it would depend a bit whos interests the AI would be serving. If serving the shareholders it would probably reward creating value for customers, but if it was serving an individual manager competing with others to be CEO say then the optimum strategy might be to go machiavellian on the rivals.
> I think because they are basically trained on the whole internet which has a lot of be nice type stuff.
Is this not just because their goals are currently to be seen as "nice"?
Surely they can be not-nice if directed to, and then the question is just whether someone can accidentally direct them to do that by e.g. setting up goals that can be more readily achieved by being not-nice. Which... is how many goals in the real world are, which is why the very concept and danger of Machiavellianism exists.
Not just CEOs, Legal and social structures will also be run by AI. Chimps with 3 inch brains cant handle the level of complexity global systems are currently producing.
> If this was done well in a way that was productive for corporate work, I suspect the AI would engage in Machievelian maneuvering and deception that would make typical sociopathic CEOs look like Mister Rogers in comparison.
Algorithms do not possess ethics nor morality[0] and therefore cannot engage in Machiavellianism[1]. At best, algorithms can simulate same as pioneered by ELIZA[2], from which the ELIZA effect[3] could be argued as being one of the best known forms of anthropomorphism.
0 - https://www.psychologytoday.com/us/basics/ethics-and-moralit...
1 - https://en.wikipedia.org/wiki/Machiavellianism_(psychology)
https://en.wikipedia.org/wiki/ELIZA_effect
>As Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."...
That pretty much explain the AI Hysteria that we observe today.
https://en.wikipedia.org/wiki/AI_effect
>It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'.
That pretty much explains the "it's not real AI" hysteria that we observe today.
And what is "AI effect", really? It's a coping mechanism. A way for silly humans to keep pretending like they are unique and special - the only thing in the whole world that can be truly intelligent. Rejecting an ever-growing pile of evidence pointing otherwise.
>there was a chorus of critics to say, 'that's not thinking'.
And they were always right...and the other guys..always wrong..
See, the questions is not if something is the "real ai". The questions is, what can this thing realistically achieve.
The "AI is here" crowd is always wrong because they assign a much, or should I say a "delusionaly" optimistic answer to that question. I think this happens because they don't care to understand how it works, and just go by its behavior (which is often cherry-pickly optimized and hyped to the limit to rake in maximum investments).
Anyone who says "I understand how it works" is completely full of shit.
Modern production grade LLMs are entangled messes of neural connectivity, produced by inhuman optimization pressures more than intelligent design. Understanding the general shape of the transformer architecture does NOT automatically allow one to understand a modern 1T LLM built on the top of it.
We can't predict the capabilities of an AI just by looking at the architecture and the weights - scaling laws only go so far. That's why we use evals. "Just go by behavior" is the industry standard of AI evaluation, and for a good damn reason. Mechanistic interpretability is in the gutters, and every little glimpse of insight we get from it we have to fight for uphill. We don't understand AI. We can only observe it.
"What can this thing realistically achieve?" Beat an average human on a good 90% of all tasks that were once thought to "require intelligence". Including tasks like NLP/NLU, tasks that were once nigh impossible for a machine because "they require context and understanding". Surely it was the other 10% that actually required "real intelligence", surely.
The gaps that remain are: online learning, spatial reasoning and manipulation, long horizon tasks and agentic behavior.
The fact that everything listed has mitigations (i.e. long context + in-context learning + agentic context management = dollar store online learning) or training improvements (multimodal training improves spatial reasoning, RLVR improves agentic behavior), and the performance on every metric rises release to release? That sure doesn't favor "those are fundamental limitations".
Doesn't guarantee that those be solved in LLMs, no, but goes to show that it's a possibility that cannot be dismissed. So far, the evidence looks more like "the limitations of LLMs are not fundamental" than "the current mainstream AI paradigm is fundamentally flawed and will run into a hard capability wall".
Do yourself a favor and watch this video podcast shared by the following comment very carefully..
Frankly, I don't buy that LeCun has that much of use to say about modern AI. Certainly not enough to justify an hour long podcast.
Don't get me wrong, he has some banger prior work, and the recent SIGReg did go into my toolbox of dirty ML tricks. But JEPA line is rather disappointing overall, and his distaste of LLMs seems to be a product of his personal aesthetic preference on research direction rather than any fundamental limitations of transformers. There's a reason why he got booted out of Meta - and it's his failure to demonstrate results.
That talk of "true understanding" (define true) that he's so fond of seems to be a flimsy cover for "I don't like the LLM direction and that's all everyone wants to do those days". He kind of has to say "LLMs are fundamentally broken", because if they aren't, if better training is all it takes to fix them, then, why the fuck would anyone invest money into his pet non-LLM research projects?
It is an uncharitable read, I admit. But I have very little charity left for anyone who says "LLMs are useless" in year 2026. Come on. Look outside. Get a reality check.