Metacognitive laziness: Effects of generative AI on learning motivation
bera-journals.onlinelibrary.wiley.com
294 points by freddier 20 hours ago
This stands to reason. If you need the answer to a question, and you can either get it directly or spend time researching it, you're going to learn much more with the latter approach than the former. You may be disciplined enough to do more research even if the answer is presented to you directly, but most people will not do that, and most companies are not interested in that; they want quick, 'efficient', 'competitive' solutions. They aren't considering the long-term downside of this.
We have accounts from the ancient Greeks of the old school's attitude towards writing. In the deep past, they maintained an oral tradition, and scholars were expected to memorize everything. They saw writing/reading as a crutch that was ruining the youth's memory.
We stand now at the edge of a new epoch, with reading being replaced by AI retrieval. There is concern that AI is a crutch and that the youth will be weakened.
My opinion: valid concern. No way to know how it turns out. No indication yet that use of AI is harming business outcomes. The meta argument "AGI will cause massive social change" is probably true.
SOCRATES: Do you know how you can speak or act about rhetoric in a manner which will be acceptable to God?
PHAEDRUS: No, indeed. Do you?
SOCRATES: I have heard a tradition of the ancients, whether true or not they only know; although if we had found the truth ourselves, do you think that we should care much about the opinions of men?
PHAEDRUS: Your question needs no answer; but I wish that you would tell me what you say that you have heard.
SOCRATES: At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis is sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days the god Thamus was the king of the whole country of Egypt; and he dwelt in that great city of Upper Egypt which the Hellenes call Egyptian Thebes, and the god himself is called by them Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. It would take a long time to repeat all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
“The ratio of literacy to illiteracy is constant, but nowadays the illiterates can read and write.” Alberto Moravia, London Observer, 14 Oct. 1979
It’s a pretty interesting point.
If a large fraction of the population can’t even hold five complex ideas in their head simultaneously, without confusing them after a few seconds, are they literate in the sense of e.g. reading Plato?
I hope they're literate enough to understand that we're only reading about that alleged exchange because Plato wrote it down.
Median literacy in the US is famously somewhere around the 6th grade level, so it's unlikely most of the population is much troubled by the thoughts of Plato.
> can’t even hold five complex ideas in their head
As an aside, my observation of beginning programmers is that even two (independent) things happening at the same time is a serious cognitive load.
Amusingly enough, I remember having the same trouble on the data structures final in college, so “people in glass houses”.
What makes an "idea" atomic/discrete/cardinal? What makes an idea "complex" vs simple or merely true? Over what finite duration of time does it count as "simultaneously" being held?
Whatever you want them to be?
I don’t care about enforcing any specific interpretation on passing readers…
Just keep in mind that Plato and (especially) Socrates made a living by going against commonly held wisdom at the time, so this probably wasn't an especially widely held belief in ancient Greece.
Perhaps we're going technologically backwards.
Oral tradition compared to writing is clearly less accurate. Speakers can easily misremember details.
Going from writing/documentation/primary sources to AI seems to me like going back to oral tradition, where we must trust the "speaker" - in this case the AI - to be truthful in their interpretation of their sources.
Walter J. Ong's Orality and Literacy is an illuminating read.
One benefit of orality is that the speaker can defend or clarify their words, whereas once you've written something, your words are liable to be misinterpreted by readers without the benefit of your rebuttal.
Consider too that courts (in the US at least) prefer oral arguments to written ones; perhaps we consider it more difficult to lie in person than in writing. PhD defenses are another holdover of that tradition: a chance to demonstrate your competence in person and not receive your credentials merely from your written materials.
As for AI, I disagree that it's more like oral tradition. AI is not a speaker; it has no stake in defending its claims. I would call it hyperliterate: an emulation of everything that has been written.
I can definitely attempt to clarify something I've already said in writing! But yes, interactivity is vital for healthy communication.
> Oral tradition compared to writing is clearly less accurate.
I used to think this. Then I moved to New Mexico 6 years ago and had to confront the reality that the historical cultures and civilizations of this area (human habitation goes back at least 20k years) never had writing, and so all history was oral.
It seemed obvious to me that writing was superior, but I reflected on the way in which even written news stories or movie reviews or travelogues are not completely accurate and sometimes actually wrong. The idea that the existence of a written historical source somehow implies (better) fidelity has become less and less convincing.
On the other hand, even if the oral histories have degenerated into actual fictions, there's that old line about "the best way to tell the truth is with fiction", and I now feel much more favorably inclined towards oral histories as perhaps at least as good as, if not better than, their written cousins.
Am I the only one to expect an S curve regarding progress and not an eternal exponential?
The fact that people moved away from prideful principle to leverage new tech in the past doesn't guarantee that the same idea will pan out in the current context.
But as you say... we'll see.
> Am I the only one to expect an S curve regarding progress and not an eternal exponential?
To LLMs specifically, as they are now? Sure.
To LLMs in general, or generative AI in general? Eventually, in some distant future, yes.
Sure, progress can't ride the exponent forever - the observable universe is finite, and as far as we can tell right now, we're fundamentally limited by the size of our light cone. And while progress in any sufficiently narrow field follows an S-curve, new discoveries spin off new avenues with their own S-curves. If you zoom out a little, those S-curves neatly add up to an exponential function.
So no, for the time being, I don't expect LLMs or generative AIs to slow down - there's plenty of tangential improvements that people are barely beginning to explore. There's more than enough to sustain exponential advancement for some time.
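To make that "S-curves add up to an exponential" picture concrete, here's a toy numerical sketch (the growth rates, midpoints and ceilings are entirely made up, just to illustrate the shape of the argument): each "technology" follows a logistic curve that saturates, but successive ones start later with exponentially larger ceilings, and their sum tracks a plain exponential to within a roughly constant factor.

    import numpy as np

    def logistic(t, midpoint, ceiling, rate=2.0):
        # A single S-curve: slow start, rapid growth, saturation at `ceiling`.
        return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

    # Evaluate away from the edges so the first and last curves don't dominate.
    t = np.linspace(2.0, 8.0, 200)

    # Successive "technologies" (illustrative numbers): each starts one unit
    # later and saturates at twice the previous ceiling.
    total = sum(logistic(t, midpoint=k, ceiling=2.0**k) for k in range(16))

    # The ratio to a pure exponential of the same base stays in a narrow band,
    # i.e. the stacked S-curves look exponential once you zoom out.
    ratio = total / 2.0**t
    print(round(ratio.min(), 2), round(ratio.max(), 2))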
I think the parent’s main point is that even if LLMs sustain exponential advancement, that doesn’t guarantee that humanity’s advancement will mimic technology’s growth curve.
In other words, it’s possible to have rapid technological advancement without significant improvement/benefit to society.
> In other words, it’s possible to have rapid technological advancement without significant improvement/benefit to society.
This is certainly true in many ways already.
On the other hand, it's also complicated, because society/culture seems to be downstream of technology; we might not be able to advance humanity in lock step or ahead of technology, simply because advancing humanity is a consequence of advancing technology.
If the constraint is computation in a light cone, the theoretical bound is time cubed, not exponential. With a major decrease in scaling as we hit the bounds of our galaxy.
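A rough sketch of where that cubic bound comes from (back-of-the-envelope, assuming roughly uniform matter density and ignoring cosmic expansion): after time $t$ the region you can have reached is a sphere of radius $ct$, so the matter - and hence the computation - you can recruit scales with its volume,
$$V(t) = \tfrac{4}{3}\pi (ct)^3 \propto t^3,$$
which is polynomial. Any exponential $e^{\lambda t}$ eventually overtakes a polynomial, so exponential growth in computation can only be a transient phase.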
Intergalactic travel is, of course, rather slow.
Oh, you mean an S curve on the progress of the AI?
Most of the discussion on the thread is about LLMs as they are right now. There's only one odd answer that throws an "AGI" around as if those things could think.
Anyway, IMO, it's all way overblown. People will learn to second-guess the LLMs as soon as they are hit by a couple of bad answers.
hmm yeah sorry, I meant the benefits of humans using current AI.
by that I mean: leveraging writing was a benefit for humans to store data and think over the longer term using a passive technique (stones, tablets, papyrus)... but an active tool might not have a positive effect on usage and brains.
if you give me shoes, I might run further to find food; if you give me a car, I mostly stop running, and there might be no better fruit 100 miles away than what I had on my hill. (weak metaphor)
Yeah, I agree. Those things have a much smaller benefit over hypertext and search engines than hypertext and search engines had over libraries.
But I don't know if it fits an S-curve or if they are just below the trend.
Even if progress stops:
1. Current reasoning models can do a -lot- more than skeptics give them credit for. Typical human performance even among people who do something for employment is not always that high.
2. In areas where AI has mediocre performance, it may not appear that way to a novice. It often looks more like expert level performance, which robs novices of the desire to practice associated skills.
Lest you think I contradict myself: I can get good output for many tasks from GPT4 because I know what to ask for and I know what good output looks like. But someone who thinks the first, poorly prompted dreck is great will never develop the critical skills to do this.
This is a good point, forums are full of junior developers bemoaning that LLMs are inhumanly good at writing code -- not that they will be, but that they are. I've yet to see even the best produce something that makes me worry I might lose my job today, they're still very mediocre without a lot of handholding. But for someone who's still learning and thinks writing a loop is a challenge, they seem magical and unstoppable already.
Information technology has grown exponentially since the first life form created a self-sustaining, growing loop.
You can see evolution speeding up rapidly: the jumbled information inherent in chemical metabolisms evolved to be centralized in DNA, and then DNA evolved to componentize body plans.
RATE: over billions of years.
Nerves, nervous systems, brains, all exponentially drove individual information capabilities forward.
RATE: over hundreds of millions, tens of millions, millions, 100s of thousands.
Then human brains enabled information to be externalized. Language allowed whole cultures to "think", and writing exploded cultures' ability to share and to remember.
RATE: over tens of thousands, thousands.
Then we developed writing. A massive improvement in recording and sharing of information. Progress sped up again.
RATE: over hundreds of years.
We learned to understand information itself, as math. We learned to print. We learned how to understand and use nature so much more effectively to progress, i.e. science, and science informed engineering.
RATE: over decades
Then the processing of information got externalized, in transistors, computers, the Internet, the web.
RATE: every few years
At every point, useful information accumulated and spread faster. And enabled both general technology and information technology to progress faster.
Now we have primitive AI.
We are in the process of finally externalizing the processing of all information. Getting to this point was easier than expected, even for people who were very knowledgeable and positive about the field.
RATE: every year, every few months
We are rapidly approaching complete externalization of information processing. Into machines that can understand the purpose of their every line of code, every transistor, and the manufacturing and resource extraction processes supporting all that.
And can redesign themselves, across all those levels.
RATE: It will take logistical time for machine-centric design to take over from humans. For the economy to adapt. For the need for humans as intermediaries and cheap physical labor to fade. But progress will accelerate many more times this century. From years, to time scales much smaller.
Because today we are seeing the first sparks of a Cambrian explosion of self-designed self-scalable intelligence.
Will it eventually hit the top of an "S" curve? Will machines get so smart that getting smarter no longer helps them survive better, use our solar system's or the stars' resources, create new materials, or advance and leverage science any further?
Maybe? But if so, that would be an unprecedented end to life's run. To the acceleration of the information loop, from some self-reinforcing chemical metabolism, to the compounding progress of completely self-designed life, far smarter than us.
But back to today's forecast: no, no the current advances in AI we are seeing are not going to slow down, they are going to speed up, and continue accelerating in timescales we can watch.
First because humans have insatiable needs and desires, and every advance will raise the bar of our needs, and provide more money for more advancement. Then second, because their general capability advances will also accelerate their own advances. Just like every other information breakthrough that has happened before.
Useful information is ultimately the currency of life. Selfish genes were just one embodiment of that. Their ability to contribute new innovations, on time scales that matter, has already been rendered obsolete.
> Grown exponentially since the first life form
Not really. The total computing power available to humanity per person has likely gone down as we replaced “self-driving” horses with cars.
People created those curves by fitting definitions to the curve rather than to data.
You can't disprove global warming by pointing out an extra cool evening.
But I don't understand your point even as stated. Cars took over from horses as technology provided transport with greater efficiencies and higher capabilities than "horse technology".
Subsequently transport technology continued improving. And continues, into new forms and scales.
How do you see the alternative, where somehow horses were ... bred? ... to keep up?
Cars do not strictly have higher capabilities than horses. GP was pointing out that horses can think. On a particularly well-trained horse, you could fall asleep on it and wake up back at your house. You can find viral videos of Amish people still doing this today.
Ah, good point. Then the global warming point applies, but in a much less trivial way.
There is turbulence in any big directed change. New tech that is better overall often creates inconveniences and performs less well than some of the tech it replaces. Sometimes only initially, but sometimes for longer periods of time.
A net gain, but we all remember simpler things whose reliability and convenience we miss.
And some old tech retains lasting benefits in niche areas. Old-school, inefficient, cheap light bulbs are, ironically, not so inefficient when used where their heat is useful.
And horses fit that pattern. In many ways they are still not obsolete, which is tied to their intelligence: as companions, and as creatures that still work and inspire.
--
I suspect the history of evolution is filled with creatures that got wiped out by new waves that were more generally advanced, but less advanced in a few ways.
And we have a small percentage of remarkable ancient creatures still living today, seemingly little changed.
The issue is more than just a local cold snap. When the fundamental graph you're basing a theory on is wrong, it's worth rejecting the theory.
The total computing power of life on Earth has in fact fallen over the last 1,000 years. Ants alone represent something like 50x the computing power of all humans and all computers on the planet, and we've reduced the number of insects on Earth more than we've added humans or computing power.
The same is true across a great number of much longer events. Ice ages and even larger-scale events aren't just a cool afternoon, even across geological timescales.