A definition of AGI
arxiv.org | 264 points by pegasus | 20 hours ago
> defining AGI as matching the cognitive versatility and proficiency of a well-educated adult
I don't think people really realize how extraordinary an accomplishment it would be to have an artificial system matching the cognitive versatility and proficiency of an uneducated child, much less a well-educated adult. Hell, AI matching the intelligence of some nonhuman animals would be an epoch-defining accomplishment.
I think the bigger issue is people confusing impressive but comparatively simpler achievements (everything current LLMs do) with anything remotely near the cognitive versatility of any human.
But the big crisis right now is that for an astonishing number of tasks that a normal person could come up with, chatgpt.com is actually as good as or better than a typical human.
If you took the current state of affairs back to the 90s, you'd quickly convince most people that we're there. Given that we're actually not, we now have to come up with new goalposts.
I don't know. People in the 90s were initially fooled by Eliza, but soon understood that Eliza was a trick. LLMs are a more complex and expensive trick. Maybe it's time to overthrow the Turing Test. Fooling humans isn't necessarily an indicator of intelligence, and it leads down a blind alley: Language is a false proxy for thought.
Consider this. I could walk into a club in Vegas, throw down $10,000 cash for a VIP table, and start throwing around $100 bills. Would that make most people think I'm wealthy? Yes. Am I actually wealthy? No. But clearly the test is the wrong test. All show and no go.
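For a sense of how shallow the original trick was, here is a minimal Eliza-style exchange sketched in Python with invented rules (Weizenbaum's actual script was larger, but the mechanism was the same): match a surface pattern, reflect it back, and never model meaning at all.

```python
import re

# Eliza-style rule: a regex plus a canned reflection template.
# The rules and phrasings here are illustrative, not Weizenbaum's originals.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]

def eliza_reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no rule fires

print(eliza_reply("I feel like LLMs are a trick"))
# -> Why do you feel like LLMs are a trick?
```

People stopped being fooled once they could see the rules. The open question is whether LLMs are the same kind of trick at a scale where the rules can no longer be read off.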
> LLMs are a more complex and expensive trick
The more I think about this, the more I think the same is true for our own intelligence. Consciousness is a trick, and AI development is lifting the veil of our vanity. I'm not claiming that LLMs are conscious or intelligent or whatever. I'm suggesting that next-token prediction has scaled so well and covers so many use cases that the next couple of breakthroughs will show us how simple intelligence is once you remove the complexity of biological systems from the equation.
Animals are conscious, (somewhat) intelligent and have no verbal language.
Consciousness is an entirely different thing from language, which was created by humans to communicate with one another.
Language is the baseline for collaboration, not intelligence.
> Animals are conscious
All we know about animal consciousness is limited to behaviour, e.g. the subset of the 40 or so "consciousness" definitions which are things like "not asleep" or "responds to environment".
We don't know that there's anything like our rich inner world in the mind of a chimpanzee, let alone a dog, let alone a lobster.
We don't know what test to run to determine whether any other intelligence, including humans and AI, actually has an inner experience. Asking doesn't settle it: we can be sure neither that a failure to report one indicates its absence, nor that the ability to report one is anything more than mimicking the voices around them.
For the latter, note that many humans with aphantasia only find out that "visualisation" isn't just a metaphor at some point in adulthood, and both before and after this realisation they can still use it as a metaphor without having a mind's eye.
> Language is the baseline to collaboration - not intelligence
Would you describe intercellular chemical signals in multicellular organisms to be "language"?
> and have no verbal language
How do you define verbal language? Many animals emit different sounds that others in their community know how to react to. Some even get quite complex in structure (eg dolphins and whales) but I wouldn’t also rule out some species of birds, and some primates to start with. And they can collaborate; elephants, dolphins, and wolves for example collaborate and would die without it.
Also, it's completely myopic in that it ignores humans who use non-verbal language (e.g. sign language) and are perfectly capable of cooperation.
TLDR: just because you can’t understand an animal doesn’t mean it lacks the capability you failed to actually define properly.
MW defines verbal as "of, relating to, or consisting of words".
I don't think anyone would argue that animals don't communicate with each other. Some may even have language we can't interpret, which may consist of something like words.
The question is why we would model an AGI on verbal language as opposed to modeling it on the native intelligence of all life, which eventually leads to communication as a result. Language and communication are a side effect of intelligence, compounding interest on it, but they are not intelligence itself, any more than a map is the terrain.
> The question is why we would model an AGI after verbal language as opposed to modeling it after the native intelligence of all life which eventually leads to communication as a result.
Because verbal/written language is an abstracted, compressed representation of reality, it's relatively cheap to process (a high-level natural-language description of an apple takes far fewer bytes to represent than a photo or 3D model of the same apple). Also because there are massive, digitized, publicly available collections of language that are easy to train on (the web, libraries of digitized books, etc.).
I'm just answering your question here, not implying that language processing is the path towards AGI (I personally think it could play a part, but can't be anything close to the whole picture).
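To put rough numbers on the compression point above, here is a quick back-of-the-envelope in Python; the sentence and the image dimensions are assumptions picked for illustration, not measurements:

```python
# Bytes for a one-sentence description vs. an uncompressed RGB photo.
description = "A round red apple with a short brown stem and one green leaf."
text_bytes = len(description.encode("utf-8"))

width, height, channels = 1024, 1024, 3   # assumed 1024x1024 RGB image
image_bytes = width * height * channels

print(f"text:  {text_bytes} bytes")            # 61 bytes
print(f"image: {image_bytes:,} bytes")         # 3,145,728 bytes
print(f"ratio: roughly {image_bytes // text_bytes:,}x")  # ~51,000x
```

Even if a JPEG shrinks the photo by a couple of orders of magnitude, the sentence stays thousands of times smaller, which is part of why text was the tractable thing to train on at scale.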
This is one of the last bastions of anthropocentric thinking. I hope this will change in this century. I believe even plants are capable of communication. Everything that changes over time or space can be a signal. And most organisms can generate or detect signals. Which means they do communicate. The term “language” has traditionally been defined from an anthropocentric perspective. Like many other definitions about the intellect (consciousness, reasoning etc.).
That’s like a bird saying planes can’t fly because they don’t flap their wings.
LLMs use human language mainly because they need to communicate with humans. Their inputs and outputs are human language. But in between, they don’t think in human language.
> LLMs use human language mainly because they need to communicate with humans. Their inputs and outputs are human language. But in between, they don’t think in human language.
You seem to fundamentally misunderstand what LLMs are and how they work, honestly. Remove the human language from the model and you end up with nothing. That's the whole issue.
Your comment would only make sense if we had real artificial intelligence, but LLMs quite literally work by predicting the next token. That produces an incredibly good facsimile of intelligence, because there is an enormous amount of written content on the Internet that was written by intelligent people.
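To make "predicting the next token" concrete, here is a toy bigram sampler in Python. It is an illustration only: real LLMs are neural networks over subword tokens, not count tables, but the objective has the same shape, i.e. pick a plausible continuation given what came before.

```python
import random
from collections import Counter, defaultdict

# Toy next-token predictor: count which word follows which in a tiny
# corpus, then sample a continuation proportionally to those counts.
corpus = "the cat sat on the mat and the cat slept".split()

follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def next_token(prev):
    counts = follow.get(prev)
    if not counts:          # dead end: no observed continuation
        return None
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

out = ["the"]
for _ in range(6):
    tok = next_token(out[-1])
    if tok is None:
        break
    out.append(tok)
print(" ".join(out))        # e.g. "the cat sat on the mat and"
```

Strip out the human-written corpus and the table is empty, which is the point above: the apparent intelligence is a projection of the intelligence embedded in the training text.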