When Dawkins met Claude – Could this AI be conscious?

unherd.com

31 points by pentestercrab 2 days ago


https://archive.ph/Rq5bw

qnleigh - an hour ago

It's easy, and very tempting, to dismiss this sort of thing. But given how little we know about the human brain, let alone consciousness, I don't see how we can be confident that LLMs aren't conscious.

I've had a lot of thoughts and conversations over the years that changed my mind on what consciousness likely requires. One was the realization that a purely mechanical computer can, in principle, simulate the laws of physics, and with it a human brain. So with a few other mild assumptions, you might conclude that a bunch of gears and pulleys can be conscious, which feels profoundly counterintuitive.

I think that was the moment I stopped being sure about anything related to this question.

tracerbulletx - 9 hours ago

We don't even know what the prerequisites for consciousness are, so we have no way of knowing. LLMs have emergent behavior that is reminiscent of language-forming brains, but they're also missing a lot of properties that are probably necessary: mainly continuity over time, more integrated memory, and a better sense of space and time. Brains use the rhythm and timing of neuronal firings, and the length of axons affects computation; they do a lot of different things with signals and patterns. But in any case, without knowing what consciousness is, I don't know which of those things are required.

Avshalom - 13 minutes ago

Just once I want to see some old dude waxing about LLM-consciousness post a chat log where the LLM is like "your book is an incoherent mess of tautologies and incorrect statistics. I bet your dick looks like a road kill squirrel".

throwyawayyyy - 9 hours ago

Current LLMs prove that the Turing Test was insufficient all along. But they also prove that intelligence != consciousness. One can, after all, be conscious without a thought in one's head. We certainly have ongoing work in identifying the neural correlates of consciousness in animals, none of which is going to be remotely applicable to machines. We're genuinely blind to the question of whether a sufficiently large neural net can exhibit flashes of subjective experience.

ofjcihen - 9 hours ago

Incredibly confusing that people who are otherwise of sound mind seem to fall for this.

Especially confusing when it’s someone who knows how algorithms work.

Barring connectivity issues, when's the last time you messaged an LLM and it just decided to ignore you? Conversely, when has it ever messaged you unprompted?

Never, because they're incapable of doing anything independently; there is no sense of self.

jdmoreira - an hour ago

It's starting to look more and more to me as if consciousness is just an illusion that we ourselves perceive. There is nothing fundamental about it, just an artefact of a certain style of computing as perceived by the reasoner itself.

We look at the current LLMs, and because we see how they are fundamentally operating, we assume they can't be "conscious". But we really don't even know what consciousness is. The only people in the world who know ANYTHING about consciousness are anaesthesiologists: they know how to turn it off and on again. What does that even tell you about consciousness?

shrubble - 4 hours ago

He famously doesn’t believe in God, but he believes in Claude?

sdevonoes - 3 hours ago

As long as AI is being introduced by multibillion-dollar corporations, it's all a trick, a scam. They are just looking to increase their valuation. A waste of time.

root_axis - 9 hours ago

There are a lot of people vulnerable to AI psychosis.

As for the ostensibly controversial topic of AI being conscious: it can be dismissed out of hand. There is no reason it should be conscious; it was not designed to be, nor does it need to be in order to explain how it functions with respect to its design. It's also unclear how consciousness would even apply to something like an LLM, which is a process, not an entity: it has no temporal identity or location in space, and inference is a process that could be done by hand given enough time. There is simply no reason to assert LLMs might be conscious without explaining why many other types of complex programs are not.

petters - 3 hours ago

Many dismiss Dawkins here but Ilya Sutskever wrote in 2022: “it may be that today's large neural networks are slightly conscious.”

search_facility - 9 hours ago

Ever since GPT-2 was reimplemented inside Minecraft, it's been quite obvious that LLMs are just math, nothing else, by nature. Modern LLMs use the same math as GPT-2, just bigger and with extra stuff around it, and math is the only area of human knowledge with perfect, flawless reductionism straight to the roots. It was built that way from the beginning, so philosophy has no say in this :) And because of that flawless reductionism, complexity adds nothing to the nature of mathematical objects; this is how math works by design. So it can be proven that there is nothing like consciousness in there, simply because consciousness was not implemented in the first place, only perfect mimicry.

And the real secret is in the data, not math. Math (and LLMs running it through billions of weights) is just a tool.
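For what it's worth, the "just math" point can be made concrete: a next-token step really is plain multiply-add and a softmax. A minimal sketch in Python, with toy hand-picked weights (a real model does the same kind of arithmetic over billions of parameters):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, then normalize to probabilities.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def matvec(matrix, vec):
    # Plain matrix-vector product: one row of weights per output logit.
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

# Toy "model": a 3-token vocabulary and a made-up 3x2 weight matrix.
W = [[0.2, 1.0],
     [1.5, -0.5],
     [0.1, 0.3]]
hidden_state = [1.0, 2.0]   # stand-in for the network's internal activations

logits = matvec(W, hidden_state)      # [2.2, 0.5, 0.7]
probs = softmax(logits)               # sums to 1.0
next_token = probs.index(max(probs))  # greedy decoding: pick the argmax
```

Every step here could be done with pencil and paper; scale is the only difference.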

mellosouls - 2 days ago

https://archive.ph/6RdK9

textlapse - 3 hours ago

At what stage does a series of floating point numbers output from a GPU become conscious?

Myrmornis - 9 hours ago

On the one hand, I'm not sure Dawkins has read or thought enough about how LLMs actually work. I get the impression he doesn't fully appreciate, or is somehow forgetting, that it's a text completion algorithm with a vast number of parameters, and that even if the patterns of learned parameter tunings are not really comprehensible, the architecture was very deliberately designed.
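To make the "text completion algorithm" framing concrete, here is a hedged sketch of the generation loop chat LLMs run, with the trained network abstracted behind a hypothetical next_token_probs function (the distribution shown is made up for illustration):

```python
import random

def next_token_probs(tokens):
    # Hypothetical stand-in for a trained model's forward pass.
    # A real LLM returns a probability for every token in its vocabulary,
    # conditioned on the tokens seen so far.
    return {"hello": 0.1, "world": 0.7, "<eos>": 0.2}

def complete(prompt_tokens, max_len=10):
    # Autoregressive completion: repeatedly sample one token and append it.
    tokens = list(prompt_tokens)
    while len(tokens) < max_len:
        probs = next_token_probs(tokens)
        choices, weights = zip(*probs.items())
        tok = random.choices(choices, weights=weights)[0]
        if tok == "<eos>":  # end-of-sequence token stops generation
            break
        tokens.append(tok)
    return tokens
```

The entire "conversation" is just this loop run until an end-of-sequence token or a length limit; everything interesting lives inside the forward pass the sketch stubs out.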

But on the other hand his thoughts at the end are interesting. Summary:

Maybe our "consciousness" is like an LLM's intelligence. But if not, then it raises the question of why we even have this "extra" consciousness, since it appears that something like a humanoid LLM would be decent at surviving. His suggestions: maybe our extra thing is an evolutionary accident (and maybe there _are_ successful organisms out there with LLM-style non-conscious intelligence), or maybe as evolved organisms it's necessary that we really feel things like pain, so that evolutionary mechanisms like pain (and desire for food, sex, etc.) had strong adaptive benefits.

lpcvoid - 3 hours ago

No, it's not conscious, and anybody pretending it is has either no clue, or, more likely in the AI space, is a grifter.

jasiek - 2 hours ago

muggles will look at matrix multiplication and say it's magic

wewewedxfgdf - 9 hours ago

It's software. Software is not conscious.

WalterGR - 14 hours ago

Related: https://news.ycombinator.com/item?id=47988880

"Richard Dawkins and The Claude Delusion: The great skeptic gets taken in" (garymarcus.substack.com)

18 points | 2 hours ago | 16 comments

morpheos137 - 9 hours ago

Really, "is it conscious?" is a bizarre question. Can LLMs simulate the output of a 'conscious' system quite well? Increasingly, yes. Is the nature of machine 'consciousness' different from human consciousness? Of course, yes. Can an AI introspect? Yes. Interestingly, having worked a lot recently with highly automated iterative coding agents (e.g. a prompt-to-output ratio of maybe 1/1000 or less) has illuminated for me just how different machine consciousness is from human consciousness. Part of this could be the harness, of course. Time is a mysterious concept to machines: the connection of before and after to cause and effect is far weaker than in humans. Overgeneralization is the norm; this is common in humans as well (cf. the fallacy of the excluded middle, or false dilemma), but the tricky part with current AI is that they present as advanced in terms of accessible knowledge but are actually shockingly weak at reasoning once you get off the beaten path.

RVuRnvbM2e - 9 hours ago

It is terribly sad when someone undeniably brilliant in a particular field fails to recognize their own incompetence in other areas - in this case mistaking advanced technology for magic.

iamflimflam1 - 2 hours ago

Given this article is behind a paywall, what on earth is everyone discussing in the comments here?

psychoslave - 3 hours ago

Honestly, who cares if they are conscious? If it's about how we should treat other conscious beings, our attention should first go to how we treat other animals, or even other humans. Actually, even how fellow humans treat themselves can be a concern, if they don't have the proper means to deal with their own lives.

grantcas - 3 hours ago

[dead]

mpurbo - 9 hours ago

[flagged]

blackpink999 - 2 hours ago

[dead]

yakbarber - 2 hours ago

Let's say aliens land. We learn to talk to them. They're super smart, smarter than us. Would we say they're conscious? Why? Because they're organic. I think that's the root of the criteria many folks are trying to express:

1. passes the Turing test

2. is organic

I'm not saying it's correct or even that I agree with it, but that's what it boils down to.