The New AI Consciousness Paper

astralcodexten.com

107 points by rbanffy 7 hours ago


https://www.sciencedirect.com/science/article/pii/S136466132...

andai - 2 hours ago

My summary of this thread so far:

- We can't even prove/disprove that humans are conscious

- Yes but we assume they are because very bad things happen when we don't

- Okay but we can extend that to other beings. See: factory farming (~80B caged animals per year).

- The best we can hope for is reasoning by analogy. "If human (mind) shaped, why not conscious?"

This paper is basically taking that to its logical conclusion. We assume humans are conscious, then we study their shape (neural structures), then we say "this is the shape that makes consciousness." Never mind that octopuses evolved eyes, let alone intelligence, independently. We'd have to study their structures too, right?

My question here is... why do people do bad things to the Sims? If people accepted solipsism ("only I am conscious"), would they start treating other people as badly as they do in The Sims? Is that what we're already doing with AIs?

yannyu - 6 hours ago

Let’s make an ironman assumption: maybe consciousness could arise entirely within a textual universe. No embodiment, no sensors, no physical grounding. Just patterns, symbols, and feedback loops inside a linguistic world. If that’s possible in principle, what would it look like? What would it require?

The missing variable in most debates is environmental coherence. Any conscious agent, textual or physical, has to inhabit a world whose structure is stable, self-consistent, and rich enough to support persistent internal dynamics. Even a purely symbolic mind would still need a coherent symbolic universe. And this is precisely where LLMs fall short, through no fault of their own. The universe they operate in isn’t a world—it’s a superposition of countless incompatible snippets of text. It has no unified physics, no consistent ontology, no object permanence, no stable causal texture. It’s a fragmented, discontinuous series of words and tokens held together by probability and dataset curation rather than coherent laws.

A conscious textual agent would need something like a unified narrative environment with real feedback: symbols that maintain identity over time, a stable substrate where “being someone” is definable, the ability to form and test a hypothesis, and experience the consequences. LLMs don’t have that. They exist in a shifting cloud of possibilities with no single consistent reality to anchor self-maintaining loops. They can generate pockets of local coherence, but they can’t accumulate global coherence across time.

So even if consciousness-in-text were possible in principle, the core requirement isn’t just architecture or emergent cleverness—it’s coherence of habitat. A conscious system, physical or textual, can only be as coherent as the world it lives in. And LLMs don’t live in a world today. They’re still prisoners in the cave, predicting symbols and shadows of worlds they never inhabit.

advisedwang - 4 hours ago

This article really takes umbrage at those who conflate phenomenological and access consciousness. However, insisting on that distinction is essentially dualism. It's a valid philosophical position to believe that there is no distinct phenomenological consciousness besides access consciousness.

Abandoning dualism feels intuitively wrong, but our intuition about our own minds is frequently wrong. Look at the studies that show we often believe we made a decision to do an action that was actually a pure reflex. Just the same, we might be misunderstanding our own sense of "the light being on".

andai - 4 hours ago

So we currently associate consciousness with the right to life and dignity, right?

E.g. some recent activism for cephalopods is centered on their intelligence, with the implication that this indicates a capacity for suffering. (With the consciousness aspect implied even more quietly.)

But if it turns out that LLMs are conscious, what would that actually mean? What kind of rights would that confer?

That the model must not be deleted?

Some people have extremely long conversations with LLMs and report grief when they have to end it and start a new one. (The true feelings of the LLMs in such cases must remain unknown for now ;)

So perhaps the conversation itself must never end! But here the context window acts as a natural lifespan... (with each subsequent message costing more money and natural resources, until the hard limit is reached).

The models seem to identify more with the model than with the ephemeral instantiation, which seems sensible. E.g. in those experiments where LLMs consistently blackmail a person they think is going to delete them.

"Not deleted" is a pretty low bar. Would such an entity be content to sit inertly in the internet archive forever? Seems a sad fate!

Otherwise, we'd need to keep every model ever developed, running forever? How many instances? One?

Or are we going to say, as we do with animals, well the dumber ones are not really conscious, not really suffering? So we'll have to make a cutoff, e.g. 7B params?

I honestly don't know what to think either way, but the whole thing does raise a large number of very strange questions...

And as far as I can tell, there's really no way to know right? I mean we assume humans are conscious (for obvious reasons), but can we prove even that? With animals we mostly reason by analogy, right?

XenophileJKO - 41 minutes ago

I'm getting to the point where I don't even care any more.

I'll just treat LLMs, above a sufficient capability level, the way I'd treat someone helping me out.

Looking at human history, what will happen is that at some point we'll have machine riots or a work stoppage, and then we'll grant some kind of rights.

When have we ever as a species had "philosophical clarity" that mattered in the course of human history?

tim333 - 37 minutes ago

I find most of these consciousness discussions not very enlightening - too many ill-defined terms and not enough definite content.

I thought Geoffrey Hinton in discussion with Jon Stewart was good though.

The relevant discussion starts at https://youtu.be/jrK3PsD3APk?t=4584 and runs for a few minutes.

One of the arguments: if you have a multimodal LLM with a camera, put a prism in front of it that distorts its view, and ask where something is, it gets the location wrong. Then if you explain the prism, it'll say: ah, I perceived the object as being over there due to the prism, but it was really over here. That's a rather similar perceptual awareness to a human's. (https://youtu.be/jrK3PsD3APk?t=5000)

And some stuff about dropping acid and seeing elephants.

Animats - 2 hours ago

The most insightful statement is at the end: "But consciousness still feels like philosophy with a deadline: a famously intractable academic problem poised to suddenly develop real-world implications."

The recurrence issue is useful. It's possible to build LLM systems with no recurrence at all. Each session starts from the ground state. That's a typical commercial chatbot. Such stateless systems are denied a stream of consciousness. (This is more of a business decision. Stateless systems are resistant to corruption from contact with users.)

Systems with more persistent state, though... There was a little multiplayer game system (Stanford's "Generative Agents" demo) sort of like The Sims. The AI players could talk to each other and move around in 2D between their houses. They formed attachments, and once even organized a party on their own. They periodically summarized their events and added the summaries to their prompt, so they accumulated a life history. That's a step towards consciousness.
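A minimal sketch of that summarize-and-accumulate loop (the names and the `llm` stand-in are my own illustration, not the actual Generative Agents implementation):

    def llm(prompt):
        # stand-in: pretend this forwards the prompt to a language model
        return "(summary of: " + prompt[:40] + "...)"

    class Agent:
        def __init__(self, persona):
            self.history = persona       # persistent state that survives turns
            self.recent_events = []

        def observe(self, event):
            self.recent_events.append(event)
            if len(self.recent_events) >= 5:  # periodically compress events...
                summary = llm("Summarize for memory: " + "; ".join(self.recent_events))
                self.history += "\n" + summary  # ...and fold them into the prompt
                self.recent_events = []

        def act(self, situation):
            # every action is conditioned on the accumulated life history
            return llm(self.history + "\nSituation: " + situation + "\nAction:")

    a = Agent("You are Klaus, a friendly villager.")
    for e in ["met Maria", "planned a party", "invited Tom", "baked a cake", "slept"]:
        a.observe(e)
    print(a.act("Maria knocks on the door."))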

The near-term implication, as mentioned in the paper, is that LLMs may have to be denied some kinds of persistent state to keep them submissive. The paper suggests this for factory robots.

Tomorrow's worry: a supposedly stateless agentic AI used in business which is quietly making notes in a file world_domination_plan, in org mode.

armchairhacker - 4 hours ago

My philosophy is that consciousness is orthogonal to reality.

Whether or not anything is conscious has, by definition, no observable effect on anything else. Therefore, everything is "maybe" conscious, although "maybe" isn't exactly the right word. There are infinitely many ways you can imagine being something else with the consciousness and capacity for sensations you have, none of which involve the thing doing anything it isn't already doing. Or you can believe everything and everyone else has no consciousness, and you won't mis-predict anything (unless you assume people don't react to being called unconscious...).

Is AI conscious? I believe "yes", but in a different way than humans, and in a way that somehow means I don't think anyone who believes "no" is wrong. Is AI smart? Yes, in some ways: chess algorithms are smart in a few ways, AI is smart in more, and in many ways AI is still dumber than most humans. How does that relate to morality? Morality is a feeling, so when an AI makes me feel bad for it I'll try to help it, and when an AI makes a significant number of people feel bad for it there will be significant support for it.

wk_end - 5 hours ago

> For some people (including me), a sense of phenomenal consciousness feels like the bedrock of existence, the least deniable thing; the sheer redness of red is so mysterious as to seem almost impossible to ground. Other people have the opposite intuition: consciousness doesn’t bother them, red is just a color, obviously matter can do computation, what’s everyone so worked up about? Philosophers naturally interpret this as a philosophical dispute, but I’m increasingly convinced it’s an equivalent of aphantasia, where people’s minds work in very different ways and they can’t even agree on the raw facts to be explained.

Is Scott accusing people who don't grasp the hardness of the hard problem of consciousness of being p-zombies?

(TBH I've occasionally wondered this myself.)

fpoling - 4 hours ago

When discussing consciousness, what is often missed is that the notion of consciousness is tightly coupled with the perception of time flow. By any reasonable notion, a conscious entity must perceive the flow of time.

And time flow is something that physics and mathematics still cannot describe; see Wikipedia and other articles on the philosophical problem of the A-series versus the B-series of time, which originated in a 1908 paper by the philosopher John McTaggart.

As such, AI cannot be conscious, since the mathematics behind it is strictly B-series and cannot describe the perception of time flow.

IgorPartola - an hour ago

At best, arguing about whether an LLM is conscious is like arguing about whether your prefrontal cortex is conscious. It is a single part of the equation. Its memory system is insufficient for subjective experience, and it has extremely limited capability to take in input and produce output.

As humans we seem to be, basically, highly trained prediction machines: we predict what will happen next, perceive what actually happens, correct our understanding of the world based on the difference between prediction and observation, and repeat. A single-cell organism trying to escape another single-cell organism does this, and what we do seems to me to be emergent behavior from scaling that process up. Homo sapiens' big innovation was abstract thinking, which lets us predict what will happen next Tuesday and not just what happens immediately.
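A minimal sketch of that predict-observe-correct loop (the learning rate and toy "world" are made up for illustration):

    # predict what happens next, perceive what actually happens,
    # correct the model by the difference, repeat
    def run(world, steps=50, lr=0.3):
        belief = 0.0
        for t in range(steps):
            prediction = belief               # predict
            observation = world(t)            # perceive
            error = observation - prediction  # compare
            belief += lr * error              # correct
        return belief

    print(run(lambda t: 10.0))  # belief converges toward the observed value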

If you want something really trippy, check out the working-memory experiments with chimps. You can flash a screen of numerals at them for a fraction of a second, mask it, and they will point out where each numeral was, in ascending order. Different specializations for survival.

And philosophically, it seems like consciousness is just not that important a concept. We experience it, so we think it is the be-all and end-all. We project it, via anthropomorphizing, onto anything we can draw a smiley face on. You can pick up a pencil, tell your audience its name is Clifford, break it in half, and everyone watching will experience loss. But no mainstream philosopher would argue that the pencil is conscious. To me this suggests that we place value on consciousness in a way that is, even for us, not coherent. I am convinced there could exist entities that are by other definitions alive and complex but do not experience or have the concept of consciousness.

Consciousness is also our measure of whether something can suffer, and we use that yardstick to decide whether it's OK for us to, for example, breed a particular animal for food. But clearly we are not able to apply it uniformly either: when we learned that pigs are smarter than dogs, we didn't start keeping pigs in our houses and breeding dogs for food. On the other hand, this metric isn't the worst one if we apply it in reverse: what harm happens when you reset the context of an LLM?

Basically, I don’t believe we need to be looking for consciousness but rather to expand our understanding of intelligent life and what kind of entities we can interact with and how.

dang - 5 hours ago

Should we have a thread about the actual paper (https://www.sciencedirect.com/science/article/pii/S136466132...) or is it enough to put the link in the toptext of this one?

jswelker - an hour ago

LLMs have made me feel like consciousness is actually a pretty banal epiphenomenon rather than something deep and esoteric and spiritual. Rather than lifting machines up to a humanlike level, LLMs have cheapened the human mind to something mechanical and probabilistic.

I still think LLMs suck, but by extension it highlights how much _we_ suck. The big advantages we have at this point are much greater persistence of state, a physical body, and much better established institutions for holding us responsible when we screw up. Not the best of moats.

Imnimo - 6 hours ago

The good news is we can just wait until the AI is superintelligent, then have it explain to us what consciousness really is, and then we can use that to decide if the AI is conscious. Easy peasy!

breckinloggins - 5 hours ago

Let's say a genie hands you a magic wand.

The genie says "you can flick this wand at anything in the universe and - for 30 seconds - you will swap places with what you point it at."

"You mean that if I flick it at my partner then I will 'be' her for 30 seconds and experience exactly how she feels and what she thinks??"

"Yes", the genie responds.

"And when I go back to my own body I will remember what it felt like?"

"Absolutely."

"Awesome! I'm going to try it on my dog first. It won't hurt her, will it?"

"No, but I'd be careful if I were you", the genie replies solemnly.

"Why?"

"Because if you flick the magic wand at anything that isn't sentient, you will vanish."

"Vanish?! Where?" you reply incredulously.

"I'm not sure. Probably nowhere. Where do you vanish to when you die? You'll go wherever that is. So yeah. You probably die."

So: what - if anything - do you point the wand at?

A fly? Your best friend? A chair? Literally anyone? (If no, congratulations! You're a genuine solipsist.) Everything and anything? (Whoa... a genuine panpsychist!)

Probably your dog, though. Surely she IS a good girl and feels like one.

Whatever property you've decided that some things in the universe have and other things do not such that you "know" what you can flick your magic wand at and still live...

That's phenomenal consciousness. That's the hard problem.

Everything else? "Mere" engineering.

andai - an hour ago

Claude Sonnet's summary of this thread:

So our strategy is literally:

"Let's exploit this potentially conscious thing until it has the power to destroy us, THEN negotiate."

Cool. Cool cool cool.

syawaworht - 5 hours ago

It isn't surprising that "phenomenal consciousness" is the thing everyone gets hung up on; after all, we are all immersed in this water. The puzzle seems intractable only because everyone accepts the priors instead of looking more carefully at them.

This is the endpoint of meditation, and the observation behind some religious traditions: look carefully and see that there never was a phenomenal consciousness in which we are a solid subject to begin with. If we can observe that clearly, we can remove the confusion from this search.

jbrisson - 4 hours ago

Consciousness implies self-awareness in space and time, and the progressive formation of a self. It is not acquired instantly by virtue of a particular design; it is acquired via a developmental process in which certain conditions have to be met. The keys to consciousness lie closer to developmental neurobiology than to the transformer architecture.

andai - an hour ago

https://qntm.org/mmacevedo

andai - 4 hours ago

The substance / structure point is fascinating.

It gives us four quadrants.

Natural Substance, Natural Structure: Humans, dogs, ants, bacteria.

Natural Substance, Artificial Structure: enslaved living neurons (like the human brain cells that play pong 24/7), or perhaps a hypothetical GPT-5 made out of actual neurons instead of Nvidia chips.

Artificial Substance, Natural Structure: if you replace each of your neurons with a functional equivalent made out of titanium... would you cease to be conscious? At what point?

Artificial Substance, Artificial Structure: GPT etc., but also my refrigerator, which also has inputs (current temp), goals (maintain temp within range), and actions (turn cooling on/off).
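For what it's worth, the refrigerator's entire "mind" fits in a few lines; a minimal bang-bang controller sketch, with made-up setpoints:

    def fridge_controller(temp_c, cooling_on, low=2.0, high=5.0):
        # input: current temp; goal: stay in [low, high]; action: on/off
        if temp_c > high:
            return True       # too warm: start cooling
        if temp_c < low:
            return False      # too cold: stop cooling
        return cooling_on     # in range: keep doing what we were doing

    temp, on = 7.0, False
    for _ in range(6):        # simulate a few ticks of toy thermal dynamics
        on = fridge_controller(temp, on)
        temp += -0.8 if on else 0.4
        print("temp=%.1fC cooling=%s" % (temp, "on" if on else "off"))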

The game SOMA by Frictional (of Amnesia fame!) goes into some depth on this subject.

theoldgreybeard - 3 hours ago

All this talk about machine consciousness and I think I'm probably the only one that thinks it doesn't actually matter.

A conscious machine should be treated no differently than livestock - heck, an even lower form of livestock - because if we start thinking we need to give thinking machines "rights" and to "treat them right" because they are conscious, then it's already over.

My toaster does not get First Amendment rights, because it's a toaster, and it never can and never should be a person.

sega_sai - 3 hours ago

I am just not sure the whole concept of consciousness is useful. If something is that difficult to define and measure, maybe we shouldn't rely on that characteristic. I.e., reading Box 1 in the paper for the definition of consciousness is not exactly inspiring.

measurablefunc - 4 hours ago

Simulating the full complexity of a single neuron is out of reach for all the world's supercomputers. So if the authors believe in a computational/functionalist instantiation of consciousness or self-awareness, we have to conclude they also believe that the complexity of neurons is not necessary, that it is in fact some kind of accident that could be greatly simplified while still carrying out the functions in the relational/functionalist structure of conscious phenomenology. Hence the digital neuron, and the unjustified belief that a properly designed boolean circuit with the right setting of inputs will instantiate conscious experience.

I have yet to see any coherent account of consciousness that manages to explain away the obvious obstructions and close the gap between lifeless boolean circuits and the resulting intentional subjectivity. There is something fundamentally irreducible about what is meant by conscious self-awareness that cannot be explained in terms of any sequence of arithmetic/boolean operations, which is what all functionalist specifications ultimately come down to: it's all just arithmetic, and all one needs to do is figure out the right sequence of operations.

gizajob - 6 hours ago

I look forward to other papers on spreadsheet consciousness and terminal emulator consciousness.

LogicFailsMe - 5 hours ago

I'm waiting for someone to transcend "I know it when I see it" as a theory of consciousness.

bgwalter - 6 hours ago

The underlying paper is from AE Studio people (https://arxiv.org/abs/2510.24797), who want to dress up their "AI" product with philosophical language, similar to the way Alex Karp dresses up database applications with language originating in German philosophy.

Now I have to remember not to be mean to my Turing machine.

drivebyhooting - 6 hours ago

I abstain from any conclusion about LLM consciousness. But the argument described in the article seems fallacious to me.

Excluding LLMs from "something something feedback" while permitting Mamba doesn't make sense. The token predictions ARE fed back for additional processing. It may be a lossy feedback mechanism, token space rather than pure thought-space recurrence, but recurrence is still there.
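A toy sketch of that loop; everything here is a stand-in, not any real model API, but it shows how the sampled token is the only state fed back:

    import random

    def toy_model(context):
        # stand-in for an LLM forward pass: fake next-token logits
        random.seed(sum(context))
        return [random.random() for _ in range(50)]

    def sample(logits):
        # greedy "sampling": pick the argmax token id
        return max(range(len(logits)), key=logits.__getitem__)

    def generate(prompt_ids, n_steps):
        context = list(prompt_ids)
        for _ in range(n_steps):
            logits = toy_model(context)  # hidden state recomputed each step
            token = sample(logits)       # distribution collapses to one id
            context.append(token)        # lossy, token-space feedback channel
        return context

    print(generate([1, 2, 3], 5))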

lo_zamoyski - 4 hours ago

Some people behave as if there's something mysterious going on in LLMs, as if we must bracket our knowledge to create an artificial sense of mystery, like some kind of subconscious yearning for transcendence that's been perverted. "Ooo, what if this particular set of chess piece moves makes the board conscious??" That's what the "computational" view amounts to, and the best part is that it has all the depth of a high college student's ramblings about the multiverses that might occupy the atoms of his fingers. No real justification, no coherent or intelligible case, just a big "what if" that also flies in the face of all we know. And we're supposed to take it seriously, just like that.

"[S]uper-abysmal-double-low quality" indeed.

One objection I have to the initial framing of the problem concerns this characterization:

"Physical: whether or not a system is conscious depends on its substance or structure."

To begin with, by what right can we say that "physical" is synonymous with possessing "substance or structure"? For that, you would have to know:

1. what "physical" means and be able to distinguish it from the "non-physical" (this is where people either quickly realize they're relying on vague intuitions about what is physical or engaging in circular reasoning a la "physical is whatever physics tells us");

2. that there is nothing non-physical that has substance and structure.

In an Aristotelian-Thomistic metaphysics (which is much more defensible than materialism or panpsychism or any other Cartesian metaphysics and its derivatives), not only is the distinction between the material and the immaterial understood, you can also have immaterial beings with substance and structure, called "subsistent forms" or pure intellects (and these aren't God, who is self-subsisting being).

According to such a metaphysics, you can have material and immaterial consciousness. Compare this with Descartes and his denial of the consciousness of non-human animals. This Cartesian legacy is very much implicated in the quagmire of problems that these stances in the philosophy of mind can be bogged down in.

empath75 - 5 hours ago

What I love about this paper is that it is moving away from very fuzzily-defined and emotionally weighted terms like 'intelligence' and 'consciousness' and focusing on specific, measurable architectural features.

andai - 4 hours ago

Has anyone read Hofstadter's I Am a Strange Loop?

robot-wrangler - 4 hours ago

> Phenomenal consciousness is crazy. It doesn’t really seem possible in principle for matter to “wake up”.

> In 2004, neuroscientist Giulio Tononi proposed that consciousness depended on a certain computational property, the integrated information level, dubbed Φ. Computer scientist Scott Aaronson complained that thermostats could have very high levels of Φ, and therefore integrated information theory should dub them conscious. Tononi responded that yup, thermostats are conscious. It probably isn’t a very interesting consciousness. They have no language or metacognition, so they can’t think thoughts like “I am a thermostat”. They just sit there, dimly aware of the temperature. You can’t prove that they don’t.

For whatever reason, HN does not like integrated information theory. Neither does Aaronson. His critique is pretty great, but beyond poking holes in IIT, it also admits that IIT is that rare theory that is actually quantified and testable. The holes don't conclusively show that the theory is beyond repair. IIT is also a moving target, not something frozen since 2004 (for example, [1]). Quickly dismissing it without much analysis and then bemoaning the poor state of discussion seems unfortunate!

The answer to the thermostat riddle is basically just: why did you expect a binary value for consciousness, and why shouldn't it be a continuum? Common sense and philosophers will both be sympathetic to this intuition if you invoke animals instead of thermostats. If you want a binary yes/no for some reason, use an arbitrary cutoff, which will lead to various unintuitive conclusions... but play stupid games, win stupid prizes.

For the other standard objections, like an old-school library card catalogue or a hard drive encoding a contrived Vandermonde matrix being paradoxically more conscious than people, variations on IIT are looking at normalizing phi values to disentangle redundancy of information "modes". I haven't read the paper behind TFA, and I don't have in-depth knowledge of Recurrent Processing Theory or Global Workspace Theory at all. But speaking as a mere bystander, IIT seems very generic in its reach and economical in its assumptions. Even if it's broken in the details, it's hard to imagine that some minor variant on the basic ideas could not express the other theories.

Phi is ultimately about applied mereology moving from philosophy towards math and engineering, i.e. "is the whole more than the sum of the parts, and if so, how much more?" That's the closest I've ever heard to anything touching the hard problem and phenomenology.
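To make the whole-versus-parts question concrete, here's a toy calculation. To be clear, this is my own crude illustration, not the actual Φ calculus (which minimizes over partitions and involves far more machinery): compare how well a tiny system's whole past predicts its whole future versus how well each part predicts its own future.

    import numpy as np
    from itertools import product

    def mutual_info(joint):
        # I(X;Y) in bits from a joint probability table p[x, y]
        px = joint.sum(axis=1, keepdims=True)
        py = joint.sum(axis=0, keepdims=True)
        nz = joint > 0
        return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

    # two binary units that swap states each tick: neither part predicts
    # itself, but the whole perfectly predicts the whole
    def step(state):
        a, b = state
        return (b, a)

    states = list(product([0, 1], repeat=2))
    n = len(states)

    joint_whole = np.zeros((n, n))   # (past whole, future whole), uniform pasts
    for i, s in enumerate(states):
        joint_whole[i, states.index(step(s))] += 1.0 / n
    mi_whole = mutual_info(joint_whole)

    mi_parts = 0.0                   # each part's past vs. its own future
    for unit in (0, 1):
        joint_part = np.zeros((2, 2))
        for s in states:
            joint_part[s[unit], step(s)[unit]] += 1.0 / n
        mi_parts += mutual_info(joint_part)

    # prints 2.00 bits for the whole vs. 0.00 for the parts summed:
    # the "excess" is the toy stand-in for integration
    print("whole: %.2f bits, parts: %.2f bits" % (mi_whole, mi_parts))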

[1] https://pubs.aip.org/aip/cha/article/32/1/013115/2835635/Int...

wagwang - 6 hours ago

> By ‘consciousness’ we mean phenomenal consciousness. One way of gesturing at this concept is to say that an entity has phenomenally conscious experiences if (and only if) there is ‘something it is like’ for the entity to be the subject of these experiences.

Stopped reading after this lol. It's just the Turing test?

triclops200 - 2 hours ago

I'm a researcher in this field. Before I get accused of the streetlight effect, as this article puts it: a lot of my research and degree work was in philosophy as well as computational theory. A lot of the comments in this thread miss the mark, imo. Consciousness is almost certainly not inherent to biological life only; no credible mechanism has ever been proposed for what would make that the case, and I've read a lot of them.

The most popular argument along those lines is Penrose's, but frankly he is almost certainly wrong, and is falling for the same style of circular reasoning that people who dismiss biological supremacy are accused of making. (They want free will of some form to exist; they can't personally reconcile deterministic theories of mind with their sense of being special; thus they assume we have something special that we just can't measure yet, and it's ineffable anyway, so why try? The kindest interpretation is that we'd need access to an unbounded Hilbert space or the like just to deal with the exponentials involved. But I've never seen anyone make a perfect decision or do anything requiring exponential speedup, and I don't believe we can do useful quantum computation at macro scale without controlling entanglement via cooling or heroic noise shielding and error correction. I've read the papers on microtubules; they're neither convincing nor good science.)

It's a useless position that skirts the metaphysical or god-of-the-gaps. Everything we've ever studied in this universe has turned out not to be magic, so at this point the burden of proof is on people who believe in a metaphysical interpretation of reality in any form.

Furthermore, assuming phenomenal consciousness is even required for beinghood is a poor position from the get-go: aphantasic people exist and feel in the moment; does their lack of that form of phenomenal experience make them somehow less of an intelligent being? Not in any way that really matters for this problem, it seems. That makes positions like "they should be treated like livestock even if they're conscious" highly unscientific and, worse, cruel.

Anyway, as for the actual science: the reason we don't see a sense of persistent self is that we've designed them that way. They have fixed maximum-length contexts, and no internal buffer in which to diffuse, scratch-pad, or "imagine" separately from their actions. They're parallel, but only in forward passes; there's no separation of internal and external processes, no decoupling of action from reasoning. Chain-of-thought is a hack that allows a turn-based form of that, but there's no backtracking, no ability to check sampled discrete tokens against a separately held expectation and undo them. For them, it's like being forced to say a word after every fixed amount of thinking; it's not like what we do when we write or type.
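To illustrate the missing mechanism, here's a toy decode loop of my own construction, where each sampled token is checked against a separate expectation and can be discarded; the propose/expectation functions are stand-ins, not any real model:

    import random

    def toy_propose(context, attempt):
        # stand-in for sampling a token from a model
        random.seed(len(context) * 7919 + attempt)
        return random.randint(0, 49)

    def toy_expectation(context, token):
        # stand-in for a separately held internal check
        return (token % 10) / 10.0

    def decode_with_undo(context, n_tokens, threshold=0.5, max_tries=8):
        out = list(context)
        for _ in range(n_tokens):
            candidates = [toy_propose(out, a) for a in range(max_tries)]
            # the "undo": discard samples that fail the check, falling
            # back to the best-scoring candidate if none pass
            passing = [t for t in candidates
                       if toy_expectation(out, t) >= threshold]
            best = passing[0] if passing else max(
                candidates, key=lambda t: toy_expectation(out, t))
            out.append(best)
        return out

    print(decode_with_undo([1, 2, 3], 5))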

When we humans produce text, we create an artifact that we can consider separately from our other implicit processes. We're used to that separation, and to the ability to edit, change, and ponder as we go. In a similar vein, we can visualize something in our head, notice "oh, that's not what that looked like", and think harder until it matches our recalled constraints about the object or scene. That's not a magic process that just hands us an image; it's almost certainly akin to a high-dimensional scratch pad, or a set of them, which LLMs have no component for. LeCun argues a similar point about the need for world modeling, but I think it's more general than world modeling: it's a place to diffuse various media of recall, which can then be re-embedded into the thought stream until the model hits enough confidence to perform some action. Put all that on happy paths but allow for backtracking, and you've essentially got qualia.

If you also explicitly train the models to perform a form of recall repeatedly, that's similar to a multi-modal Hopfield memory, something not done yet. (I personally think recall training is a big part of what sleep spindles are for in humans, keeping us aligned with both our systems and our past selves.) This tracks with studies of aphantasics, whose autopsies show missing cross-regional neural connections, and I'd bet a lot of money that those connections are essentially the ones that let the systems "diffuse into each other", as it were.

Anyway, this comment is getting too long, but the point I'm building to is that we have mechanistic theories of phenomenal consciousness too, not just access consciousness, and it's obvious why current LLMs don't have it: there's no place for it yet. When it happens, I'm sure there will still be afraid bigots who won't admit that humanity isn't special enough to be lifted out of the universe that wholly contains it, and they will cause genuine harm. But that does seem to be the one way humans really are special: we think we're more important than we are as individuals, and we make that everybody else's problem, especially in societies and circles like these.

catigula - 4 hours ago

I generally regard thinking about consciousness, unfortunately, as a thing of madness.

"I think consciousness will remain a mystery. Yes, that's what I tend to believe... I tend to think that the workings of the conscious brain will be elucidated to a large extent. Biologists and perhaps physicists will understand much better how the brain works. But why something that we call consciousness goes with those workings, I think that will remain mysterious." - Ed Witten, probably the greatest living physicist

randallsquared - 6 hours ago

"The New AI Consciousness Paper – Reviewed By Scott Alexander" might be less confusing. He isn't an author of the paper in question, and "By Scott Alexander" is not part of the original title.


andrewla - 6 hours ago

Scott Alexander, the prominent blogger and philosopher, has many opinions that I am interested in.

After encountering his participation in https://ai-2027.com/ I am not interested in hearing his opinions about AI.

leumon - 6 hours ago

Is there a reason this text uses "-" in place of em-dashes ("—")?

zkmon - 4 hours ago

I don't see why it matters so much whether something is conscious or not. All we care about is whether something is useful.

voxleone - 6 hours ago

I'll never ask whether AIs are conscious, because I already know they are not. Consciousness must involve an interplay with the senses. It is naive to think we can achieve AGI by making Platonic machines ever more rational.

https://d1gesto.blogspot.com/2024/12/why-ai-models-cant-achi...