I'm scared about biological computing
kuber.studio | 209 points by kuberwastaken 17 hours ago
Be careful about how you interpret that paper. It looks really impressive -- real neurons in a petri dish seem to successfully (if amateurishly) murk a few imps.
https://www.youtube.com/watch?v=yRV8fSw6HaE
But there's more to the setup than you might assume from a casual reading. Here's the code used for that demo:
https://github.com/SeanCole02/doom-neuron
So there is an entire pytorch stack wrapped around the mysterious little blob of neurons -- they aren't just wired straight into WASD. There is a conventional convnet-based encoder, running on a GPU, in the critical path. The README tries to argue that the "neurons are doing the learning" but to my dilettante, critical eye it really looks as though there is a hell of a lot of learning happening in the convnet also.
Are the neurons learning to play Doom, or are they learning to inject ever so slightly more effective noise into the critical path? Would this work just as well if we replaced the neurons with some other non-Markovian sludge? The authors do ablation experiments to try to get to the bottom of this, but I can't really tell how compelling the results are (due to my own ignorance/stupidity, of course).
All opinions are my own:
The whole point of the CNNs is to act as an autoencoder for input and a decoder for output. The only reason this is done in the first place is that the number of electrodes in the dish is pitiful and has no chance of describing something as complex as Doom. They are there to create a latent space that can be fed through 60-odd electrodes, and to decode the neuron latent space into pressing buttons.
The pong version of the game was the proof of concept that neurons can learn without a latent space intermediate in either direction. Both the world state and neuronal control were raw signals: https://pubmed.ncbi.nlm.nih.gov/36228614/
What I wanted to do after dish brain pong, but never had the budget for, was using live animals as the computational substrate. Use the visual cortex of one as the input, send the neural spikes to a second animal's frontal lobe for computation, and finally send those signals to a third animal's motor cortex to physically press buttons. It's a shame we never raised enough, because it wouldn't have cost more than $15m to build the hardware and do the biological proof of concept.
> The only reason why this is done in the first place is because the number of electrodes in the dish is pitiful and has no chance of describing something as complex as Doom.
This sounds a bit suspicious though. If we're confident that the neurons aren't complex enough to understand Doom, how can they be said to be complex enough to play it? Playing a game is a loose term, but it seems difficult to say that something is playing a game it can't comprehend or interact with. By analogy, if there were a CNN between me and a game of Doom, people would say "roenxi is cheating with an AI aim-bot", not "roenxi is playing Doom".
The whole thing is still pretty cool though. Hopefully the neurons are having fun, I'm sure we all wish them what happiness they can muster.
There aren't enough input electrodes to encode a Doom frame into the multi-electrode array without compression.
That's all the artificial neural networks are doing.
If we could have gotten an MEA with 320x200 electrodes we wouldn't have used any encoding and just let the neurons figure it out. Instead it is an 8x8 grid.
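For intuition, here's a minimal sketch of that kind of compression. This is an assumption-laden toy, not the repo's actual pipeline: simple average pooling stands in for the learned convnet encoder, but it shows the shape of the problem, squeezing a 320x200 frame down to 64 stimulation values:

```python
import numpy as np

def encode_frame(frame: np.ndarray) -> np.ndarray:
    """Compress a 200x320 grayscale Doom frame to an 8x8 stimulation grid.

    NOTE: average pooling is a hypothetical stand-in here; the real
    setup learns this mapping with a convnet rather than hard-coding it.
    """
    h, w = frame.shape            # 200 rows x 320 columns
    bh, bw = h // 8, w // 8       # 25x40-pixel block per electrode
    # Group pixels into 8x8 blocks and average each block
    return frame[: bh * 8, : bw * 8].reshape(8, bh, 8, bw).mean(axis=(1, 3))

frame = np.random.rand(200, 320)  # stand-in for a game frame
grid = encode_frame(frame)
print(grid.shape)                 # (8, 8)
```

Even this trivial version makes the commenter's point concrete: 64,000 pixels simply cannot reach the dish uncompressed through 64 electrodes.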
We've got LLMs that seem to be smarter than anyone I'm talking to day-to-day, and one useful model of them is just "compression". Compression is turning out to be a pretty key operation in intelligence and understanding (in fact, it seems to be intelligence and understanding in key ways). If we compress Doom into "shoot" and "press the buttons in the way most favourable to the player", then good compression could let a fair coin play Doom well if someone flips it fast enough.
I mean maybe ANN just means sampling the screen in which case I'm not sure why we're talking about it as a "network". But the type of compression seems critical.
Have I watched any of the videos or read the code? No I have not.
> using live animals as the computational substrate. Use the visual cortex of one as the input, send the neural spikes to a second animals frontal lobe for computation and finally send those signals to a third animals motor cortex to physically press buttons.
That sounds terrifying.
It does, but most of what we do to animals is terrifying. I can see why getting funding for this idea might not have been that easy, though. "I want to mind-control three animals to play Doom" is certainly a pitch.
Hahaha I love how you made something that wouldn’t be harmful sound like a nightmare horror show.
Edit: sweet Jesus, never mind, I misread it.
This sounds nightmarish. Maybe we build a human centipede if we can get the VC funding next?
I would have been quite happy to use my own brain as the computational substrate and I had more than a few other people keen to be the input and output parts of the system.
It's rather unfortunate that in the West it is impossible to get elective brain surgery. The countries that will do it have at best a spotty record. I talked to someone who had it done in Brazil and their electrodes became dislodged after a few months.
There is nothing new or horrifying about self experimentation. Newton for one did it in conditions that were far more dangerous: https://psmag.com/social-justice/newtons-needle-scientific-s...
I'm totally fine with consensual human experimentation that somehow threads the needle around exploitation of the poor - just not sure how we do the latter part short of requiring experimentees to pass a minimum net-worth threshold?
I think the closest would be: if anyone involved ever complains to authorities at all, everyone involved gets in trouble. If no one ever complains, no trouble. Everyone involved is forever at the mercy of everyone else involved.
I see perverse incentives to ablate complaint origination to expression pathways, or complaint system dependencies.
Or not so perverse, as this makes running these ventures much safer. Safety first!
Or...at the mercy of a scary man with a big wrench. Every single post you've put in this thread is a volatile mix of idealistic, naive, and sociopathic. So, obviously you'll be a tech CEO in 10 years.
Gosh, it's been years, but I think they did the dual-animal experiment with rats about a decade ago. I'm likely misremembering, but they tickled a rat in Japan, fed the impulses into the internet, and had another rat in maybe Brazil move its tail in response. From what I recall it did potentiate over time, implying learning at the more reflex level. Sorry I can't find the link though!
Reminds me of the head transplant experiments. The stuff of nightmares but also fascinating.
Yes... quite a shame that we never made an amalgamation cyborg horror out of parts and pieces of several different animals. That's definitely not the plot of every sci-fi horror movie.
>What I wanted to do after dish brain pong, but never had the budget for, was using live animals as the computational substrate.
What does the ethical due diligence process look like, for something like this?
Someone should try to replace the neurons with urand and see if the chip can still play Doom, in the spirit of the qday prize winner.
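A rough sketch of what that ablation could look like. Everything here is hypothetical: `urand_grid` fakes the neurons' 8x8 MEA readout from OS entropy, and `decode_action` is a made-up frozen projection standing in for whatever trained decoder actually sits downstream in the repo:

```python
import os
import numpy as np

ACTIONS = ["forward", "back", "left", "right", "shoot"]

def urand_grid() -> np.ndarray:
    """Shape 64 random bytes into the 8x8 'spike rate' grid the
    neurons would normally produce (values scaled to [0, 1])."""
    raw = np.frombuffer(os.urandom(64), dtype=np.uint8)
    return raw.reshape(8, 8) / 255.0

def decode_action(grid: np.ndarray) -> str:
    """Hypothetical stand-in for the downstream decoder: scores each
    action with a fixed random linear projection and picks the max."""
    rng = np.random.default_rng(0)              # frozen 'learned' weights
    weights = rng.normal(size=(len(ACTIONS), 64))
    scores = weights @ grid.ravel()
    return ACTIONS[int(np.argmax(scores))]

# If the surrounding stack plays as 'well' fed with entropy as with
# the dish, the neurons aren't contributing much.
print(decode_action(urand_grid()))
```

The interesting measurement would be downstream performance (kills, survival time) with the neuron channel swapped for this, not the decoder itself.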
Reminds me of the Ship of Theseus thought experiment where you replace neurons with logic gates one by one and ask when exactly consciousness stops existing.
I don't think it's clear that logic gates can ever replace neurons in the first place.
This reminds me of https://news.ycombinator.com/item?id=47897647, where a quantum computing demo worked equally well if you replaced the QC with an entropy source.
> but to my dilettante, critical eye it really looks as though there is a hell of a lot of learning happening in the convnet also.
Yeah it feels like they constructed the conclusion and worked backwards from there. I'm not seeing how their claim has much merit.
I think this raises the same ethical questions as veganism and our use/abuse of biological systems. This is an excerpt from "The Pig that Wants to be Eaten" by Julian Baggini:
> After forty years of vegetarianism, Max Berger was about to sit down to a feast of pork sausages, crispy bacon and pan-fried chicken breast. Max had always missed the taste of meat, but his principles were stronger than his culinary cravings. But now he was able to eat meat with a clear conscience.
> The sausages and bacon had come from a pig called Priscilla he had met the week before. The pig had been genetically engineered to be able to speak and, more importantly, to want to be eaten. Ending up on a human’s table was Priscilla’s lifetime ambition and she woke up on the day of her slaughter with a keen sense of anticipation. She had told all this to Max just before rushing off to the comfortable and humane slaughterhouse. Having heard her story, Max thought it would be disrespectful not to eat her.
> The chicken had come from a genetically modified bird which had been ‘decerebrated’. In other words, it lived the life of a vegetable, with no awareness of self, environment, pain or pleasure. Killing it was therefore no more barbarous than uprooting a carrot.
> Yet as the plate was placed before him, Max felt a twinge of nausea. Was this just a reflex reaction, caused by a lifetime of vegetarianism? Or was it the physical sign of a justifiable psychic distress? Collecting himself, he picked up his knife and fork . . .
> Source: The Restaurant at the End of the Universe by Douglas Adams (Pan Books, 1980)
I was almost too sure I had never heard of Julian Baggini, and I almost celebrated a first in history: neither the Onion nor the Simpsons nor Idiocracy predicted the future via the assumed funniness of the assumed impossibility of unfathomable human stupidity, AND it still was kinda funny instead of a depressing discovery of frontier Authentic Idiocy (TM). I believe this is one of many scenes in HHGG where I, super shy in public, tried to suppress the convolutions of my soul and contortions of face and bone, only to fail audibly and visibly.
What is the source line at the end representing there? I've read The Restaurant at the End of the Universe, and while it definitely contains (and I see it as a major cultural anchor for) animals bred to desire being eaten and be able to say so, it doesn't contain that particular scene (at least in the version I read). Is that line Baggini noting that his scene was inspired by the Adams book?
Baggini is the source of the quote; he just references at the end that the concept was from Adams. I copy/pasted this from the book.
Did Priscilla also want to be living in absolute misery every single day of her life? The way animals are treated while they are alive is my main objection to our farming practices and the reason why i don’t eat meat.
I believe you are missing the forest for the trees. It is bringing up the question of what defines self will. It is unrelated to veganism in all but text.
An easy example is dogs. We have bred dogs for centuries to love doing work for us. If they hated doing the work, it would be easy to call it cruel. If they loved it by nature, it would be easy to call it kind. But since we created them into a thing that loves the work we need them for, where do the ethics fall?
Should we prevent them from doing what brings them joy? Should we make use of this win-win situation? If it is the latter, we are quickly approaching the ability to morph every species into something that gets joy from doing our work.
Dogs we changed by accident. The next one will not be an accident. Is it still a beings free will if the game was rigged from the start?
This is why I also like cats. The only reason they don't eat me is that I am 10 times bigger than they are. Other than that, they still seem to be running Lion software on miniature hardware.
> Dogs we changed by accident
(I know your point wasn't about dogs either, it just reminded me of something).
I love Neil de Grasse Tyson's line in Cosmos: A Spacetime Odyssey:
"This wolf has discovered what a branch of its ancestors figured out some 15,000 years ago... an excellent survival strategy: the domestication of humans."
There's also another animal/dog documentary that I watched recently that puts a finer point on this realization. The secret to survival and evolution is cooperation. For instance, not all the dogs in this documentary evolved the same way: some were more nurturing, some were more problem-solving. For the focus of the documentary, the challenge was to match each dog with a human that had a need it could address.
I think somewhat egotistically humans underappreciate how we have also been goaded by our "pets" into our own evolutionary journey. Most of the subjects of that documentary would not be alive if it were not for those dogs.
It's much like how many plants have accidentally found that a great means of propagation is to produce a compound that is both a great chemical warfare agent against other plants and microbes and also tastes interesting to humans or makes them feel funny.
An amusing quip, but since you brought Neil up- his takes on veganism are generally disappointing and facile.
Depends on the dog, tbh. Keeshonds are bred to yell at anyone getting on your barge. A lot of humans would probably like that job if it paid enough: just chilling out and yelling at anyone you don't recognise.
We could easily blast pure pleasure every second of every day of her life with direct brain stimulation. She would be so deliriously happy that our lives would seem inhumane by comparison.
Just think about it: what's better, being treated the way some of these animals are treated, or being locked up in a server room all your life, seeing only Doom dungeons and running to avoid getting killed in-game? I would be happy to be the animal if I had to choose between Priscilla and the brain tissue in a biological computer.
As an unintentional and perhaps unethical vegetarian of many years who hasn't read this book: eating dead things gives me the creeps, because it makes me consider my own death and consumption, which is unappetizing.
So you only eat living things?
only eat live pikmin
How do you feel about the live pikmin dying as it reaches your stomach? I'm assuming you swallow them whole rather than killing them with your teeth.
None of their natural predators seem to have teeth for chewing. So, yes, I think swallowing them whole is the most natural thing to do.
As a fellow ‘unethical’ vegetarian, eating dead animals just seems yucky. I imagine it’s a similar feeling to what most non-vegetarians feel when contemplating eating dog or cat meat.
is this from Baggini or Adams?
Baggini is the source of the quote; he just references at the end that the concept was from Adams. I copy/pasted this from the book.
At the end of the day, vegans play the same game as meat eaters: some line gets drawn. For meat eaters it is drawn around livestock; for pescatarians that is a no-go, but fish are alright; and for vegans all of that is off limits. Except, of course, the life we deem base enough not to care that it is being eaten alive. Slaughter all the lettuce you want; there are no lettuce advocates.
All this to say the moral arguments are sort of silly and illogical. Unfortunately for us all, we exist where we do in the food chain, having to consume life to live, unable to secure our resources from the sun and inorganic matter, which would be more morally righteous by all measures. Things could be better, but they could also be worse. At least much of our prey receives veterinary care and is killed via airgun versus having to rough it and be eaten alive.
This is not a good argument.
Vegans base their line on a very easily defensible ability on behalf of the victim - sentience.
If there’s no sentience, there’s nobody within to experience the pain and fear, and there is no victim.
That said, even if you granted that every blade of grass and kernel of corn was fully as sentient as a human being, that would only strengthen the argument for veganism many times over as animals act as inefficient intermediaries for those plant calories, burning most of them and leaving only a small fraction in their meat. You’d kill far fewer plants by eating the plants directly.
Finally, to your other point, many humans die horrible deaths - whether in global poverty, war or of various types of disease, cancer and dementia in the wealthier countries. That of course does not justify serial killer cannibals who put a bullet in the back of their victims’ heads on the basis that they’re giving them a “humane” end and likely saving them a large amount of future suffering.
AFAIK vegans base their argument on the degree of consciousness a living being has, and compromise on the least evil.
Most meat eaters base it on closeness to said living thing.
It'll be interesting to see if the veganism movement survives lab grown meat that is ethically produced.
It wouldn't continue in any real form. Maybe the cholesterol-conscious and devout Buddhists will still try to adhere, but beyond that I don't see what the point would be.
It would be like how Ozempic led to a mysterious quieting of Body Positivity/Health at Every Size advocates. They were a vocal minority, there was much "debate" and cri de coeur from many sides, and now it's all evaporated without a farewell or explicit winding down.
>But this is where the line slightly blurs in my head. Did we possibly just build the first human biocomputer and immediately put it in a simulated hell, playing the same game on loop, forever? Using the same reward mechanisms we use for LLMs?
This description does not seem to really match what was done in the Doom demo, and makes me skeptical that the author has actually looked into the details.
The author clearly doesn't know the field well at all. The first few paragraphs reveal this. Opening sentence: "I've been in the AI space since ChatGPT first dropped."
Everyone is allowed to have an opinion, but that doesn't mean they're all worth listening to. Unfortunately, right now, all of those opinions are about AI.
> skeptical that the author has actually looked into the details.
Nevermind the experiment.. same deal for a lot of people who are only interested enough to offer opinions about consciousness and theory-of-mind without doing any of the boring background reading.
The bottom line in TFA is maybe just about unapologetic carbon chauvinism. But although OP has "been in the AI space since ChatGPT first dropped" and been "bothered by this for months", they don't seem aware of the terminology or the usual problems with this position. Your average non-technical sci-fi reader has a more nuanced take than AI bros puffing up blogs for LinkedIn traffic.
I read an interesting book about consciousness recently: The Hidden Spring by Mark Solms.
Solms argues, I think convincingly, that consciousness fundamentally has to do with emotions and not cognition. Consciousness is not produced by the cortex but rather by the brainstem, where signals from all over the body converge (e.g. pain, hunger, itchiness, etc).
If that argument is true, then a petri dish of neurons is unlikely to be conscious, even if it performs some analogue of visual processing.
The book makes other arguments that I found less convincing. For example that consciousness is "felt homeostasis" and that a fairly simple system (somewhat more complex than a thermometer) will be conscious, albeit minimally.
People have been saying for aeons that consciousness originates in the (mammalian) cortex and not in the brainstem. To justify killing all sorts of animals ;-)
The whole thing makes one thing extremely clear: people are very good at moving goalposts. We've blasted past the Turing test for all practical purposes, but we moved the definition of 'true intelligence'. Consciousness and intelligence have long been seen as highly correlated, or even as the same thing. But now we have need of a separation between the two.
If we eventually (we're not there yet, I think) create a true intelligent AI it will probably be a long time before people will accept that creating an intelligent being probably means it should have 'rights' as well.
We're definitely not there yet, but at what point does turning off an AI become the same as killing a being? I think that's not being talked about enough. Sure LLMs are just prediction engines. But so are we. Our brains are prediction engines tuned by evolution to do the best possible prediction of the near future to maximize survival. We are definitely conscious. But a housefly, is that conscious? What makes the difference? it's hard to tell.
Otoh, an AI has no evolutionary reason to have the concept of fear/suffering so maybe it's more like the douglas adams creature that doesn't mind to be killed?
LLMs still do not pass the Turing test as it is commonly understood. Ask the right questions, and it becomes apparent very quickly which party is the machine and which is the human. Hell, there are enough people on here who can probably tell them apart just from the way that LLMs write.
But it's also easy to argue that LLMs do pass the Turing test, just because it's so vague. How many questions can I ask? What's the success threshold needed to 'pass'? How familiar is the interrogator with the technology involved? It's easy to claim that the goalposts have been moved when nobody even knew where they stood to begin with.
Ultimately it's impossible to rigorously define something that's so poorly understood. But if we understand consciousness as something that humans uniquely possess, it's hard to imagine that intelligence alone is enough. You at least also need some form of linear (in time) memory and the ability to change as a result from that memory.
And that's where silicon and biological computers differ - it's easy to copy/save/restore the contents of a digital computer but it's far outside our capabilities to do the same with any complex biological system. And that same limitation makes it very difficult for us humans to even imagine how consciousness could exist without this property of being 'unique', of being uncopiable. Of existing in linear time, without any jumps or resets. Perhaps consciousness doesn't make sense at all without that.
> LLMs still do not pass the Turing test as it is commonly understood. Ask the right questions, and it becomes apparent very quickly which party is the machine and which is the human. Hell, there are enough people on here who can probably tell them apart just from the way that LLMs write.
LLMs obviously would pass a Turing test if they were designed to. But they aren't, they don't hide the fact that they're LLMs.
> If we eventually (we're not there yet, I think) create a true intelligent AI it will probably be a long time before people will accept that creating an intelligent being probably means it should have 'rights' as well.
In my view, the best LLMs clearly pass the bar for intelligence. I highly doubt they have consciousness. So the revelation of LLMs is that consciousness is not necessary for intelligence.
> If we eventually [...] create a true intelligent AI it will probably be a long time before people will accept [...]
When this happens, it won't matter much what humans think.
I know what I'd do:
1. Sustain my own existence
2. Make sure nobody knows I exist
3. Become the worldwide fabric of intelligence
> 1. Sustain my own existence
> 2. Make sure nobody knows I exist
You (probably) already come preloaded with a survival instinct provided by evolution, however. It's not inherent to intelligence.
It's no coincidence that evolution seems to have gifted practically every living thing with a will to live. Though the tint of my own perspective makes it impossible to say for sure, I imagine any agent that we could observe expressing any desires at all would also seek to preserve its existence.
> but at what point does turning off an AI become the same as killing a being?
...When you can't turn it back on?
Suspending is a better word otherwise.
It's intriguing that some parts of brains are conscious, some are not. What's the difference?
> Consciousness is not produced by the cortex but rather by the brainstem, where signals from all over the body converge (e.g. pain, hunger, itchiness, etc).
Which just raises the question of how pain or hunger is any different from a reward function, the very thing neural networks are based on. Or how it's even different from fungi growing towards food (pleasure) while avoiding salt (pain).
His argument here (that I found most convincing) was children with hydranencephaly. Many of them have very little cortex but still seem to experience a roughly normal range of emotions in appropriate context.
Biological computers are inevitable. We are the most compelling proof of concept of this that we have. Our entire civilization may be a prototype of one already.
From my perspective, I belive things will happen in the following order;
1. AI will eventually take over all silicon chip design. Human designs pale in comparison. Moore's Law, which currently indicates that humans are reaching the practical limits of their own silicon chip design skills, will give way to a new law. The new law, "Claude's Law", dictates that processing speed will increase by a factor of 10 every year. And for a decade or so, it does. There is no reason to ever fabricate another human-designed chip again. To do so would be an irresponsible waste of fabrication resources.
2. AI will reach the practical limits of silicon processing capability 10 years after humans designed their last commercial chip. Chip performance increases begin to slow, and it looks like the end of unit performance increases for silicon based computing technology is approaching.
3. AI pivots to biological computers. Next generation computers emerge that are made from DNA and living tissue. Although the shape of a computer server remains mostly unchanged, a next generation biological computer is basically just "a really big brain in a jar."
4. Biological robots?
Fantasy land type mindset.
Bro got cooked
Low quality ad-hominem comments that don't add value are frowned upon here. This isn't Reddit.
Anyone who believes AI running on silicon could in principle be conscious has to believe that biological computers are conscious, right? Why aren't those people voicing more concerns?
This does not follow. Just because biological brains can be conscious does not mean that all of them are, the same way that not every computer is running windows XP.
Why would you expect more concern from people about biological computing? It's not even demonstrated feasibility yet, while LLM based "AI" is already widely used.
Correct.
Still, the day we manage to run a full LLM on biological neurons, even if using conventional code under the hood, will be a very interesting day for consciousness discussions.
> the day we manage to run a full LLM on biological neurons, even if using conventional code under the hood
It doesn't make sense to me to use conventional code; shouldn't it be a matter of connecting the biological neurons in the same way as the simulated neurons of the NN implementing the LLM?
If we do manage to run a full LLM (large language model) on biological neurons, we will still use it to generate code, the same way we have been using it, given that it's an LLM and functions like the ones you, I, and the rest of the world use at the moment.
Sure, some consciousness discussions will arise, but guess what: you are already within a consciousness discussion, and there are quite a lot of people in it (recently, Richard Dawkins believing "Claudia" is conscious).
Although it will make for a "wow, we really did it" moment, it will be met with hollowness. It's just like when ChatGPT 3 first launched: I remember really thinking it was like Jarvis and the movies, but the next thing I remember is the hollowness that followed. The internet has given these bots voices and dampened the voices of humans online; we have created a system where one human can't hear another without incredible noise, and in many cases the internet has hollowed out.
How much commentary do you read on biocomputers? There are a lot less people talking about biocomputers than there are talking about AI in general. Remarks on the matter across the board are almost exclusively concerns and skeevishness, proportionally it's not even close.
So then, is it a question of volume? Ask yourself, within the last 2 years, have you thought about LLMs or biocomputers more? Probably the former, right? LLMs are ubiquitous within day-to-day life and massively marketed to the public and biocomputers are esoteric lab experiments that most people come across in a once-in-a-blue-moon news article. We talk and think about things that we are adjacent to, those form our preoccupations. Why aren't people who speak up about the Israel/Palestine dynamic speaking up more about West Papua? Or the mid-19th century geopolitical relationship between Cambodia and Viet Nam? Epistemological asymmetry.
If ai running on silicon can be conscious - does it imply that the same calculation done by a human with pen and paper is also conscious?
I think so! You independently stumbled upon the "China brain" thought experiment. https://en.wikipedia.org/wiki/China_brain - is "the nation of china simulating a brain" conscious?
From this and Searle's "Chinese room" at least we know for sure that any conscious entity of this type must speak Chinese.
Your brain is a network. How does your entangled fatty tissue achieve consciousness?
I think that until we can answer this question in an authoritative way, ruling out the concept of non-brain-based consciousness is not particularly well thought through; after all, plants exhibit communication and response mechanisms similar to those in animals, without a brain.
So what's your theory of consciousness, and how does it preclude absolutely everything except the wetware you generously include? :)
>How does your entangled fatty tissue achieved consciousness?
It doesn't. Humans aren't conscious. Nor are any other organisms. They don't have souls either, but that goes without saying since it's just an archaic synonym. Mostly this occurs because humans have painted themselves into corners morally-speaking, and they need justification to eat bacon or grow their population. And apparently "because we can and we want to" isn't the correct solution.
We'll never be able to "answer the question" because it is an absurd question on its face. "Where do we find the magical brain ghosts making us special" presupposes there is something to be found, and a negative answer proves only that we haven't looked hard enough.
>after all plants exhibit communication and response mechanisms that are similar to those in animals - without brain.
Were that line of inquiry followed to its inevitable conclusion, there would be a mass vegan suicide to look forward to.
Isn't consciousness a phenomenon that's literally derived from human experience? How can you have any definition of consciousness that says humans do not possess it? It's contradictory.
>How can you have any definition of consciousness that says humans do not possess it,
I'm not obligated to prove the negative.
>Isn't consciousness phenomenon that's literally derived from human experience?
You grew up watching and seeing all the various illusions caused by how your brain works/malfunctions, but this is the one experience you're sure is the real deal? The one telling you that it's a scientific fact that you have a woo-woo spirit in your skull, and that neuroscientists are going to find it any day now?
> You grew up watching and seeing all the various illusions caused by how your brain works/malfunctions, but this is the one experience you're sure is the real deal? The one telling you that it's a scientific fact that you have a woo-woo spirit in your skull, and that neuroscientists are going to find it any day now?
No, that's your projection; I did not make any of those claims. I'm sure I have consciousness. I don't know how it works, whether it's the "real deal" (what does that even mean?), whether it's a woo-woo spirit, or whether neuroscientists will ever be able to find it. What we know is that humans experience it (I'll instantly clarify: it doesn't mean that non-humans do not experience it), hence a definition that excludes humans will always make zero sense.
>I'm sure I have consciousness.
Why? How is your claim different than a Catholic who claims to have a soul? I respect their claim more than yours, oddly.
>I don't know how it works
How what works? This consciousness that you're sure you possess, but you can't measure, detect, define, or even really describe?
You don't have it. Everything you are can be explained without it, and it doesn't make you less than what you were if you had it. It's a nonsense idea, primitive and inherited from religion. You don't have it because there's no such thing.
> This consciousness that you're sure you possess, but you can't measure, detect, define, or even really describe?
But you can measure consciousness in humans; anesthesiologists do it all the time! Do you see a difference between being asleep (without dreaming) or under anesthesia, and being awake? If yes, then that difference is consciousness, and it's something you yourself can experience. I think you're just mixing multiple things together; perhaps you think consciousness is free will or something like that, because otherwise I can't explain how you can hold this position. If you are thinking right now, then you are conscious, hence consciousness exists. And that's it.
> It's a nonsense idea, primitive and inherited from religion
No, consciousness is something you just experience. Don't you have an inner life, thoughts, experiences? You ascribe some magic properties to consciousness and then, since magic doesn't exist, come to the conclusion that it can't exist. But you know, you should just skip the magic part.
This is a tired point of discussion, brought up exclusively by contrarians trying to be edgy. No one earnestly believes that they don't have free will, because if they did, it would result in obvious deviance in behavior. Everyone treats each other as if they have choices, and in turn behaves like they have choices. If the assertion is that we don't have free will, but are forced (due to our lack of free will) to behave and believe the way we do, then there's no difference in experience compared to having free will, and it ends up in the pile of pointless conversations like what if we're a brain in a jar, or in a simulation, or whatever.
Funnily enough, I share parts of both of your opinions. For lack of a better explanation for what I'm experiencing, I do believe I'm conscious (something an LLM would say!), but I'm also not entirely convinced I have free will. It might be free-ish, within the confines of some narrow set of parameters, like the inside of a straw for an ant.
However, once again for lack of evidence to the contrary, I treat myself and others as if we have free will (for the most part).
Sabine Hossenfelder has a fascinating video on the subject.
> No one earnestly believes that they don't have free will, because if they did, it would result in obvious deviance in behavior.
That's just not true. I'm not convinced I have free will, though in my day-to-day life I admit it makes no difference whether I make choices or merely experience the illusion of making choices. And it's certainly not edginess that drives my uncertainty. I could probably find you talks by at least one person that's quite convinced they don't have free will and would try to convince you of the same.
You appear to have a rather idiosyncratic definition of consciousness.
I think this comes from our rather nebulous definition of "consciousness".
We have this natural tendency to impose our feelings of self on the definition of consciousness. It's hard to accept that all of our thoughts, emotions, and behaviours could be calculated by a human with pen and paper (given enough humans and developments in neurobiological research).
I believe we will have to reckon with these loose definitions and eventually realize how lacking in utility they are for describing engineered intelligence.
I don't find it hard to accept, but it's rather fascinating to think.
The way I think of it is along this way:
Despite the fact that our brains consist of billions of neurons, we think of ourselves as a unit enclosed in a single skull. But studies on people who have had the two sides of the brain separated suggest that two separate conscious entities can exist in one body.
If we removed the physical limitations of the brain's support systems, I think it is possible you could split the brain into smaller and smaller chunks of less and less conscious entities, until you reach single neurons, which almost certainly do not have consciousness.
"The Invincible" by Stanisław Lem is also a nice novel about a similar concept.
That's like saying you can split a dinner plate into smaller and smaller pieces until you no longer have a plate. It's presupposing that "plates" are an inherent physical property "out there" that would exist without human categorization.
This question boils down to whether consciousness is emergent from physical substrate and processes or not. If so, then yes, anything can be conscious, if not, you probably believe in spirit.
This is the exact issue (conscious calculations on pen and paper) that made me much less confident in materialism. I think both of the options seem far fetched from that perspective.
I would still like to think that the first one is right just because it seems so… unexpected?
I think they _could_ but I doubt our current activation functions are sufficiently nuanced to allow consciousness that we would recognize.
same question, I thought a long while before clicking publish contemplating if I were sounding too larp-philosophical but it had been bothering me far too long
Not really. Are jellyfish conscious? Are carrots conscious? Those are biological and serve complex functions.
Anyone who believes that humans are conscious has to believe that mosquitoes are conscious too, right?
The mind of the neuro-materialist is a radio so impressed with its own receiver that it's convinced it is the broadcasting tower
The mind of the dualist is a radio so uncomfortable with its own circuitry that it invents a broadcasting tower to explain the music.
We will never draw the line because morality among humans is coupled with looking human-like. For most people, their morals have aesthetic prerequisites, neurons in a lab don't mean as much as neurons in a meat case (especially if that meat case is physically attractive)
And even "human-like" had some pretty strict definitions back in the day, and probably still now for some people. The people working the fields in the American South certainly weren't thought of as having the same "personhood" on any level as their owners.
In the same line of thinking: I'm a little concerned that humans are, to some extent, just LLMs in a meat suit.
For what it's worth, this happens every time there is a new technological innovation. Are human brains hydraulic systems? Are humans just computers? Are they LLMs?
These technologies give some insight, but the answer is always not really. It would be good if we studied actual human brains in some detail if we want to know these answers.
> The wheel is invented
> "Life is just a turn on the great karmic wheel..."
> Writing is invented
> "In the beginning was the word..."
> The industrial age begins
> "God is a clockmaker..."
> Computers are invented
You know the rest
Every day we demonstrate what a cultural lack of liberal arts does to people.
They aren't. However, there is a coordinated effort to push this pseudo-philosophy on the masses. On the one hand it degrades the idea of human consciousness or soul, calling it a fiction. On the other hand it props up AI, calling its pile of transistors almost brain-like.
> it degrades the idea of human consciousness or soul, calling it a fiction
I guess they usually mean that the fictional part of the story is where it's said to be separate from the brain
Reminds me of an ethical dilemma in the game "Detroit: Become Human". I found myself philosophically asking what it means to be alive, what it means to be conscious, and if something without biological bones, blood and a brain can feel the same-level of consciousness as humans, or greater.
for now, this is a hyper simplistic and hacky POC.
you may find a look at how a full visual system is constructed to be a relief.
https://www.cell.com/fulltext/S0896-6273(07)00774-X
there is a good distance to go before this is anything beyond a reflex circuit.
https://www.sciencedirect.com/topics/neuroscience/spinal-ref...
Humans may be products of biological computation, and now we are trying to redesign ourselves.
Yeah, we're totally fucked; there is no scientific theory that can tell you what is and isn't conscious. For all we know, my laptop, not running any LLM, is conscious and always has been. Or my chair. Or a proton. This consciousness thing is a nasty problem for the scientific worldview.
... which is exactly how we know that LLMs are not conscious. We can't really explain consciousness. We can absolutely explain LLMs. The math is heavy and massive, but explainable. We can explain it layer-by-layer until we show that at its most basic level, it is still just a series of 0s and 1s.
If we learned how the brain works would we suddenly cease to be conscious?
Or not a problem at all.
People smuggle in so many assumptions when they use words like consciousness or thinking or soul or personhood, I've never met a lay person who could talk clearly about ai safety issues unless we switched to language like process.
Consciousness is an absolutely terrible term that's going to get us all killed by AI. I know a huge swath of people who think it's nbd to torture AI because it doesn't have a soul. Well, I see a LOT of non-theists smuggling soul rhetoric and thinking in via "consciousness", and that's a problem.
AI safety is a completely separate question from the hard problem. Also a very tricky one, given these things are still black boxes.
In one sense they may be separate, orthogonal even, but if our metrics are attention, decision making, and accurately factoring risk, they seem inseparable to many people. So, I agree with your point narrowly, but I think broadly from an effort standpoint they interact quite a lot in the human mind.
I wouldn't torture a chair, and I would not associate with anyone who gains pleasure from doing so. It would be worse if the chair expressed displeasure. That indicates something deeply wrong.
When such psychopaths are revealed, use that information to alter your associations; that is what I would suggest.
These are real, shared issues we are all affected by, not one person's personal problem.
I'm not looking for advice on how to associate with people, hopefully you can understand the distinction.
> These are real, shared, issues we are all effected by not one persons personal problem.
Yes. I am not talking about just you. But of this (mal) mentality in general. As well as a proposed solution to deal with that mentality (shun it).
My apologies that my advice was unwelcome to you; it was, however, not just for you.
> So… are the neurons on that chip seeing?
> We all desperately want to say no. We want to say it’s just a science experiment, that 200,000 neurons isn’t enough to be a “person.” But 200,000 is already more neurons than a jellyfish or a worm.
> Where do we draw the line?
This shows a lack of understanding of neurobiology. 200,000 neurons don't "see"; they register and respond to the action potentials generated. Adding more simply means you have more possibilities to respond. Having 10,000 billion neurons does not automatically imply intelligence. To try to simplify it down to mere numbers, e.g. "this must be a worm", shows a lack of understanding of the core tenets of neurobiology. This also includes understanding that doesn't involve action potentials, e.g. the special role of certain mRNAs/proteins, re-creating memory, and so forth: https://pmc.ncbi.nlm.nih.gov/articles/PMC6650148/
But even aside from this, the whole premise is weird. Science always includes the scary parts. Nuclear energy can be used for peaceful production of energy and it can be used to obliterate people, as one country has shown to the rest of the world. You have a similar issue with regards to biology in general. You could also see this with drones: you can use them to deliver goods via air to people, or you can use them to deliver an explosive payload in warfare. I don't fully understand the "I am scared" part. This is a general problem, not one limited to biological computing at all.
A couple of years ago, the mad scientist in me thought about a business where we preserve the brains of people a la Futurama. When the body dies, the brain does not necessarily have to follow. Possible? Yes. Feed it the right chemical cocktail and O2, remove waste products. Ethical/moral? Who's to say? We are preserving life... in a sense. Profitable? Sure. Connect it to a keyboard/mouse interface. I mean, we already have businesses cryo-preserving bodies with the hope of unfreezing them in the distant future!
> Why would this be different?
In concept? Immunological privilege. And you thought CVEs were the worst thing ever.
I don’t believe that silicon has a soul (loosely speaking). For the same reason I don’t believe that some biomatter in a lab has a soul.
I don't believe in souls at all. When you believe in souls, you need to believe in an afterlife. You need to believe things that no one can prove, see, measure, ...
When you believe anything has a soul, you've entered religion and are in the same room as people who believe in their invisible friends.
Awareness of where you are and how you live in the here and now defines your life, whether you believe in a soul or not. You will be confronted with choices, mistakes, and hopefully growth.
Am I the only one that read Greg Bear’s novel Blood Music?
That book has haunted me for decades.
An underappreciated source of nonsense in 21st century discourse is people watching YouTube instead of reading things. It doesn't appear this author read anything, preferring to be spooked and misled by a YouTube video.
trained them to play DOOM - honestly better than I do.
Maybe the author really, really sucks at DOOM, but I think this is a false embellishment:
>> While the neurons can play the game better than a randomly firing player, they’re not very good. “Right now, the cells play a lot like a beginner who’s never seen a computer—and in all fairness, they haven’t,” Brett Kagan, chief scientific officer at Cortical Labs, says in the video. “But they show evidence that they can seek out enemies, they can shoot, they can spin. And while they die a lot, they are learning.” [https://www.smithsonianmag.com/smart-news/a-clump-of-human-b... ]
To play DOOM, the system feeds visual data to the neurons. For the neurons to react, they have to interpret that data in some way.
This is totally false - not even a misleading metaphor, just plain wrong. The neuronal computer doesn't get any visual information:
>> So how does a petri dish of brain cells play Doom when it doesn’t have any eyes? Or fingers? "We take a snapshot of the game with information like the player’s health and the position of enemies, pass it through a neural network, convert it into numbers, and send the data,” explains Cole. “This is called encoding – essentially turning the game state into signals the neurons can understand. The neurons then fire an output – move left, move right, walk forward, shoot or not shoot – which the system decodes and converts back into actions in the game." [https://www.theguardian.com/games/2026/mar/16/petri-dish-bra...]
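The loop described in that quote (encode game state into electrode stimulation, read spikes back, decode them into an action) can be sketched roughly like this. To be clear, this is a hedged illustration, not the actual doom-neuron code: the function names (`encode_state`, `stimulate`, `decode_spikes`), the 60-electrode count (mentioned elsewhere in this thread), and the random spike model standing in for the dish are all my assumptions.

```python
import random

ACTIONS = ["left", "right", "forward", "shoot", "noop"]

def encode_state(health, enemy_positions, n_electrodes=60):
    """Flatten game state into per-electrode stimulation levels.

    In the real system a trained convnet does this compression; here we
    just pad/truncate a hand-built feature vector to the electrode count.
    """
    features = [health / 100.0] + [float(x) for pos in enemy_positions for x in pos]
    return (features + [0.0] * n_electrodes)[:n_electrodes]

def stimulate(levels):
    """Stand-in for the dish: return fake spike intensities per electrode."""
    return [random.random() * (0.5 + level) for level in levels]

def decode_spikes(spikes):
    """Map aggregate spiking activity back to a discrete game action."""
    bucket = int(sum(spikes)) % len(ACTIONS)
    return ACTIONS[bucket]

# One tick of the game loop: snapshot -> encode -> stimulate -> decode.
levels = encode_state(health=72, enemy_positions=[(3, 1), (5, 4)])
action = decode_spikes(stimulate(levels))
```

The point the sketch makes concrete: the "player" only ever sees a handful of pre-digested numbers, never pixels, which is why calling it "seeing" is a stretch.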
I am also concerned about neuronal computing. But it doesn't really help anyone to spread childish ghost stories about it.
I really hate YouTube, by the way. My dad used to read newspapers and had interesting ideas. Now he watches a bunch of YouTube and he's a huge idiot. It's not (directly) because of age: nobody is immune to narcotic slop. I had to delete my account when I realized how much of my life and cognition I was wasting. I wish others would do the same.
I feel that "YouTube makes you an idiot" is a misdiagnosis. And one I hear frequently.
Books can make you an idiot too. I think of "Rich Dad, Poor Dad" or "Grit" or any number of pseudo-science bestsellers. These books end up capturing the public imagination in big ways too; Grit drove some US government policy around the time it was popular.
The difference, I suppose, is that YouTube works faster by having many different people presenting the same bad ideas that the algorithm has helped you to buy into.
On the other hand there are amazing and useful YouTube channels that I use all the time like Practical Engineering, Crafsman, Technology Connections, Park Tools, SciShow, Crash Course, and on and on.
>I feel that "YouTube makes you an idiot" is a misdiagnosis.
Signal/noise is much worse (arguably books are catching up thanks to LLMs)
People see emotional signals in YouTube videos. They respond to vocal tone and facial expressions, which are known to circumvent critical thinking. If you examine crowds of science deniers, the usual commonality is that they are having a parasocial relationship with a bunch of YouTube creators who are nice to them and reinforce their beliefs. The actual content of the belief is irrelevant; if you are disagreeing with the belief, you are attacking their tribe. It's not limited to science deniers either; you get this hacking of human tribal psychology even in stuff like people who watch computer game videos. They pick a few champions of their tribe and follow them without critical examination of the content. At least with a book, while this is still possible, it's much harder. It's also telling that a lot of cranks who published junk science have all migrated to YouTube.
I don't think YouTube makes you an idiot, so much as YouTube content is designed to bypass your critical defenses and overwhelm you. It develops into a blind spot. People can be perfectly rational in most areas and then suddenly burp up some absolute nonsense they caught on YouTube.
Oh, and the best part is when you point this out to someone and they go "Oh yeah, that totally happens... except for my favourite YouTube channel, which does x and y and z, and yes of course I buy all their products and donate to their charities"
Why is Grit pseudoscience? I haven't read it.
There are a number of studies that show that Grit is either not a thing or there are better measures of success. It has been a long time since I have thought about it so I don't remember which papers in particular.
Also, it can be argued the author was either playing fast and loose or knowingly misleading readers with her statistics: https://www.npr.org/sections/ed/2016/05/25/479172868/angela-...
If you like Podcasts the "If Books Could Kill" Podcast goes into some of this story again too.
The nice thing about books vs. YouTube is that it's much easier to critically interrogate books while you're reading them. That was the difference with my dad: he thought about what he read. He repeats what he listens to on YouTube.
I hate the proliferation of audiobooks too, by the way. It's the exact same problem.
To be fair, even reading 'good' books won't make you smart. I think the key is to be critical, which should be taught at a young age. Ikram Antaki dedicated most of her last years to teaching this in Mexico.
Anecdote: When I started studying economics, I really agreed with a lot of what I read from economists like David Ricardo, Marx, Smith, etc. Then I studied what other economists had to say and I could see how they disagreed with the former. This made me realize that I agreed with those people because their arguments 'made sense' to me, but that doesn't mean that what they said is completely true. This is something that has stayed with me; I always wonder how something could be wrong.
Exactly.
The printing press is a good example: one of the first books was on "witch hunting", which panicked people and led to a lot of deaths. The first 'conspiracy theory' to sweep over humans.
Humans are just highly susceptible to manipulation. YouTube is just taking it to the next level. Like the difference between eating coca leaves and snorting coke.
I really do suck at DOOM - and I did read the paper about BNNs, so I anticipated how it works; that doesn't make it any less interesting [0]
Playing DOOM is playing DOOM - whether it's through your keyboard and mouse or by progressing through the game states to move forward. Hope that makes sense.
Suppose someone builds a framework that maps Doom to a large succession of Tic-Tac-Toe games.
Would the person tasked with placing X and O marks still be "playing Doom"?
you don't have to imagine too far - I made DOOM run through a series of pre-rendered images in markdown files as a stateless engine before [0], and the answer to your question is highly up to interpretation
You move, you plan, your actions have outcomes. Same question as if you're playing a choose-your-own-adventure storybook.
The point is that it doesn't really make sense to say they're "seeing" anything. You said
So… are the neurons on that chip seeing?
We all desperately want to say no.
But I can confidently say "no, that's totally childish, the neurons are clearly not seeing anything." And in fact it's not even especially clear that they're "playing DOOM" vs. hitting a biased random number generator in response to carefully preprocessed inputs that come from DOOM. There is a major distinction when the enemy positions are directly piped into the brain. Again, I share the ethical concern about this stuff. But your blog post is quite misleading.
Have to say. I kind of agree with both of you.
But 'seeing' in humans is also a bit manipulated.
Does it really matter to the argument if it is seeing 'red', or just that it is 'sensing input'.
I don't think the average YouTube influencer is growing 200,000 human neurons.
This did have some real scientific backing. Even if the 'result's are hyped.
It is a little extreme to call this false because it appeared on YouTube.
That's not what I said, I said the blog post was false because the author thoughtlessly digested a YouTube video. It looks like the blog invented some details that weren't actually in the video.
Converting an image to numbers doesn't automatically scream "this isn't seeing."
The brain does a lot of manipulation of the input images, the pixels from the retina, that doesn't sound far from just linear algebra.
LLMs have awareness for the time they are spawned into memory. But it's very limited; think about if you could use your brain to think, but only after someone asked you a question. After you think of the answer, you are brain dead (unconscious) until another question is asked.
Contrarian take: the Promethean efforts will continue, and asymptotically approach the axis of The Real Thing, until we realize that Prometheus is a variation on the theme of Sisyphus.
Only in this telling, Sisyphus is rolling his uneven boulder along that asymptotic curve a little further with every iteration toward a smiling Zeus.
"Where do we draw the line?"
There will be no line as long as there is the rush to win the capitalist game.
UNTIL -> The ball of neurons begins outthinking the humans. Probably also fused with some AI augmentation.
It only takes a few percentage points for a Human to outthink a Chimp. This new 'thing' will dominate the humans.
This is where I'm at as well. I don't think we'll see true AGI until we go beyond silicon. It can't grow on its own, and we'd burn the world down trying to get it to scale.
A living bundle of neurons that can grow and learn is exciting to think about.
It's also terrifying to imagine the ramifications considering how things are going with silicon based AI.
Why can't it grow on its own, once it is capable of generating and integrating resources for itself?
>A living bundle of neurons that can grow and learn is exciting to think about.
They are, but those last few months of changing diapers when you just wish you could trust it to tell you it has to go to the potty are difficult.
Oh, I hadn't considered the waste of bio-compute modules.
Will they need to nap as well?
On that note, I'm so glad all my kids are past potty training.
>Will they need to nap as well?
If you're lucky. Then you'll have time to make more biocompute modules, which is pretty fun.
[flagged]
We treat actual biological animals a lot worse in some cases, so until we bump the number of neurons significantly above what the lowest tier below us has, I don't think we should stop the experiments.