I'm scared about biological computing

kuber.studio

209 points by kuberwastaken 17 hours ago


pjs_ - 15 hours ago

Be careful about how you interpret that paper. It looks really impressive -- real neurons in a petri dish seem to successfully (if amateurishly) murk a few imps.

https://www.youtube.com/watch?v=yRV8fSw6HaE

But there's more to the setup than you might assume from a casual reading. Here's the code used for that demo:

https://github.com/SeanCole02/doom-neuron

So there is an entire pytorch stack wrapped around the mysterious little blob of neurons -- they aren't just wired straight into WASD. There is a conventional convnet-based encoder, running on a GPU, in the critical path. The README tries to argue that the "neurons are doing the learning" but to my dilettante, critical eye it really looks as though there is a hell of a lot of learning happening in the convnet also.

Are the neurons learning to play Doom, or are they learning to inject ever-so-slightly more effective noise into the critical path? Would this work just as well if we replaced the neurons with some other non-Markovian sludge? The authors run ablation experiments to try to get to the bottom of this, but I can't really tell how compelling the results are (due to my own ignorance/stupidity, of course).
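To make the concern concrete, here is a minimal sketch of the critical path described above (plain Python with made-up names, not the doom-neuron repo's actual code): a trainable silicon encoder sits between the game and the blob, so plenty of "learning" can happen entirely upstream of the neurons.

```python
# Illustrative only: the encoder is conventional, trainable silicon;
# the blob is a black box that could be neurons -- or any noisy sludge.
import random

class TrainableEncoder:
    """Stand-in for the convnet: maps game state to a stimulation pattern."""
    def __init__(self, dim=8):
        self.weights = [random.uniform(-1, 1) for _ in range(dim)]

    def __call__(self, game_state):
        # game_state: list of floats (health, enemy positions, ...)
        return [w * s for w, s in zip(self.weights, game_state)]

def neuron_blob(stimulation):
    """Black box in the middle: here, just stimulation plus noise."""
    return [s + random.gauss(0, 0.1) for s in stimulation]

def decode(firing, actions=("left", "right", "forward", "shoot")):
    """Pick the action whose channel fired hardest."""
    best = max(range(len(actions)), key=lambda i: firing[i])
    return actions[best]

encoder = TrainableEncoder()
state = [0.9, 0.2, -0.5, 0.7, 0.0, 0.1, -0.3, 0.4]  # fake game snapshot
action = decode(neuron_blob(encoder(state)))
print(action)
```

If you train `encoder.weights` against the game's reward, the pipeline can improve even when `neuron_blob` contributes nothing but noise, which is exactly the ablation question.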

philips - 15 hours ago

I think this raises the same ethical questions as veganism and our use/abuse of biological systems. This is an excerpt from "The Pig That Wants to Be Eaten" by Julian Baggini:

> After forty years of vegetarianism, Max Berger was about to sit down to a feast of pork sausages, crispy bacon and pan-fried chicken breast. Max had always missed the taste of meat, but his principles were stronger than his culinary cravings. But now he was able to eat meat with a clear conscience.

> The sausages and bacon had come from a pig called Priscilla he had met the week before. The pig had been genetically engineered to be able to speak and, more importantly, to want to be eaten. Ending up on a human’s table was Priscilla’s lifetime ambition and she woke up on the day of her slaughter with a keen sense of anticipation. She had told all this to Max just before rushing off to the comfortable and humane slaughterhouse. Having heard her story, Max thought it would be disrespectful not to eat her.

> The chicken had come from a genetically modified bird which had been ‘decerebrated’. In other words, it lived the life of a vegetable, with no awareness of self, environment, pain or pleasure. Killing it was therefore no more barbarous than uprooting a carrot.

> Yet as the plate was placed before him, Max felt a twinge of nausea. Was this just a reflex reaction, caused by a lifetime of vegetarianism? Or was it the physical sign of a justifiable psychic distress? Collecting himself, he picked up his knife and fork . . .

> Source: The Restaurant at the End of the Universe by Douglas Adams (Pan Books, 1980)

Imnimo - 12 hours ago

>But this is where the line slightly blurs in my head. Did we possibly just build the first human biocomputer and immediately put it in a simulated hell, playing the same game on loop, forever? Using the same reward mechanisms we use for LLMs?

This description does not seem to really match what was done in the Doom demo, and makes me skeptical that the author has actually looked into the details.

slibhb - 13 hours ago

I read an interesting book about consciousness recently: The Hidden Spring by Mark Solms.

Solms argues, I think convincingly, that consciousness fundamentally has to do with emotions and not cognition. Consciousness is not produced by the cortex but rather by the brainstem, where signals from all over the body converge (e.g. pain, hunger, itchiness, etc).

If that argument is true, then a petri dish of neurons is unlikely to be conscious, even if it performs some analogue of visual processing.

The book makes other arguments that I found less convincing, for example, that consciousness is "felt homeostasis" and that a fairly simple system (somewhat more complex than a thermometer) will be conscious, albeit minimally.

zelon88 - 3 hours ago

Biological computers are inevitable. We are the most compelling proof of concept of this that we have. Our entire civilization may be a prototype of one already.

From my perspective, I believe things will happen in the following order:

1. AI will eventually take over all silicon chip design. Human designs pale in comparison. Moore's Law, which currently indicates that humans are reaching the practical limits of their own silicon chip design skills, will give way to a new law. The new law, "Claude's Law," dictates that processing speed will increase by a factor of 10 every year. And for a decade or so, it does. There is no reason to fabricate another human-designed chip ever again. To do so would be an irresponsible waste of fabrication resources.

2. AI will reach the practical limits of silicon processing capability 10 years after humans designed their last commercial chip. Chip performance increases begin to slow, and it looks like the end of unit-performance increases for silicon-based computing technology is approaching.

3. AI pivots to biological computers. Next generation computers emerge that are made from DNA and living tissue. Although the shape of a computer server remains mostly unchanged, a next generation biological computer is basically just "a really big brain in a jar."

4. Biological robots?

lukasb - 15 hours ago

Anyone who believes AI running on silicon could in principle be conscious has to believe that biological computers are conscious, right? Why aren't those people voicing more concerns?

marjipan200 - 13 hours ago

The mind of the neuro-materialist is a radio so impressed with its own receiver that it's convinced it is the broadcasting tower

atleastoptimal - 13 hours ago

We will never draw the line, because morality among humans is coupled with looking human-like. For most people, morals have aesthetic prerequisites: neurons in a lab don't mean as much as neurons in a meat case (especially if that meat case is physically attractive).

mrweasel - 14 hours ago

In the same line of thinking: I'm a little concerned that humans are, to some extent, just LLMs in a meat suit.

mr-footprint - 15 hours ago

Reminds me of an ethical dilemma in the game "Detroit: Become Human". I found myself asking, philosophically, what it means to be alive, what it means to be conscious, and whether something without biological bones, blood, and a brain can feel the same level of consciousness as humans, or greater.

rolph - 15 hours ago

For now, this is a hyper-simplistic and hacky POC.

You may find a look at how a full visual system is constructed to be a relief.

https://www.cell.com/fulltext/S0896-6273(07)00774-X

There is a good distance to go before this is anything beyond a reflex circuit.

https://www.sciencedirect.com/topics/neuroscience/spinal-ref...

https://www.sciencedirect.com/topics/neuroscience/spinal-ref...

fhub - 6 hours ago

Economist article on it https://archive.is/ddS6X

bibin765 - 6 hours ago

Humans may be products of biological computation, and now we are trying to redesign ourselves.

yegortk - 15 hours ago

ICML paper about that: https://proceedings.mlr.press/v235/tkachenko24a.html

AntiDyatlov - 14 hours ago

Yeah, we're totally fucked; there is no scientific theory that can tell you what is and isn't conscious. For all we know, my laptop, not running any LLM, is conscious and always has been. Or my chair. Or a proton. This consciousness thing is a nasty problem for the scientific worldview.

shevy-java - 2 hours ago

> So… are the neurons on that chip seeing?

> We all desperately want to say no. We want to say it’s just a science experiment, that 200,000 neurons isn’t enough to be a “person.” But 200,000 is already more neurons than a jellyfish or a worm.

> Where do we draw the line?

This shows a lack of understanding of neurobiology. 200,000 neurons don't "see"; they register and respond to the action potentials generated. Adding more neurons simply means you have more possible responses. Having 10,000 billion neurons does not automatically imply intelligence. Trying to simplify it down to mere numbers, e.g. "this must be a worm," shows a lack of understanding of the core tenets of neurobiology. This also includes understanding beyond action potentials, e.g. the special role of certain mRNAs/proteins, the re-creation of memory, and so forth: https://pmc.ncbi.nlm.nih.gov/articles/PMC6650148/

But even aside from this, the whole premise is weird. Science always includes the scary parts. Nuclear energy can be used for the peaceful production of energy, and it can be used to obliterate people, as one country has shown the rest of the world. You have a similar issue with biology in general. You can also see this with drones: you can use them to deliver goods by air to people, or you can use them to deliver an explosive payload in warfare. I don't fully understand the "I am scared" part. This is a general problem, not one limited to biological computing at all.

fhn - 10 hours ago

A couple of years ago, the mad scientist in me thought about a business where we preserve the brains of people a la Futurama. When the body dies, the brain does not necessarily have to follow. Possible? Yes. Feed it the right chemical cocktail and O2, remove waste products. Ethical/moral? Who's to say? We are preserving life... in a sense. Profitable? Sure. Connect it to a keyboard/mouse interface. I mean, we already have businesses cryo-preserving people in the hope of unfreezing them in the distant future!

themafia - 4 hours ago

> Why would this be different?

In a concept? Immunological privilege. And you thought CVEs were the worst thing ever.

keybored - 13 hours ago

I don’t believe that silicon has a soul (loosely speaking). For the same reason I don’t believe that some biomatter in a lab has a soul.

ChicagoDave - 9 hours ago

Am I the only one that read Greg Bear’s novel Blood Music?

That book has haunted me for decades.

LeCompteSftware - 15 hours ago

An underappreciated source of nonsense in 21st century discourse is people watching YouTube instead of reading things. It doesn't appear this author read anything, preferring to be spooked and misled by a YouTube video.

   trained them to play DOOM - honestly better than I do.
Maybe the author really really sucks at DOOM, but I think this is a false embellishment:

>> While the neurons can play the game better than a randomly firing player, they’re not very good. “Right now, the cells play a lot like a beginner who’s never seen a computer—and in all fairness, they haven’t,” Brett Kagan, chief scientific officer at Cortical Labs, says in the video. “But they show evidence that they can seek out enemies, they can shoot, they can spin. And while they die a lot, they are learning.” [https://www.smithsonianmag.com/smart-news/a-clump-of-human-b... ]

  To play DOOM, the system feeds visual data to the neurons. For the neurons to react, they have to interpret that data in some way. 
This is totally false - not even a misleading metaphor, just plain wrong. The neuronal computer doesn't get any visual information:

>> So how does a petri dish of brain cells play Doom when it doesn’t have any eyes? Or fingers? "We take a snapshot of the game with information like the player’s health and the position of enemies, pass it through a neural network, convert it into numbers, and send the data,” explains Cole. “This is called encoding – essentially turning the game state into signals the neurons can understand. The neurons then fire an output – move left, move right, walk forward, shoot or not shoot – which the system decodes and converts back into actions in the game." [https://www.theguardian.com/games/2026/mar/16/petri-dish-bra...]
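The loop Cole describes can be sketched in a few lines (all names here are mine and purely illustrative; the real project's encoding is more involved). The point is that the neurons never receive pixels, only a numeric encoding of the game state:

```python
# Hedged sketch of the encode -> stimulate -> decode loop: the "visual
# data" is a handful of numbers, not an image.

def encode(game_state):
    """Turn a structured game snapshot into a flat signal vector."""
    return [game_state["health"] / 100.0,
            game_state["enemy_x"],
            game_state["enemy_y"]]

def decode(firing):
    """Map the strongest output channel back to a game action."""
    actions = ["move_left", "move_right", "walk_forward", "shoot", "hold"]
    return actions[max(range(len(firing)), key=firing.__getitem__)]

snapshot = {"health": 75, "enemy_x": 0.4, "enemy_y": -0.2}
signal = encode(snapshot)           # what the neurons actually get: numbers
firing = [0.1, 0.3, 0.9, 0.2, 0.0]  # pretend neuronal response
print(decode(firing))               # -> walk_forward
```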

I am also concerned about neuronal computing. But it doesn't really help anyone to spread childish ghost stories about it.

I really hate YouTube, by the way. My dad used to read newspapers and had interesting ideas. Now he watches a bunch of YouTube and he's a huge idiot. It's not (directly) because of age: nobody is immune to narcotic slop. I had to delete my account when I realized how much of my life and cognition I was wasting. I wish others would do the same.

AISnakeOil - 13 hours ago

LLMs have awareness only for the time they are spawned into memory, and it's very limited. Imagine if you could use your brain to think, but only after someone asked you a question. Once you've thought the answer, you're brain-dead (unconscious) until another question is asked.

futuresoonpast - 12 hours ago

see also https://en.wikipedia.org/wiki/Beyond_Lies_the_Wub

smitty1e - 15 hours ago

Contrarian take: the Promethean efforts will continue, asymptotically approaching the axis of The Real Thing, until we realize that Prometheus is a variation on the theme of Sisyphus.

Only in this telling, Sisyphus is rolling his uneven boulder along that asymptotic curve a little further with every iteration toward a smiling Zeus.

FrustratedMonky - 15 hours ago

"Where do we draw the line?"

There will be no line as long as there is the rush to win the capitalist game.

UNTIL -> The ball of neurons begins outthinking the humans. Probably also fused with some AI augmentation.

It only takes a few percentage points for a Human to outthink a Chimp. This new 'thing' will dominate the humans.

qoez - 15 hours ago

We treat actual biological animals a lot worse in some cases, so until we bump the number of neurons significantly above the lowest tier below us, I don't think we should stop the experiments.