Concept cells help your brain abstract information and build memories

quantamagazine.org

104 points by headalgorithm 17 hours ago


froh - 44 minutes ago

I wonder how concepts are "learned" or how they evolve.

In the 1990s I studied "Women, Fire, and Dangerous Things" https://scholar.google.com/scholar?cluster=11854940822538766...

Among other things, it observes and points out how much concepts differ between cultures.

Ezku - 15 hours ago

An interesting piece featured in the article: “Concept and Location Neurons in the Human Brain Provide the ‘What’ and ‘Where’ in Memory Formation”, Nature Communications 2024 (https://doi.org/10.1038/s41467-024-52295-5)

This wasn’t in the article, but I feel it makes for good background reading: “Universal Principles Justify the Existence of Concept Cells”, Scientific Reports 2020 (https://doi.org/10.1038/s41598-020-64466-7)

observationist - 10 hours ago

This is really cool - it's a relatively well-known idea, but it's great to see it get refined and better understood. It's amazing how sparse the brain is; a single neuron can trigger a profound change in contextual relations, and play a critical role in how things get interpreted, remembered, predicted, or otherwise processed.

That single cell will have up to 10,000 features, and those features are implicitly processed; they're only activated if the semantic relevance of a particular feature surpasses some threshold of contribution to whatever it is you're thinking at a given moment. Each of those features is binary, either off or on at a given time t in processing. Compare this to artificial neural networks, where a particular notion or concept or idea is an embedding; if you had 10,000 features, each of those is activated and processed every pass. Attention and gating and routing and MoE get into sparsity and start moving artificial networks in the right direction, but they're still enormously clunky and inefficient compared to bio brains.

Implicit sparse distributed representation is how the brain can get to ~2% sparse activations, with rapid, precise, and deep learning of features in real time, where learning one new thing can recontextualize huge swathes of knowledge.
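To make the contrast concrete, here's a minimal numpy sketch of thresholding a dense activation vector into a binary ~2% sparse code. It's entirely illustrative: the k-winners-take-all rule is a stand-in for the "threshold of contribution" idea above, not a claim about actual cortical mechanics, and the 10,000/2% figures are just the numbers from this comment.

    import numpy as np

    def kwta_sparse_code(dense_activations, sparsity=0.02):
        # Keep only the top ~2% most active units as 1s; everything else is 0.
        # This turns the "threshold of contribution" idea into a hard
        # k-winners-take-all rule over the whole population.
        k = max(1, int(sparsity * dense_activations.size))
        winners = np.argpartition(dense_activations, -k)[-k:]
        code = np.zeros_like(dense_activations)
        code[winners] = 1.0
        return code

    rng = np.random.default_rng(0)
    dense = rng.normal(size=10_000)   # 10,000 candidate features, all computed
    sparse = kwta_sparse_code(dense)  # only ~200 of them (2%) switch on
    print(int(sparse.sum()), "of", sparse.size, "units active")

In the dense-embedding regime every one of the 10,000 dimensions participates in every pass; here only the winners carry information forward, which is the efficiency gap the comment is pointing at.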

These neurons also allow feats of memory, like learning the order of 10 decks of cards in 5 minutes, reciting a million digits of pi, or London cabbies acquiring "The Knowledge": every street, road, sidewalk, alley, bridge, and other feature of the city, traversable in their minds. It's wonderful that this knowledge is available to us, that the workings of our minds are being unveiled.

isaacfrond - an hour ago

LLMs can have concept cells as well. Anthropic artificially amplified a neuron (feature) in Claude responsible for the Golden Gate Bridge; the resulting LLM, called 'Golden Gate Claude', would mention the Golden Gate Bridge whenever it could.

https://www.anthropic.com/news/golden-gate-claude
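The underlying trick, activation steering, is easy to sketch. Anthropic's actual setup clamped a sparse-autoencoder feature inside Claude, which isn't public code; the following is a hypothetical stand-in using GPT-2 and a random direction, just to show the shape of the technique. The model, layer choice, scale factor, and feature_direction are all assumptions, not the real "Golden Gate" feature.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Hypothetical setup: GPT-2 stands in for Claude, and a random unit
    # vector stands in for the learned "Golden Gate Bridge" feature.
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    layer = model.transformer.h[6]                         # arbitrary middle block
    feature_direction = torch.randn(model.config.n_embd)   # placeholder direction
    feature_direction /= feature_direction.norm()

    def amplify_feature(module, inputs, output):
        # Add a scaled copy of the feature direction to the residual stream,
        # mimicking "clamping a feature to a high value" on every token.
        hidden = output[0]
        return (hidden + 8.0 * feature_direction.to(hidden.dtype),) + output[1:]

    handle = layer.register_forward_hook(amplify_feature)
    ids = tokenizer("Tell me about yourself.", return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=40)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
    handle.remove()

With a random direction this just degrades the output; with a direction that actually corresponds to a learned feature, generations get pulled toward that concept, which is what Golden Gate Claude demonstrated.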

AIorNot - 12 hours ago

Same concept in LLMs as referenced in this video by Chris Olah at Anthropic:

https://www.reddit.com/r/OpenAI/comments/1grxo1c/anthropics_...

also see: https://distill.pub/2021/multimodal-neurons/

westurner - 14 hours ago

skos:Concept RDFS Class: https://www.w3.org/TR/skos-reference/#concepts

schema:Thing: https://schema.org/Thing

atomspace:ConceptNode: https://wiki.opencog.org/w/Atom_types .. https://github.com/opencog/atomspace#examples-documentation-...

SKOS Simple Knowledge Organization System > Concepts, ConceptScheme: https://en.wikipedia.org/wiki/Simple_Knowledge_Organization_...
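For anyone who hasn't used these vocabularies, a minimal rdflib sketch of declaring a skos:Concept and placing it in a scheme. The ex: namespace and the particular labels here are made up for illustration; only the SKOS terms come from the spec linked above.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    EX = Namespace("http://example.org/")   # hypothetical namespace
    g = Graph()
    g.bind("skos", SKOS)
    g.bind("ex", EX)

    # Declare a concept, label it, and relate it within a concept scheme.
    g.add((EX.GoldenGateBridge, RDF.type, SKOS.Concept))
    g.add((EX.GoldenGateBridge, SKOS.prefLabel, Literal("Golden Gate Bridge", lang="en")))
    g.add((EX.GoldenGateBridge, SKOS.broader, EX.Bridge))
    g.add((EX.GoldenGateBridge, SKOS.inScheme, EX.LandmarkScheme))

    print(g.serialize(format="turtle"))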

But temporal instability observed in repeat functional-imaging studies indicates that functional localization is not constant: the regions of the brain that activate for a given cue vary over time.

From https://news.ycombinator.com/item?id=42091934 :

> "Representational drift: Emerging theories for continual learning and experimental future directions" (2022) https://www.sciencedirect.com/science/article/pii/S095943882... :

>> Future work should characterize drift across brain regions, cell types, and learning.
