A Theory of Deep Learning

elonlit.com

75 points by elonlit a day ago


arolihas - an hour ago

Idk, to me this is just redescribing what deep neural networks do without actually explaining why anything happens. I guess it "unifies" things, but I'm kinda over most unifying theories. Everything is Bayesian, everything is a graph or a group or some other fancy geometric structure, everything is a category. Ultimately, the best framework is whatever is useful enough to explain what's happening in such a way that a practitioner can manipulate the model towards a desired outcome. In other words, where is the knob? The tool they share may be interesting, and I hope to play with it to see what happens at different levels of noise applied to the labels.
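
For what it's worth, a minimal sketch of what "different levels of noise applied to the labels" could look like in practice, assuming a plain classification setup (the `corrupt_labels` helper, the class count, and the toy labels here are illustrative stand-ins, not anything from the linked tool):

    import numpy as np

    def corrupt_labels(y, noise_rate, num_classes, seed=0):
        """Replace a `noise_rate` fraction of labels with a random *different* class."""
        rng = np.random.default_rng(seed)
        y_noisy = y.copy()
        idx = rng.choice(len(y), size=int(noise_rate * len(y)), replace=False)
        for i in idx:
            y_noisy[i] = rng.choice([c for c in range(num_classes) if c != y[i]])
        return y_noisy

    # toy labels standing in for a real training set
    y_train = np.random.default_rng(1).integers(0, 10, size=1000)

    for noise_rate in [0.0, 0.1, 0.2, 0.4, 0.8]:
        y_noisy = corrupt_labels(y_train, noise_rate, num_classes=10)
        print(f"noise_rate={noise_rate:.1f}  labels changed: {np.mean(y_noisy != y_train):.2f}")
        # retrain on the noisy labels here and record train/test accuracy at each level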

prideout - 2 hours ago

This is a fascinating mathematical framework, but the post title might be a bit of an overreach. I often wonder whether "a theory of deep learning" could exist that is stated succinctly and that predicts (1) scaling laws and (2) the surprising reliability of gradient descent.

Note that I said "predict," not "describe." It feels like we're still in the era of Kepler, not Newton.

ks2048 - 2 hours ago

The relevant paper: "A Theory of Generalization in Deep Learning". https://arxiv.org/abs/2605.01172

hashta - 36 minutes ago

Interesting read. I remember the grokking paper when it came out, but I don't think I've ever seen that classic grokking loss curve myself on real data. Curious if others have seen it more often in practice.

smokel - 2 hours ago

This essay seems to be related to the paper "There Will Be a Scientific Theory of Deep Learning" [1] which was discussed here recently [2].

[1] https://arxiv.org/pdf/2604.21691

[2] https://news.ycombinator.com/item?id=47893779

airza - 2 hours ago

A very fascinating read.

As a fellow Tufte CSS enjoyer: why is user-select turned off on the sidenotes? I would quite badly like to be able to copy-paste them.

jdw64 - 2 hours ago

Does anyone happen to know what font this site is using? It looks really elegant.

refulgentis - 3 hours ago

This is a beautifully written way of saying “Some parts of what the network memorizes affect test behavior, and some don’t.” But that’s not a theory of deep learning; the grand unified theory would explain that.

We're given a signal channel and a reservoir. Signal lives in the channel, noise lives in the reservoir, and the reservoir supposedly doesn’t show up at test time.

Okay, but then we're left with the question: why would SGD put the right things in the right bucket?

If the answer is “because the reservoir is defined as the stuff that doesn’t transfer to test,” then this is close to circular.

The Borges/Lavoisier stuff is a tell. “We have unified the field” rhetoric should come after nontrivial predictions and results. Claiming to solve benign overfitting, double descent, grokking, implicit bias, risk of training on population, how to avoid a validation set, and, last but not least, skipping training by analytically jumping to the end would be 6 theory papers, 3 NeurIPS winners, and a $10B startup. Let's get some results before we tell everyone we unified the field. :) I hope you're right.