The unreasonable effectiveness of the Fourier transform
joshuawise.com | 296 points by voxadam a day ago
People go all dopey-eyed about "frequency space"; that's a red herring. The takeaway should be that a problem-centric coordinate system is enormously helpful.
After all, what Copernicus showed is that the mind-bogglingly complicated motions of the planets become a whole lot simpler if you change the coordinate system.
The Ptolemaic model of epicycles was an ad hoc form of Fourier analysis: decomposing periodic motions into circles upon circles.
Back to frequencies: there is nothing obviously frequency-like in real-space Laplace transforms*. The real insight is that differentiation and integration become simple if the coordinates used are exponential functions, because exponential functions remain (scaled) exponentials when passed through those operations.
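That property is easy to demonstrate numerically: in the Fourier basis, d/dx collapses to multiplication by ik. A minimal NumPy sketch (the grid size, test function, and tolerance are illustrative choices, not anything from the talk):

```python
import numpy as np

# Sample f(x) = sin(3x) on a periodic grid.
N = 256
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = np.sin(3 * x)

# In the Fourier basis, d/dx is just multiplication by i*k.
k = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi / N)  # integer wavenumbers
df = np.fft.ifft(1j * k * np.fft.fft(f)).real

# The spectral derivative matches the exact derivative 3*cos(3x).
assert np.allclose(df, 3 * np.cos(3 * x), atol=1e-10)
```

The same trick turns integration into division by ik, which is why ODEs become algebra in this basis.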
For digital signals, what helps is the Walsh-Hadamard basis. Its elements are not like frequencies, and they are not at all the square-wave analogue of sinusoidal waves either. People call the result sequency space, as a well-justified pun.
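For readers who haven't met them, the Walsh-Hadamard basis is easy to build. A small sketch using Sylvester's recursive construction (the "sequency" of a row is just its number of sign changes, the analogue of frequency here):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of the n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)

# Rows are +/-1 square-wave-like patterns; "sequency" counts sign changes.
sequency = [int(np.sum(row[:-1] != row[1:])) for row in H]

# The rows form an orthogonal basis: H @ H.T = n * I.
assert np.array_equal(H @ H.T, 8 * np.eye(8))
```

For n = 8 the sequencies 0 through 7 each appear exactly once, which is what makes "sequency space" a coordinate system rather than just a bag of patterns.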
My suspicion is that we are in a Ptolemaic state as far as GPT-like models are concerned. We will eventually understand them better once we figure out the right coordinate system in which to think about their dynamics.
* There is a connection though, through the exponential form of complex numbers, or more prosaically, when multiplying rotation matrices the angles combine additively. So angles and logarithms have a certain unity, or character.
All these transforms are switching to an eigenbasis of some differential operator (one that usually corresponds to a differential equation of interest): spherical harmonics, Bessel and Hankel functions (the radial analogues of sines/cosines and complex exponentials, respectively), and so on.
The next big jump was to collections of functions not parameterized by subsets of R^n. Wavelets use a tree-shaped parameter space.
There’s a whole interesting area of overcomplete basis sets that I have been meaning to look into, where you give up orthogonality and all those nice properties in exchange for having multiple options for adapting better to different signal characteristics.
I don’t think these transforms are going to be relevant to understanding neural nets, though. Neural nets are, by their nature, doing something with nonlinear structures in high dimensions that are not smoothly extended across their domain, which is the opposite of the problem all our current approaches to functional analysis deal with.
You may well be right about neural networks. Sometimes models that seem nonlinear turn linear if those nonlinearities are pushed into the basis functions, so one can still hope.
For GPT-like models, I see sentences as trajectories in the embedding space. These trajectories look quite complicated, and nothing obvious emerges from their geometry. My hope is that if we get the coordinate system right, we may see something more intelligible going on.
This is just a hope, a mental bias. I do not have any solid argument for why it should be as I describe.
> Sometimes models that seem nonlinear turns linear if those nonlinearities are pushed into the basis functions, so one can still hope.
That idea was pushed to its limit by the Koopman operator theory. The argument sounds quite good at first, but I’ve heard critiques from reputable sources in the field that basically say that it can’t really work in its current formulation.
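For context, the workhorse of applied Koopman theory is dynamic mode decomposition (DMD): fit the best linear operator that advances one snapshot of the system to the next. A toy sketch where the dynamics really are linear, so DMD recovers them exactly (the rotation system here is an invented example, not from the critiques mentioned above):

```python
import numpy as np

# Snapshots of a linear system x_{t+1} = A_true @ x_t; Koopman is exact here.
rng = np.random.default_rng(0)
theta = 0.1
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])  # rotation by theta

snapshots = [rng.standard_normal(2)]
for _ in range(50):
    snapshots.append(A_true @ snapshots[-1])
X = np.array(snapshots).T            # columns are states in time order
X0, X1 = X[:, :-1], X[:, 1:]

# DMD: the least-squares linear operator mapping each snapshot to the next.
A_dmd = X1 @ np.linalg.pinv(X0)
assert np.allclose(A_dmd, A_true, atol=1e-8)
```

The critique alluded to above is essentially that for genuinely nonlinear dynamics, no finite-dimensional lift makes this step exact.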
Note that I'm not great at math so it's possible I've entirely misunderstood you.
Here's an example of directly leveraging a transform to optimize the training process. ( https://arxiv.org/abs/2410.21265 )
And here are two examples that apply geometry to neural nets more generally. ( https://arxiv.org/abs/2506.13018 ) ( https://arxiv.org/abs/2309.16512 )
From the abstract and skimming a few sections of the first paper, imho it is not really the same. The paper is moving the loss gradient to the tangent dual space where weights reside for better performance in gradient descent, but as far as I understand neither the loss function nor the neural net are analyzed in a new way.
The Fourier and Wavelet transforms are different as they are self-adjoint operators (=> form an orthogonal basis) on the space of functions (and not on a finite dimensional vector space of weights that parametrize a net) that simplify some usually hard operators such as derivatives and integrals, by reducing them to multiplications and divisions or to a sparse algebra.
So in a certain sense these methods are looking at projections, which are unhelpful when thinking about NN weights since they are all mixed with each other in a very non-linear way.
Thanks a bunch for the references. Reading the abstracts, these use a different idea from what Fourier analysis is about, but they should nonetheless be a very interesting read.
I feel like this is the way we should have learned Fourier and Laplace transforms in my DSP class. Not just blindly applying formulas and equations.
I’d argue that most if not all of the math that I learned in school could be distilled down to analyzing problems in the correct coordinate system or domain! The actual manipulation isn’t that esoteric once you get in the right paradigm. And those professors never explained things at that kind of higher theoretical level, all I remember was the nitty gritty of implementation. What a shame. I’m sure there’s higher levels of mathematics that go beyond my simplistic understanding, but I’d argue it’s enough to get one through the full sequence of undergraduate level (electrical) engineering, physics, and calculus.
It’s kind of intriguing that predicting the future state of any quantum system becomes almost trivial—assuming you can diagonalize the Hamiltonian. But good luck with that in general. (In other words, a “simple” reference frame always exists via unitary conjugation, but finding it is very difficult.)
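The "trivial once diagonalized" part can be sketched in a few lines: diagonalize H once, and propagating to any time t is just exponentiating the eigenvalues. A NumPy illustration with a random Hermitian matrix standing in for a Hamiltonian:

```python
import numpy as np

# A random Hermitian matrix stands in for a Hamiltonian.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (M + M.conj().T) / 2

# Diagonalize once; then the propagator at ANY time t is trivial:
#   U(t) = V exp(-i * diag(E) * t) V^dagger
E, V = np.linalg.eigh(H)

def U(t):
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

# Sanity checks: the group property and unitarity.
assert np.allclose(U(0.3) @ U(0.7), U(1.0))
assert np.allclose(U(2.0) @ U(2.0).conj().T, np.eye(4))
```

The hard part, as the parent says, is that `eigh` on a 2^n-dimensional many-body Hamiltonian is exponentially expensive, so the "simple frame" exists but is out of reach.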
Indeed.
It's disconcerting at times, the scope of finite and infinite dimensional linear algebra, especially when done on a convenient basis.
My favorite story about the Fourier transform is that Carl Friedrich Gauss stumbled upon the algorithm for the Fast Fourier Transform over a century before Cooley and Tukey’s publication in 1965 (which itself revolutionized digital signal processing).[1] He was apparently studying the motion of the asteroids Pallas and Juno and wrote the algorithm down in his notes, but it never made it into public knowledge.
[1] https://www.cis.rit.edu/class/simg716/Gauss_History_FFT.pdf
There is a saying about Gauss: when another mathematician came to show him a new result, Gauss would remark that he had already worked on it, open a drawer in his desk, and pull out a pile of papers on the same topic.
One of the things I admire about many top mathematicians today like Terence Tao is that they are clearly excellent mentors to a long list of smart graduate students and are able to advance mathematics through their students as well as on their own. You can imagine a half-formed thought Terence Tao has while driving to work becoming a whole dissertation or series of papers if he throws it to the right person to work on.
In contrast, Gauss disliked teaching and also tended to hoard those good ideas until he could go through all the details and publish them in the way he wanted. Which is a little silly, as after a while he was already widely considered the best mathematician in the world and had no need to prove anything to anyone - why not share those half-finished good ideas like Fast Fourier Transforms and let others work on them! One of the best mathematicians who ever lived, but definitely not my favorite role model for how to work.
Someone blew my mind by convincing me to read Bush’s “As we may think” which was published in 1945. Then I started digging into him and discovered he was also the second president of the ACM, was instrumental in shaping the formation of the National Science foundation (mainly by critiquing their initial plans as unworkable) and also Claude Shannon’s doctoral advisor. Because of course he was.
Well, in that time it was more or less how mathematics worked. It was a way of showing off, and often it would be a case of "Hey I've solved this problem, bet no-one else can". It was only later it became a lot more collaborative (and a bit more focused on publishing proofs).
You're correct that the culture of mathematics has changed a lot, and has become much more collaborative. The rise of the modern doctoral training system in Germany later in the 19th century is also relevant. So really Gauss's example points primarily to how much mathematics has changed. But at the same time, I think you could reasonably take Gauss to task even applying the standards of his own era - compare him with Euler, for example, who was much more open with publication and generous with his time and insights, frequently responding to letters from random people asking him mathematical questions, rather like Tao responding to random comments on his blog (which he does). I admire Euler more, and he was born 70 years before Gauss.
Of course, irascible brilliance and eccentricity has an honorable place in mathematics too - I don't want to exclude anyone. (Think Grigori Perelman and any number of other examples!)
There's also this notion of holding themselves to their own standards.
They, Newton included, would often feel that their work was not good enough, that it was not completed and perfected yet and therefore would be ammunition for conflict and ridicule.
Gauss did not publicize his work on complex numbers because he thought he would be attacked for it. To us that may seem weird, but there is no dearth of examples of people who were attacked for their mostly correct ideas.
Deadly or life changing attacks notwithstanding, I can certainly sympathize. There's not in figuring things out, but the process of communicating that can be full of tediousness and drama that one maybe tempted to do without.
Weird typo in what I wrote. It's past the edit window. This is what I had meant to type:
There's joy in figuring things out, but the process of communicating what has been so figured can be tedious and full of drama -- the kind of drama that one maybe tempted to do without.
not sharing something you had no time to properly check is entirely understandable
> There is a saying about Gauss: when another mathematician came to show him a new result, Gauss would remark that he had already worked on it, open a drawer in his desk, and pull out a pile of papers on the same topic.
As if PhD students need more imposter syndrome to deal with. On a serious note, I wonder what conditions allow such minds to grow. I guess a big part is genetics, but I am curious whether the "epi" is relevant and how much.
Imposter syndrome? If I was a PhD-level student (back then) and had an idea - and it turned out that Gauss had also thought of the idea, then written it out, and he kept the notes right in his desk - yeah. I'd take that as proof that I was one of the world's top mathematicians.
When I interned at Chevron someone said they (or some other oil company) were using Fourier transforms in the 1950's for seismic analysis but kept it a secret for obvious reasons. I think you couldn't (can't?) patent math equations.
Gauss's notes and margins are riddled with proofs he didn't bother to publish. He was wild.
Not sure if true, but allegedly he insisted his son not go into maths, as he would simply end up in his father's shadow as he deemed it utterly Impossible to surpass his brilliance in maths :'D
> as he would simply end up in his father's shadow as he deemed it utterly Impossible to surpass his brilliance in maths
Definitely true, but also bad parenting. Gauss was somewhat of a freak of nature when it came to math. He and Euler are two of the most unreasonably productive mathematicians of all time.
But granting that what he deemed was true, was this really bad parenting? It could be to head off competition, or it could be brutal realism to head off future depression.
Nepotism existed since time immemorial but for a mathematical genius, what was the nepotistic deliverable for the child? A sinecure placement at university?
> But what he deemed being posited as true
Implicit in the "correctness" of this motive is the idea that unless you're #1 in your field, you are nothing (depression implies strong feelings of worthlessness).
I don't know if you think that's a great lesson to teach your kids as a parent, but I don't.
Not wanting your child to be permanently compared to you for their entire career is entirely understandable
It’s unusual to tell others not to do something because you’re projecting they’re secretly doing it to compete with you, or that they’ll be depressed when they don’t do what you did.
Doubly so when the rationale is “I’m so fucking awesome”
Triply so when it’s something you’re passionate about, presumably inherently.
Quadruply so when it’s your child. It’s tough as a kid hearing your parents come up with elongated excuses why you can’t dream and work towards a future.
When you let people find their own way, you might even learn something from it (ex. 70 yo Gauss learns he didn’t need to tie his mental state to his work because his son doesn’t suddenly become depressed from not matching dads output)
Re: second half, sounds about right, confused at relevancy though (is the idea the child would only do it to pursue nepotistic spoils and an additional reason is the spoils aren’t even good?)
I posit Gauss knew he was a GOAT and had ego. But I also posit he loved his children.
So, a nepotistic delivery was beneficial for his family, and advising his son to seek excellence outside the shadow cast by Gauss himself wasn't stamping on dreams (in my view) it was seeking the happiest outcome.
Without overdoing it, the suicide rate for rich kids with famous parents isn't nothing. There are positive examples, Stella McCartney comes to mind. She isn't wings.
What does "She isn't wings" mean?
Paul McCartney started a band called Wings and she was also in it. I think the idea is "she received nepotistic spoils, lived in the shadow of dad even in his backup projects that 0.01% of people who know the Beatles even recognize." (This elides a very successful career as a fashion designer, as well as the awkward question of what _would_ have guaranteed her more “success”, as well as a lack of understanding of how you feel after grand success you were chasing for its own sake (empty))
For what it’s worth, his children were quite successful by all accounts. Two of the boys became successful businessmen after emigrating to the US and one of the boys became a director of the railway network in Hannover. Seems as though they weren’t harmed by their upbringing.
I mean just like most scientists at the time Gauss was rather wealthy, so it is unsurprising they were fine.
How was Gauss so productive with 6 children?
He wasn't as productive as he could have been. This seems like Chuck Norris and Jeff Dean territory after all.
I'd hazard by not dealing with them until they'd been schooled enough to operate as computers and GI (general intelligence) assistants.
There's a spread of farmers, railroad and telegraph directors, high level practical infomation management skills in the children.
I only have 5 kids, and I am also not nearly as productive as Gauss but to a certain degree, it feels to me like responsibility kind of tries to force me to be more effective.
My biggest missing feature for Grafana is that I want a Fourier transform that can identify epicycles in spikes of traffic, like the first Monday of the month, or noon on Tuesdays.
I had a couple charts that showed a trend line of the last n days until someone in Ops noticed that three charts were fully half of our daily burn rate for Grafana. Oops. So I started showing a -7 days line instead, which helped me but confused everyone else.
A signal cannot be both time and frequency band limited. Many years ago I was amazed when I read that this fact I learned in my undergraduate is equivalent to the Uncertainty Principle!
On a more mundane note: my wife and I always argue about whose method of loading the dishwasher is better: she goes slowly and meticulously while I do it fast. It occurred to me we were optimizing for the frequency and time domains, respectively, i.e. I was minimizing time spent while she was minimizing the number of washes :-)
Signals can be approximately frequency and time bandlimited, though, meaning the set of values such that the absolute value exceeds any epsilon is compact in both domains. A Gaussian function is one example.
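A quick numerical illustration of that point (the grid and epsilon thresholds are arbitrary choices): sample a Gaussian, FFT it, and check that both the signal and its spectrum fall below a tiny epsilon outside bounded intervals.

```python
import numpy as np

# A sampled Gaussian and the magnitude of its FFT.
N = 1024
t = np.linspace(-50, 50, N)
dt = t[1] - t[0]
g = np.exp(-t**2 / 2)

G = np.abs(np.fft.fftshift(np.fft.fft(g)))
G /= G.max()
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=dt))

# Both the signal and its spectrum sit below epsilon outside a bounded set.
assert g[np.abs(t) > 10].max() < 1e-10
assert G[np.abs(freqs) > 2].max() < 1e-10
```

The Gaussian is special because its Fourier transform is again a Gaussian, so the decay is symmetric between the two domains.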
Another example: ears are excellent at breaking down the frequency of sounds, but are imprecise about where the sound is coming from; whereas eyes are excellent at telling you where light is coming from, but imprecise about how its frequencies break down.
that's mostly due to light waves being FAR shorter, and the eye having many orders of magnitude more "sensors"
Ears are essentially 2 "pixels" of sound sensing; and for that limitation they are ABSOLUTELY AMAZING at pointing out the sound source.
It’s literally the Heisenberg uncertainty principle, applied to signal processing.
For those who don't get this comment, the Heisenberg uncertainty principle applies to any two quantities that are connected in QM via a Fourier transform. Such as position and momentum, or time and energy. It is really a mathematical theorem that there is a lower bound on the variance of a function times the variance of its Fourier transform.
That lower bound is the uncertainty principle, and that lower bound is hit by normal distributions.
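That bound is easy to check numerically for the Gaussian case, using the convention where the product of standard deviations in time and angular frequency is bounded below by 1/2 (the grid sizes here are arbitrary):

```python
import numpy as np

# sigma_t * sigma_omega >= 1/2; a Gaussian achieves the bound exactly.
N = 4096
t = np.linspace(-20, 20, N, endpoint=False)
dt = t[1] - t[0]
f = np.exp(-t**2 / 2)

# Variance of |f|^2 in time.
var_t = np.sum(t**2 * f**2) / np.sum(f**2)

# Variance of |F|^2 in angular frequency.
F = np.fft.fft(f)
omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)
var_w = np.sum(omega**2 * np.abs(F)**2) / np.sum(np.abs(F)**2)

sigma_product = np.sqrt(var_t * var_w)
assert abs(sigma_product - 0.5) < 1e-6
```

Replace the Gaussian with any other shape and `sigma_product` only goes up, never down.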
thank you for that reminder/clarification. I forget sometimes how much we think we have clear pictures of how things like that work when really we're just listening to someone trying to explain what the math is doing and we're adding in detail.
Thanks. I always assumed it was more a quirk of the universe than something driven by pure mathematics. Amazing.
Yes that’s fair to say. The tradeoff is mathematically inevitable. Physics just dictates the constants.
It’s also the kind of thinking that can throw a wet blanket on the “beauty” of e.g. Eulers identity (not being critical, I genuinely appreciate the replies I got)
> I was minimizing time so spent while she was minimizing number of washes
I'm probably just slow, but I'm not following. Do you mean because you went fast, you had to run another cycle to clean everything properly?
If you haven't already, you should watch the Technology Connections series on dishwashers.
Since I’m rushing to load it as fast as possible the packing is not as good as hers so some dishes are left out. Overall this leads to more loads.
The self loading dishwasher would be the greatest marriage saving invention since car navigation systems.
you just need to go "if you want it loaded your way, you do it" and all is solved
And if loading the dishwasher is at the top of your marital issues, you're probably in a very happy marriage.
The constant small degree of conflict and strife is key to happiness, people can't be permanently happy, they just find ways to sabotage when they do
Once you start looking at the world through the lens of frequency domain a lot of neat tricks become simple. I have some demo code that uses fourier transform on webcam video to read a heartrate off a person's face, basically looking for what frequency holds peak energy.
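A sketch of the idea with synthetic data (the amplitudes, frame rate, and band limits are made-up stand-ins for real webcam measurements, not the parent's actual code):

```python
import numpy as np

fs = 30.0                                    # assumed webcam frame rate, Hz
t = np.arange(0, 30, 1 / fs)                 # 30 seconds of frames
bpm_true = 72
# Synthetic stand-in for mean skin brightness: a weak pulse plus camera noise.
rng = np.random.default_rng(0)
brightness = 0.05 * np.sin(2 * np.pi * (bpm_true / 60) * t)
brightness += 0.1 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(brightness - brightness.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Only search where human heart rates live: roughly 40-180 BPM.
band = (freqs > 40 / 60) & (freqs < 180 / 60)
bpm_est = 60 * freqs[band][np.argmax(spectrum[band])]
assert abs(bpm_est - bpm_true) < 3           # within the frequency resolution
```

The real version just feeds the per-frame mean of a skin region (usually the green channel) into `brightness`; restricting the search to a physiologically plausible band is what makes it robust to noise.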
It's effectively the underpinning of all modern lossy compression algorithms. The DCT which underlies codecs like Jpeg, h264, mp3, is really just a modified FFT.
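The core move can be sketched in one dimension (this is an illustration of the principle, not JPEG's actual block-and-quantization-table pipeline; it assumes SciPy is available):

```python
import numpy as np
from scipy.fft import dct, idct

# A smooth 1-D "image row".
n = 64
x = np.linspace(0, 1, n)
signal = np.sin(2 * np.pi * x) + 0.3 * np.cos(6 * np.pi * x)

# Transform, keep only the 8 largest coefficients, transform back.
coeffs = dct(signal, norm='ortho')
small = np.argsort(np.abs(coeffs))[:-8]
coeffs[small] = 0.0
restored = idct(coeffs, norm='ortho')

# 8 of 64 coefficients carry nearly all the signal energy.
assert np.linalg.norm(restored - signal) < 0.1 * np.linalg.norm(signal)
```

Real codecs do this on 8x8 blocks in 2-D and quantize rather than hard-zero, but the energy-compaction idea is the same.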
Inter/intra-prediction is more important than the DCT. H264 and later use simpler degenerate forms of it because that's good enough and they can define it with bitwise accuracy.
>Once you start looking at the world through the lens of frequency domain a lot of neat tricks become simple.
Not the first time I've heard this on HN. I remember a user commenting once that it was one of the few perspective shifts in his life that completely turned things upside down professionally.
There is also a loose analogy with finance: act (trade) when prices cross a certain threshold, not after a specific time.
I don't think pulsing skin (due to blood flow) is visible from a webcam though.
Plenty of sources suggest it is:
https://github.com/giladoved/webcam-heart-rate-monitor
https://medium.com/dev-genius/remote-heart-rate-detection-us...
The Reddit comments on that second one have examples of people doing it with low quality webcams: https://www.reddit.com/r/programming/comments/llnv93/remote_...
It's honestly amazing that this is doable.
My dumb ass sat there for a good bit looking at the example in the first link thinking "How does a 30-60 Hz webcam have enough samples per cycle to know it's 77 BPM?". Then it finally clicked in my head beats per minute are indeed not to be conflated with beats per second... :).
Non-paywalled version of the second link https://archive.is/NeBzJ
MIT was able to reconstruct voice by filming a bag of chips on a 60FPS camera. I would hesitate to say how much information can leak through.
https://news.mit.edu/2014/algorithm-recovers-speech-from-vib...
I befriended the guy in high school who built a Tesla coil. For his next trick he was building a laser to read sound off of plate glass. The decoder was basically an AM radio. Which high school me found slightly disappointing.
Sure it is. Smart watches even do it using the simplest possible “camera” (an LED).
It totally is. Look for motion-magnification in the literature for the start of the field, and then remote PPG for more recent work.
You will be surprised by The Unreasonable Effectiveness of opencv.calcOpticalFlowPyrLK
It is, I've done it live on a laptop and via the front camera of a phone. I actually wrote this thing twice, once in Swift a few years back, and then again in Python more recently because I wanted to remember the details of how to do it. Since a few people seem surprised this is feasible maybe it's worth posting the code somewhere.
It is, but there's a lot of noise on top of it (in fact, the noise is kind of necessary to avoid it being 'flattened out' and disappearing). The fact that it covers a lot pixels and is relatively low bandwidth is what allows for this kind of magic trick.
The frequency resolution must be pretty bad though. You need 1 minute of samples for a resolution of 1/60 Hz. Hopefully the heartrate is staying constant during that minute.
I have seen apps that use the principle for HRV. Finger pushed on phone cam.
You can do it with infrared, and webcams see some of it, but I'm not sure if they're sensitive enough for that.
I would heartily recommend Sebastian Lague's latest video, which covers this in a very approachable way: https://www.youtube.com/watch?v=08mmKNLQVHU
Okay, who's gonna write the story
> The unreasonable effectiveness of The Unreasonable Effectiveness title?
Unreasonable effectiveness is all you need.
"Unreasonable effectiveness is all you need" considered harmful
Don’t let unreasonable effectiveness become the enemy of reasonable effectiveness.
A closer look at unreasonable effectivenessness.
A rigorous study of the "unreasonable effectiveness" method.
I did some analysis on top title patterns. Both of these make the list pretty handily: https://projects.peercy.net/projects/hn-patterns/index.html
Funny. You might want to do that modulo capitalization, and perhaps some other common substitutions (LLM/LLMs/Large Language Model/Large Language Models, it's/it is, what's/what is, I am/I'm), but they change the number of words, so better opt for the shortest alternative.
Given how much of the talk is about the original paper the title references, and how the Fourier transform turns out to be unreasonably effective at allowing communication over noisy channels, I'd say it's a reasonable reference.
It's a play on the famous 1960 essay "The Unreasonable Effectiveness of Mathematics in the Natural Sciences".
I agree this is getting old after 75 years. Not least because it seems slightly manipulative to disguise a declarative claim ("The Fourier transform is unreasonably effective."), which could be false, as a noun phrase ("The unreasonable effectiveness of the Fourier transform"), which doesn't look like a thing that can be wrong.
Also, most of the articles with this kind of title (those posted on HN at least) are about computational/logical processes, which are, by definition, reasonable.
Agreed, these kind of titles are very silly.
FTs are actually very reasonable, in the sense that they are easy to reason about, conceptually and in practice.
There's another title referenced in that link which is equally asinine: "Eugene Wigner's original discussion, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences". "
Like, wtf?
Mathematics is the language of science, science would not compound or be explainable, communicable, or model-able in code without mathematics.
It's actually both plainly obvious for mathematics then to be extremely effective (which it is) and also be evidently reasonable as to why, ergo it is not unreasonably effective.
Also, the slides are just FTs 101: the same material as in any basic course.
Hi, original presenter here :) The beginning is FTs 101. The end gets more application-centric around OFDM and is why it feels 'unreasonably effective' to me. If it feels obvious, there's a couple of slides at the end that are food for thought jumping off points. And if that's obvious to you too, let's collab on building an open source LTE modem!
If one wants to contribute to an open-source LTE modem, the best place to start may be OpenLTE: https://en.wikipedia.org/wiki/OpenLTE The core of any LTE modem is software, even if it is written for DSPs or other low-level software.
> Mathematics is the language of science
So, biology and medicine are not sciences? Or are only sciences to the extent they can be mathematically described?
The scientific method and models are much more than math. Equating reality with the math has led to myriad misconceptions, like vanishing cats.
And silly is good for a title -- descriptive and enticing -- to serve the purpose of eliciting the attention without which the content would be pointless.
They are still capable of being described with math; we are just not capable of doing the math. Or, probably better put, there is a diminishing return to formalising those systems, as our cognitive abilities are limited and we couldn't reason about those models. That leaves those fields using very approximate models, based on human-language descriptions, that can be reasoned about.
Which means, the language of some fields can’t be math.
However, I don’t think the original presenter was asserting those fields aren’t science; that’s an unreasonable interpretation. More so, ideally they would use math, as it is a language that would help prevent the silly argument “so, Y is not X? Or is Y only X provided Y is in the subset of X that excludes Z?”
(Even in Engineering, we hit this cognitive limit, and all sorts of silliness emerges about why things are or are not formalised)
I find it hard to parse the middle of your post. Are you saying Wigner's article, which is what all the "unreasonable effectiveness" titles reference, is silly?
If that is what you are saying I suggest that you actually go back and read it. Or at least the Wiki article:
https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness...
By means of contrast: I think it's clear that mathematics is, for example, not unreasonably effective in psychology. It's necessary and useful and effective at doing what it does, but not surprisingly so. Yet in the natural sciences it often has been. This is not a statement about mathematics but about the world.
(As Wittgenstein put it some decades earlier: "So too the fact that it can be described by Newtonian mechanics asserts nothing about the world; but this asserts something, namely, that it can be described in that particular way in which as a matter of fact it is described. The fact, too, that it can be described more simply by one system of mechanics than by another says something about the world.")
Yeah it's silly, I don't mean it in any mean spirited way.
> Wigner's first example is the law of gravitation formulated by Isaac Newton. Originally used to model freely falling bodies on the surface of the Earth, this law was extended based on what Wigner terms "very scanty observations"[3] to describe the motion of the planets, where it "has proved accurate beyond all reasonable expectations."
So despite 'very scant observations' they yielded a very effective model. Okay fine. But deciding they should be 'unreasonably' so is just a pithy turn of phrase.
That mathematics can model science so well is reductive, and it reduces to the core philosophy-of-mathematics question of whether math is invented or discovered. https://royalinstitutephilosophy.org/article/mathematics-dis...
Something can be effective, and can be unreasonably so if it's somehow unexpected, but I basically disagree that FTs or mathematics in general are unreasonably so since we have so much prior information to expect that these techniques actually are effective, almost obviously so.
I am not discussing the FT case. But as regards Wigner's article, the core thing he points out is that while we are used to the effectiveness of maths, centuries after Newton, there in fact is not any prior grounds to expect this effectiveness.
And no, this is unrelated to whether math is invented or discovered. If anything this is related to the extreme success of reductionism in physics.
As a general point of reflection: If an influential article by a smart person seems silly to you, it's good practice to entertain the question if you missed something, and to ask what others are seeing in it that you're missing.
It is likewise unreasonable to look down on any kind of world model from the past. Remember that you, in 2026, are benefitting from millions of aggregate improvements to a world model that you've absorbed passively through participation in society, and not through original thought. You have a different vantage point on many things as a result of the shoulders of giants you get to stand on.
It is pretty funny to flippantly call an influential paper by someone who received a Nobel Prize in Physics 'asinine'.
I mean... this one's actually a pretty good paper, but we also had Linus Pauling pontificate on Vitamin C, so maybe we should cool it with the appeals to Nobel authority alone.
He did have a very long life, so there's that.
It's not easy to separate cause and effect from direct and strong correlations that we experience.
The job of a scientist is not to give up on a hunch with a flippant "correlation is not causation" but pursue such hunches to prove it this way or that (that is, prove it or disprove it). It's human to lean a certain way about what could be true.
> FTs are actually very reasonable, in the sense that they are a easy to reason about conceptually and in practice.
ok but it's not the FTs that are unreasonable, it's the effectiveness
I think we all understand at this point that "unreasonable effectiveness" just means "surprisingly useful in ways we might not have immediately considered"
How about The unreasonable effectiveness title considered harmful?
The unreasonable effectiveness of considering something harmful.
Lies, Damned lies, and Unreasonable Effectiveness
Lies, Damned lies, and Unreasonable Effectiveness For Fun and Profit
Lies, Damned Lies, and Unreasonable Effectiveness: How Lies in Titles are Damn Near Unreasonably Effective
The Unreasonable Effectiveness of LLMs.
Ironically a very relevant and accurate title.
If you are from the ML/data science world, the analogy that finally unlocked the FFT for me is feature-size reduction using Principal Component Analysis. In both cases, you project data to a new "better" coordinate system ("time to frequency domain"), filter out the basis vectors that have low variance ("ignore high-frequency waves"), and project the data back to real space from those truncated dimensions ("IFFT: inverse transform to time domain").
Of course some differences exist (e.g. basis vectors are fixed in FFT, unlike PCA).
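A minimal numpy sketch of that analogy: project onto the Fourier basis, discard the high-frequency "components", and invert. The signal and cutoff are illustrative values, not anything from the original comment.

```python
import numpy as np

# A noisy signal: a slow 3-cycle sine plus high-frequency noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256, endpoint=False)
clean = np.sin(2 * np.pi * 3 * t)
signal = clean + 0.3 * rng.standard_normal(256)

# "Project onto the Fourier basis" (analogous to projecting onto PCA components).
coeffs = np.fft.rfft(signal)

# "Truncate the basis": zero out everything above a cutoff bin,
# like discarding low-variance principal components.
coeffs[10:] = 0

# "Inverse transform back to the original space."
smoothed = np.fft.irfft(coeffs, n=len(signal))
```

After the round trip, `smoothed` tracks the underlying 3-cycle sine much more closely than the noisy input did, which is exactly the denoising effect you get from PCA truncation.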
I don't like the Fourier transform. It is infinite, which makes it coarse and rough, and it gets everywhere.
Anybody who does anything in the real world with Fourier transforms uses the fast Fourier transform operating on windowed data. This eliminates all of that infinite support and infinite resolution of frequencies.
To be more precise, when working with sampled data at a uniform sample rate you use the Discrete Time Fourier Transform (DTFT), not the Fourier transform! Nonetheless, you still end up with an approximate spectrum, which is the signal spectrum convolved with the window function's spectrum.
In my view the Fourier Transform is still useful in the real world. For example you can use it to analytically derive the spectrum of a given window.
But I think the parent is hinting at wavelet basis.
Yes, but they commonly don't end up as the FT/FFT. For example, wavelets and the DCT.
So he explains OFDM in a way that implicitly does amplitude-shift keying.
I guess if you want to use different modulations you treat the complex number corresponding to the subcarrier as an IQ point in quadrature. So you take the same symbols, but read them off in the frequency domain instead of the time domain.
And I guess this works out to be equivalent to modulating these symbols normally at properly offset frequencies (just by the superposition principle).
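A toy numpy sketch of the idea in this subthread: place IQ constellation points (here QPSK, as an example of a modulation beyond amplitude-shift keying) on the subcarriers in the frequency domain, and let one IFFT superimpose all the modulated subcarriers at once. The parameters are made up for illustration.

```python
import numpy as np

# Hypothetical toy setup: 64 subcarriers, QPSK on each.
N = 64
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=(N, 2))

# Map each bit pair to a QPSK IQ point (a complex number per subcarrier).
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: the IFFT superimposes all modulated subcarriers into one
# time-domain OFDM symbol (superposition principle in action).
time_signal = np.fft.ifft(symbols)

# Receiver: an FFT separates the subcarriers again and reads the IQ
# points back off in the frequency domain.
recovered = np.fft.fft(time_signal)
```

Over this ideal (noiseless, distortionless) channel the recovered symbols match the transmitted ones exactly; a real system would add a cyclic prefix, equalization, and so on.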
This talk was given at Crowd Supply's 2025 Teardown convention, which, after going for the first time last year, I highly recommend to anyone interested in hardware development. I met a lot of super cool people and made my ticket price back 4x in free dev boards lol
Moreover, The Unreasonable Effectiveness of Linear, Orthogonal Change of Basis.
Learning about the Fourier transform in my Signals and Systems class was mind-opening. The idea that you can represent any periodic function with sinusoidal functions would never have occurred to me; I would have said it wasn't possible.
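A quick numpy sketch of that "not possible" intuition being wrong: a partial Fourier series built from nothing but sines converges to a square wave. The number of harmonics is an arbitrary choice for illustration.

```python
import numpy as np

# Square wave on [0, 2*pi) and its partial Fourier series:
# (4/pi) * sum over odd n of sin(n*t)/n.
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
square = np.sign(np.sin(t))

approx = np.zeros_like(t)
for k in range(50):          # first 50 odd harmonics
    n = 2 * k + 1
    approx += (4 / np.pi) * np.sin(n * t) / n

# Away from the jump discontinuities the partial sum hugs the square
# wave; right at the jumps the Gibbs overshoot never goes away.
mask = (t > 0.5) & (t < np.pi - 0.5)
print(np.max(np.abs(approx[mask] - square[mask])))
```

Adding more harmonics shrinks the error everywhere except in an ever-narrower neighborhood of the discontinuities, which is the classic Gibbs phenomenon.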
At the time of his death at the hands of a Roman soldier, the ancient mathematician Archimedes is said to have yelled "Don't disturb my circles!" while calculating in the sand. Much later, a few years ago, one of his handbooks, an overwritten palimpsest, was found to contain elements of modern calculus. If both these concepts had been saved and spread through the Middle Ages, human civilisation might have developed 1000 years sooner.
Too bad -- the article doesn't mention Gauss. The Fourier transform is best presented to students in its original mathematical form, then coded in the FFT form. It also serves as a practical introduction to complex numbers.
As to the listed patent, it moves uncomfortably close to being a patent on mathematics, which isn't permitted. But I wouldn't be surprised to see many outstanding patents that have this hidden property.
Pretty sure in the USA you can patent mathematics if it is an integral part of the realisation of a physical system.* There is a book, "Math You Can't Use", that discusses this.
* not a legal definition, IANAL.
> Pretty sure in the USA you can patent mathematics if it is an integral part of the realisation of a physical system.
Yes, that's true. In that example, you're not patenting mathematics, you're patenting a specific application, which can be patented. In my reading I see that mathematics per se is an abstract intellectual concept, thus not patentable (reference: https://ghbintellect.com/can-you-patent-a-formula/).
There is plenty of case law in modern times where the distinction between an abstract mathematical idea, and an application of that idea, were the issues under discussion.
An obligatory XKCD reference: https://xkcd.com/435/
And IANAL also.