Google Quantum AI
quantumai.google
187 points by segasaturn 13 days ago
The fact this prize exists is admitting that no one has figured out a use for quantum computers.
I have heard this mentioned several times in the last decade or so: "The only thing a quantum computer definitively does better than a classical computer is simulating a quantum computer."
Whether this capability is useful is up in the air.
Note that in practice, classical computers are going to be better at factoring numbers for the foreseeable future.
There have been XPRIZE competitions for vehicle efficiency, oil spill technology, more efficient rockets, health sensors, AI systems, genomics, etc.
Whether or not quantum computers have practical applications, the prize itself is not evidence of that.
> There have been XPRIZE competitions for vehicle efficiency, oil spill technology, more efficient rockets, health sensors, AI systems, genomics, etc.
All of which are based on existing technologies that have been delivering for decades if not an entire century (vehicle efficiency). Even something as nebulous as "AI systems" has been around for twenty years in the form of Google's original semantic search capabilities.
This "Quantum AI" prize, however, is a solution in search of a problem.
There are plenty of well-documented uses for quantum computers; the hardware is just too nascent to fully accommodate them. The most powerful quantum computers today still have only just over 1,000 qubits.
I don't think this is totally accurate.
If you have significantly better quantum computers, you can solve realistic problems, yes.
But what's not being spelled out here is that as far as we know classical computers will still totally smoke them unless you allow a large probability of inaccurate results.
And if you are fine with inaccurate results, classical randomized algorithms set a much harder bar to beat.
What is the benchmark, in number of qubits, for something useful for real-world use?
20 million physical qubits to break RSA 2048: https://quantum-journal.org/papers/q-2021-04-15-433/.
Physicist here. It highly depends on a bunch of factors (the type of qubits, the error correcting code, the error rate, the algorithm…), but a ballpark number for practical usefulness is 1 million physical qubits.
Keep in mind that qubit requirements keep tumbling down as people work hard to squeeze out as much as possible from a limited number of qubits.
Assuming what the public knows about is the state of the art, of course, which I doubt is a good assumption to make. I'm sure major governments have been funneling billions for years into secret projects to be the first to be able to break the (non-post-quantum) communications of everyone else.
Einstein did not have GPS in mind when he was developing his theories of relativity.
The theory of relativity does not in any way enable GPS. GPS is subject to (some) relativistic effects, but that is merely a source of bias, which could be corrected for with just an experience-based correction factor even if we did not understand relativity. If relativity did not exist as a physical concept, GPS would be easier, not harder or impossible. (I guess this misconception comes from xkcd in some form?)
A perhaps more relevant example: Einstein did not have the cell phone camera in mind when developing his theory of the photoelectric effect.
Heh. This is an interesting comment. Imagine if we didn't know about relativity - we would have discovered it as an annoyance/weird quirk instead as we ran into it.
Reminds me of that story about the self-evolving chip that was tasked to learn how to classify tones and instead took advantage of specific flaws in its own package.
A more relevant example would be that Einstein did not predict how to make a laser when he discovered the theory of the stimulated emission of radiation (the "SER" in "LASER").
The photoelectric effect had been well known for decades; Einstein just gave a good explanation of behavior that was already known from experiments. It would have been equally easy for the designers of the first video camera vacuum tubes, which were used in early television, to design them based only on the known experimental laws, ignoring Einstein's explanation.
On the other hand, the formulae of stimulated emission of radiation, complementing the previously known phenomena of absorption and spontaneous emission, were something new, published for the first time by Einstein in 1917. They are the most original part of Einstein's work, together with general relativity, but their practical applications are for now immensely more important than the applications of general relativity, which are limited to extremely small corrections in the results of some measurements with very high resolutions.
The inventions of the masers and lasers after WWII would not have been possible without knowing Einstein's theory of radiation.
I'm pretty sure humans would still have known about the speed of light/radio waves being limited to c, which is all you need to develop GPS. Clock rates differing on the GPS satellites would become an issue eventually, though. Relativity does make it easier.
How could he? There were no cell phones.
Yes, that was the point of the parent comment.
Thanks for reaffirming Poe's law. I was amused by how 'cell phone' was taken as a given, when talking about a CCD sensor.
I believe that most, if not all, cell phone cameras have cheaper CMOS sensors, not CCD sensors (which have a lower image noise, but they need a more expensive manufacturing process, less compatible with modern digital logic and more similar to the manufacturing processes used for DRAM).
AFAIK the CCD technology continues to be used only in large-area expensive sensors inside some professional video cameras, in applications like astronomy, microscopy, medical imaging and so on.
Quite true; even full-frame DSLRs have typically used CMOS sensors for some time now.
CCD was the first thing that came to mind as 'charge' is right in the name.
Out of curiosity, I looked up invention dates: the CCD in 1969, CMOS in 1963, and the CMOS image sensor in 1993 (quite a gap). I was playing with DRAM light sensitivity in the lab in the late '80s. I'm guessing CMOS had too much noise to be useful for a long while, or something.
No, it was not taken as a given, it was an example of a very common product that digital image sensors enabled. I could have chosen e.g. digital cinema cameras, but they would not nearly have the same profound effect as cell phone cameras have had on society.
Survivorship bias. There is a lot more science that has not panned out.
I'm not saying quantum computing won't pan out, but for it to do so, some fundamental piece is still missing.
In contrast this effort is trying to imagine and monetize GPS before relativity is discovered.
That's not the point. The point is that a lot of discoveries and inventions wouldn't have happened if it weren't for researching just for curiosity's sake. Research results will often be useless for product development or capitalism in general. However, focusing research on achieving specific goals only might actually take you further away from your goals. You can't focus on something you don't know exists, you have to discover it first.
Maybe, when we have quantum computers, one nerd makes an accidental discovery that enables us to build a room temperature superconductor, and maybe not. But if we don't let people research freely what they're interested in and only things that will pan out, we're going to lose out on a lot of things.
I agree.
I didn't say quantum computing research is useless.
My point is that we are not at the stage where we can offer a small prize and find monetizable uses for it.
Fundamental research requires a lot more funding than this.
> In contrast this effort is trying to imagine and monetize GPS before relativity is discovered.
The theory of relativity was discovered decades before GPS. Similarly, the theory of quantum computing was discovered in the 1990s.
I agree with the sentiment: this is trying to find applications for a technology (a large fault-tolerant quantum computer) that doesn't exist yet. I just think relativity is the wrong comparison. I also don't think the effort is worthless merely because we don't have fault-tolerant quantum computers yet; theory alone can take one very far.
The origins of quantum computing give it a clear use: simulation of many-body systems.
Number factorization and anything else in BQP is also a use for them.
Looking forward to when leetcode problems require BQP complexity analysis
We might never retire, but at least we'll grind out leetcode in our 60s while on adderall, ozempic, lion's mane etc…and won't feel a day over 55!
N-body problems are usually nonlinear.
Quantum everything is linear; how is this distinction overcome?
Also, doesn't solving this problem hint at quantum gravity?
From Feynman's 1982 talk on quantum computing:
"[N]ature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy."
0. https://s2.smu.edu/~mitch/class/5395/papers/feynman-quantum-...
> and if you want to make a simulation of nature, you'd better make it quantum mechanical
That is an absurd claim, if taken by itself. Most of nature relevant to us behaves classically. If you want to do simulations of a building or a car or an earthquake or the climate, modeling it as a quantum system would be absurd.
He's obviously talking about the nonclassical regime, where the fundamental quantum nature of reality matters.
You can simulate quantum mechanics with classical computers pretty well, as long as you stick to the Copenhagen interpretation.
No, even the simulation of pure quantum states scales exponentially with the number of degrees of freedom; that's irrespective of any interpretation or of invoking non-unitary evolution like measurements, just pure simulation of the Schrödinger equation. If you simulate an environment, e.g. to incorporate wave-function collapse or measurement operations, you'll work with a master equation, whose cost also grows linearly with the complexity of the density-matrix simulation.
Feynman's lecture explains why classical computers are terrible at simulating quantum systems.
The basic problem is that the number of states grows exponentially with the size of the system. You very quickly have to start making approximations, and it takes an enormous amount of classical computing power and memory to handle even relatively small systems.
Yes, you have to make approximations and deal with (estimates of) errors.
However, quantum computers also have to deal with noise and errors. So far, that's not very different.
(If we manage to build error-correcting quantum computers, that might change.)
The approximations don't just introduce small errors. To simulate quantum systems classically, you need to make drastic assumptions that fundamentally change the nature of the system.
This is very different from, say, an approximation that adds in a small amount of noise that you can estimate. The approximations in simulating quantum systems classically can radically change the behavior of the system, in ways that you might not understand or be able to easily estimate.
Huh? Your simulation doesn't care about your interpretation. All interpretations of quantum mechanics make the same predictions.
We currently can't even simulate a hydrogen atom.
We can absolutely simulate the hydrogen atom. This paper lists the equations and fundamental constants that allow calculating the hydrogen energy levels with around 13 digits of accuracy: https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.93....
That is not a simulation and those are not fundamental constants.
Because a simulation is too difficult, there are approximate formulae for computing the quantities of interest, like the energy levels of the spectrum of the hydrogen atom.
These approximate formulae include a large number of constants which are given in the paper linked by you and which are adjusted to match the experimental results.
A simulation of the hydrogen atom would start from a much smaller set of constants: the masses of the proton and of the electron, the magnetic moments of the proton and of the electron, the so-called fine structure constant (actually the intensity of the electromagnetic interaction), plus a few fundamental constants that depend on the system of units used. In SI those would be the elementary electric charge (determined by the charge of a coulomb in natural units), the Planck constant (determined by the mass of a kilogram in natural units), and the speed of light in vacuum (determined by the length of a meter in natural units).
The inputs of the formulas for the hydrogen energy levels in the paper are: The Rydberg constant, the fine structure constant, the electron-to-proton mass ratio, the electron-to-muon mass ratio, the Compton wavelength of the electron, and some nuclear properties (charge radius, Friar radius, and nuclear polarizability). All inputs except the nuclear properties are as fundamental as it gets according to our current understanding of physics (note that the Rydberg constant and Compton wavelength are simple combinations of other physical constants). Nuclear physics is dominated by quantum chromodynamics which is not nearly as well developed as QED.
The constants are determined by fitting the theory to the best available measurements (not only in hydrogen). This is exactly what fundamental constants do: They convert unit-less theory expressions into measurable quantities.
We know how to simulate it, but we can't actually do it: those equations require too much computation if you solve them with any known classical algorithm.
This is completely wrong. My laptop can solve the equations in fractions of a second. I believe that with some optimizations it should be trivial to do the calculations on a 1960s mainframe.
That is not true.
You can solve such equations in fractions of a second only for very low precisions, much lower than the precision that can be reached in measurements.
For higher precision in quantum electrodynamics computations, you need to include an exponentially increasing number of terms in the equations, which come from higher order loops that are neglected when doing low precision computations.
When computing the energy levels of the spectrum of a hydrogen atom to the same precision as the experimental results (which far exceeds the precision of FP64 numbers, so you need an extended-precision arithmetic library, not plain hardware instructions), you need either a very long time or a supercomputer.
I am not sure how much faster that can be done today, e.g. by using a GPU cluster, but some years ago it was not unusual for the comparisons between experiments and quantum electrodynamics to take some months (though I presume that the physicists doing the computations were not experts in optimization, so perhaps it would have been possible to accelerate the computations by some factor).
I believe you might be confusing the QED calculations of hydrogen with those of the electron g-factor. Just have a look into the paper I linked (section VII). Most of the QED corrections are given analytically, no computers involved at all. You could in principle calculate this with pen-and-paper (and a good enough table of transcendental functions).
The most accurate hydrogen spectroscopy (of the 1S-2S transition) has reached a relative accuracy of a few parts in 1E15, which is around an order of magnitude above the precision of FP64 numbers.