In this post, just before beginning a class on quantum computing at NYU, I predicted that scalable quantum computing ("SQC") is in fact impossible in the physical world.

I was right.

And I can finally articulate why. The full explanation (“Scalable Quantum Computing is Impossible”) is posted here and in the following YouTube video.

Here is the general idea. Let me make a few assumptions:

· A system is not “scalable” in T (where T might represent, for example, total time, number of computation steps, number of qubits, number of gates, etc.) if the probability of success decays exponentially with T. In fact, the whole point of the Threshold Theorem (and fault-tolerant quantum error correction (“FTQEC”) in general) is to show that the probability of success of a quantum circuit can be made arbitrarily close to 100% with “only” a polynomial increase in resources. (See the sketch just after this list.)

· Quantum computing is useless without at least a million or a billion controllably entangled physical qubits, which is among the more optimistic estimates for useful fault-tolerant quantum circuits. (Even “useful” QC isn’t all that useful, being limited to a tiny set of very specific problems. Shor’s Algorithm, perhaps the most famous of all algorithms that are provably faster on a quantum computer than on a classical computer, won’t even be useful if and when it can be implemented, because information encryption technology will simply stop making use of prime factorization!)

o There are lots of counterarguments, but they’re all desperate attempts to save QC. “Quantum annealing” is certainly useful, but it’s not QC. Noisy Intermediate-Scale Quantum (“NISQ”) is merely the hope that we can do something useful with the 50-100 shitty, noisy qubits that we already have. For example, Google’s “quantum supremacy” demonstration did absolutely nothing useful, whether or not it would take a classical computer exponential time to do a similarly useless computation. (See the “Teapot Problem.”)
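To make the first bullet concrete, here is a minimal numerical sketch of the distinction it draws. Without error correction, the success probability (1 - p)^T decays exponentially with T; the Threshold Theorem’s promise is that, if the physical error rate p is below some threshold, concatenated coding suppresses the logical error rate doubly exponentially in the concatenation level, so any target success probability can be reached with only polynomial (per-step polylogarithmic) overhead. All the numbers below (the per-step error rate, the threshold, the gates-per-level blow-up) are assumed purely for illustration; they are not measurements of any real device.

```python
# Minimal numerical sketch of the scalability criterion above (illustrative
# numbers only; nothing here is taken from real hardware or the literature).
P_PHYS = 1e-3   # assumed physical error probability per step
P_TH   = 1e-2   # assumed fault-tolerance threshold (Threshold Theorem)
GADGET = 100    # assumed physical gates per logical gate, per concatenation level

def success_unprotected(T, p=P_PHYS):
    """No error correction: a single error ruins the run, so the
    success probability (1 - p)**T decays exponentially with T."""
    return (1.0 - p) ** T

def ftqec_overhead(T, target=0.99, p=P_PHYS, p_th=P_TH):
    """Threshold Theorem picture: concatenating a code k times suppresses the
    logical error rate to roughly p_th * (p / p_th) ** (2 ** k).  Returns the
    smallest k reaching the target success probability over T steps, plus the
    rough per-step resource blow-up GADGET**k."""
    k = 0
    while (1.0 - p_th * (p / p_th) ** (2 ** k)) ** T < target:
        k += 1
    return k, float(GADGET ** k)

if __name__ == "__main__":
    for T in (10, 10**3, 10**6, 10**9):
        k, blowup = ftqec_overhead(T)
        print(f"T={T:>10}: bare success={success_unprotected(T):.3e}, "
              f"concatenation level k={k}, per-step overhead ~{blowup:.0e}")
```

The point of the sketch is only the shape of the two curves: the unprotected success probability crashes exponentially, while the claimed fault-tolerant overhead grows slowly, provided the theorem’s assumptions hold. Whether those assumptions hold is exactly what the rest of this post disputes.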

Given these assumptions, what do I actually think about the possibility of SQC?

First of all, what reasons do we have to believe that SQC is possible at all? Certainly the thousands of peer-reviewed publications, spanning the fields of theoretical physics, experimental physics, computer science, and mathematics, that endorse SQC, right? Wrong. As I pointed out in my last post, there is an unholy marriage between SQC and the Cult of U, and the heavily one-sided financial interest propping them up is an inherent intellectual conflict of interest. Neither SQC nor FTQEC has ever been experimentally confirmed, and even some of their most vocal advocates are scaling back their enthusiasm. The academic literature is literally full of falsehoods, my favorite one being that Shor’s Algorithm has been implemented on a quantum computer to factor the numbers 15 and 21. (See, e.g., p. 175 of Bernhardt’s book.) It hasn’t.

Second, SQC depends heavily on whether U (the assumption that quantum wave states always evolve linearly or unitarily… i.e., that wave states do not “collapse”) is true. It is *not* true, a point that I have made many, many times (here, here, here, here, here, here, etc.). Technically, useful QC might still be possible even if U is false, as long as we can controllably and reversibly entangle, say, a billion qubits before irreversible collapse happens. But here’s the problem. The largest double-slit interference experiment (“DSIE”) ever done was on an 810-atom molecule. I’ll discuss this more in a moment, but this provides very good reason to think that collapse would happen long before we reached a billion controllably entangled qubits.

Third, the Threshold Theorem and theories of QEC, FTQEC, etc., all depend on a set of assumptions, many of which have been heavily criticized (e.g., by Dyakonov). But not only are some of these assumptions problematic, they may actually be logically inconsistent… i.e., they *can’t* all be true. Alicki shows that the noise models underlying the Threshold Theorem implicitly assume infinitely fast quantum gates, which are of course physically impossible. And Hagar shows that three of the assumptions inherent in TT/FTQEC result in a logical contradiction. Given that FTQEC has never been empirically demonstrated, and that its success depends on theoretical assumptions whose logical consistency is taken for granted by people who are generally bad at logic (which I’ve discussed in various papers (e.g., here and here) and in various blog entries (e.g., here and here)), I’d say their conclusions are likely false.

But here’s the main problem, and why I think that SQC is in fact impossible in the real world:

**Noise sometimes measures, but QC theory assumes it doesn't.**

In QC/QEC theory, noise is modeled as reversible, which means that it is *assumed* not to make permanent measurements. (Fundamentally, a QC needs to be a reversible system. The whole point of QEC is to “move” the entropy of the noise to a heat bath so that the evolution of the original superposition can be reversed. I pointed out here and here that scientifically demonstrating the reversibility of large systems is impossible, because the attempt leads to a logical contradiction.) This assumption is problematic for two huge reasons.

First, measurements are intentionally treated with a double standard in QC/QEC theory. The theory assumes (and needs) measurement at the end of the computation but ignores it *during* the computation. The theory’s noise models literally assume that interactions with the environment that occur *during* the computation are reversible (i.e., not measurements), while interactions with the environment that occur at the *end* of the computation are irreversible measurements, with no logical, mathematical, or scientific justification for the distinction. This is not an oversight: QEC cannot correct irreversible measurements, so proponents of QEC are forced to assume that *unintended* interactions are reversible but *intended* interactions are irreversible. Can Mother Nature really distinguish our intentions?

Second, and more importantly, the history and theory of DSIEs indicate that *noise sometimes measures!* All DSIEs have in fact depended on dispersion of an object’s wave packet both to produce a superposition (e.g., a “cat” state) and to demonstrate interference effects. However, the larger the object, the more time it takes to produce that superposition and the larger the cross section for a decohering interaction with the particles and fields permeating the universe. As a result, the probability of success of a DSIE decays exponentially with the square of the object’s mass (p ~ e^(-m²)), which helps to explain why, despite exponential technological progress, we can’t yet do a DSIE on an object having 1,000 atoms, let alone a million or a billion. What this means is that DSIEs are not scalable, and the fundamental *reason* for this unscalability (a reason that seems equally applicable to SQC) is that noise at least sometimes causes irreversible projective measurements.
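Taking the p ~ e^(-m²) scaling at face value, here is a back-of-the-envelope sketch of what it implies, using atom count as a stand-in for mass. The anchor value (an assumed 50% success probability for the 810-atom experiment) is purely illustrative; only the functional form comes from the argument above.

```python
# Back-of-the-envelope extrapolation of the claimed p ~ exp(-m^2) scaling.
# The 50% anchor at 810 atoms is an assumption made only for illustration.
import math

N_REF = 810      # atoms in the largest reported DSIE
P_REF = 0.5      # assumed success probability at that size (illustrative)

def log10_success(n_atoms):
    """log10 of the success probability under p(N) = P_REF ** ((N / N_REF)**2)."""
    return (n_atoms / N_REF) ** 2 * math.log10(P_REF)

for n in (810, 1_000, 10_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} atoms: log10(p) ~ {log10_success(n):,.1f}")
```

Under this (assumed) curve, a thousand-atom experiment is still feasible, a ten-thousand-atom experiment is already hopeless, and a million- or billion-particle superposition is beyond astronomical.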

This is fatal to the prospect of scalable quantum computing. If a single irreversible measurement (even if such an event is rare) irreparably kills a quantum calculation, then the probability of success decays exponentially with T, which by itself implies that quantum computing is not scalable. But DSIEs demonstrate not only that noise sometimes causes irreversible measurements, but that those measurements happen frequently enough that, despite the very best technology developed over the past century, it is practically impossible to create controllably highly entangled reversible systems larger than a few thousand particles.
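Put numerically: if each qubit suffers an irreversible, uncorrectable measurement with some small probability ε per time step, then the whole computation survives only with probability (1 - ε)^(QT), i.e., exponential decay in qubit-steps no matter how small ε is. The ε values below are assumed purely for illustration; the point is the functional form, not the numbers.

```python
# How many qubit-steps you get before an assumed per-qubit, per-step rate of
# irreversible measurement drags the success probability below 50%.
import math

def survival(qubits, steps, eps):
    """Probability that no qubit suffers an uncorrectable measurement."""
    return (1.0 - eps) ** (qubits * steps)

for eps in (1e-6, 1e-9, 1e-12):
    qt_half = math.log(0.5) / math.log(1.0 - eps)
    print(f"eps={eps:.0e}: ~{qt_half:.2e} qubit-steps before success < 50%")
```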

Quantum computing is neither useful nor scalable.
