Scalable Quantum Computing is Impossible

 On the (Im)possibility of Scalable Quantum Computing

or… Scalable Quantum Computing is Impossible

 

Andrew Knight, J.D.

aknight@nyu.edu

www.inavat.com

Entire talk on YouTube: https://youtu.be/steLwWVqZXY

 

Scalability in QC

§  Can you make lots of qubits?  (Cost, size, cooling, etc.)

§  Can you store a superposition state with high fidelity?

§  Can you apply one-qubit gates with high fidelity?

§  Can you entangle with two-qubit gates with high fidelity?

§  Can you adequately entangle sufficiently many logical qubits with high fidelity to do something useful?

§  If you can’t do this, then you can’t do QC.

§  Caveat: Maybe you can still do Noisy Intermediate-Scale Quantum (“NISQ”), but it has very limited (if any) applications. [Preskill 2018]

§  The word “scalable” may be misleading in QC.

 

How QC may be faster
(using Shor’s Algorithm to factor 15)

§  Caveat: Shor’s algorithm (“SA”) has NEVER been implemented in any form.  Only heavily simplified versions have been implemented.  [Beckman]

§  Imagine factoring the number 15, which is 4 bits.  This requires a minimum of 21 computational/logical qubits, whose joint state is a 2^21-dimensional vector in which all amplitudes are initially 0 except one (specifically, the amplitude corresponding to |111100000000000000000>).

§  Then you apply a 2^21 × 2^21 unitary matrix corresponding to Shor’s algorithm, which is really just a (reversible) basis shift of the original vector.  Measuring this basis-shifted vector gives information on its prime factors… although you have to repeat the computation many times to build up a useful probability distribution.

§  Classically, this is a very computationally intense problem because you have to calculate the amplitudes in that 2^21 × 2^21 unitary matrix.  The matrix keeps track of the entanglements between the 21 qubits.

§  For a quantum computer, that matrix can be effectively applied by subjecting those 21 qubits to approximately 4600 gates (each of which is a member of a universal gate set) in the correct order.  Mathematically, that enormous unitary matrix is the product of the matrices corresponding to those 4600 gates.  (Any unitary matrix, including Shor’s algorithm, can be expressed as the product of matrices corresponding to a universal gate set.)

§  QC would be faster because with a classical computer we have to keep track of all those amplitudes, while with QC nature keeps track of them in the form of entanglements.  So any time we manipulate a qubit, the amplitudes corresponding to its correlations with other entangled qubits get instantaneously manipulated.
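To make the vector-and-matrix picture concrete, here is a minimal numpy sketch (my own illustration, not Shor’s algorithm and not drawn from any of the references): the register is a 2^n-entry complex vector, each gate is lifted to a 2^n × 2^n unitary, and the circuit is just the product of those per-gate matrices.

```python
import numpy as np

# Minimal sketch (not Shor's algorithm): an n-qubit register is a 2^n-entry
# complex vector, and a circuit is a product of 2^n x 2^n unitary matrices.
n = 3                                   # 21 qubits would need a 2^21-entry vector
state = np.zeros(2**n, dtype=complex)
state[0b100] = 1.0                      # all amplitudes 0 except one: |100>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
I = np.eye(2)

def lift(gate, target, n):
    """Embed a one-qubit gate acting on qubit `target` into the 2^n space."""
    ops = [gate if q == target else I for q in range(n)]
    U = ops[0]
    for op in ops[1:]:
        U = np.kron(U, op)
    return U

# The circuit's single enormous unitary is the product of per-gate matrices.
circuit = lift(H, 2, n) @ lift(H, 0, n)
state = circuit @ state
print(np.round(state, 3))               # amplitudes now spread across the basis
```

At n = 21 the state vector alone has 2,097,152 complex amplitudes; tracking how every gate mixes them is exactly the classical bookkeeping burden described above.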

 

Environmental Noise

§  The primary problem with QC is environmental noise, i.e., dephasing decoherence.  (Note: the dephasing time T2 is typically shorter than the energy relaxation time T1, which is why dephasing dominates.)

§  Decoherence does not necessarily imply “measurement”… but more later…

§  Entanglement, which is the primary benefit of QC, is a double-edged sword.

§  Because the gates take time, at any given time t during the computation, the qubits are in a complicated entangled state equivalent to the action of some unitary matrix U(t), so that the vector describing the 21 qubits has up to 2^21 nonzero complex amplitudes. 

§  Good news: When a gate provides a controlled/known change to a qubit, many (or even all) of the amplitudes in that massive state vector immediately change, thanks to Mother Nature, far faster than could be computed classically.

§  Bad news: The same thing happens with noise (i.e., if some random/unexpected interaction occurs between the environment and a qubit).  If that noise changes all the amplitudes, it would “crash” the quantum computation.  (A toy numerical illustration follows this list.)

§  Is there a way to enjoy the benefits without the detriments?

§  Quantum Error Correction (“QEC”): If enough of the qubits are decoupled from each other so that a single error never destroys all the information in the amplitudes, and if the key information is coded in a kind of “redundancy” that allows detection and correction of that error, then maybe that “crash” can be avoided.
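Here is the “bad news” in a toy numpy model (my own construction, not taken from the references): a single small, unknown X-rotation on one qubit of a three-qubit cat state disturbs the global 2^n-amplitude vector, reducing its fidelity with the intended state.

```python
import numpy as np

# Toy model of the "bad news": one stray rotation on ONE qubit of an
# entangled register disturbs the entire 2^n-amplitude state vector.
n = 3
cat = np.zeros(2**n, dtype=complex)
cat[0b000] = cat[0b111] = 1 / np.sqrt(2)       # cat state (|000> + |111>)/sqrt(2)

eps = 0.05                                      # small, unknown noise angle
X = np.array([[0, 1], [1, 0]], dtype=complex)
Rx = np.cos(eps) * np.eye(2) - 1j * np.sin(eps) * X   # exp(-i*eps*X)
I = np.eye(2)

noisy = np.kron(np.kron(I, Rx), I) @ cat        # noise hits only the middle qubit

fidelity = abs(np.vdot(cat, noisy))**2
print(f"fidelity after one stray rotation: {fidelity:.4f}")   # cos(eps)^2 < 1
```

QEC’s wager, per the bullet above, is that enough decoupling and redundancy can catch this kind of disturbance before it accumulates.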

 

Threshold Theorem

§  Theory of QEC: encode a single logical qubit (“LQ”) using a system of physical qubits (“PQ”).

§  Without QEC, probability of QC success decreases exponentially with T (number of gates, number of qubits, total time, etc.).

§  Note: Repeating a very noisy quantum computation many times doesn’t solve the problem, as “the required number of attempts [to adequately reduce the probability of never finding a coherent outcome]… is exponential in the length [of the input].” [Unruh]

§  With QEC, the probability of QC success still decreases with T, but much more slowly.

§  Necessary gate accuracy scales as ε ~ (log T)^(-b) (where typically 3 < b < 4), versus ε ~ 1/T without QEC.  [Preskill 1998 p. 31]  (A back-of-envelope comparison follows this list.)

§  But with concatenated codes utilizing QEC, and with error rates of gates and storage below a threshold εth, and under various other assumptions (discussed later), the probability of QC success can be made arbitrarily close to 1. [Aharonov]

§  [Preskill 1998] estimates thresholds εth for gate and storage errors at around 10^-4.
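A back-of-envelope Python sketch of these scalings (proportionality constants are omitted, and the ~600-billion-gate count is [Beckman]’s estimate quoted on the next slide, so treat the outputs as scalings rather than engineering requirements):

```python
import math

# Required gate accuracy for a T-gate computation, per the scalings above.
T = 600e9          # ~ gates for Shor on a 2000-bit number [Beckman]
b = 3.5            # typical exponent, 3 < b < 4 [Preskill 1998]

eps_no_qec   = 1 / T                    # without QEC: error per gate ~ 1/T
eps_with_qec = math.log(T) ** (-b)      # with QEC: ~ (log T)^(-b)
print(f"without QEC: eps ~ {eps_no_qec:.0e}")    # ~ 2e-12
print(f"with QEC:    eps ~ {eps_with_qec:.0e}")  # ~ 1e-05, vastly less demanding

# Concatenation: each level roughly maps p -> p_th * (p/p_th)^2, so k levels
# give p_th * (p/p_th)^(2^k), i.e., double-exponential suppression below threshold.
p, p_th = 1e-6, 1e-4                    # rates assumed in [Preskill 1998]
for k in range(4):
    print(f"{k} levels of concatenation: logical error ~ {p_th*(p/p_th)**2**k:.0e}")
```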

 

Useful QC (?)

§  How many LQs do we need to do something useful?

§  Derivative pricing models: around 10K LQs subject to over 3 billion gates.  [Chakrabarti]

§  Shor’s algorithm for a 2000-bit (roughly 600 decimal digit) number: >10K LQs subject to over 600 billion gates.  [Beckman]

§  Estimates for PQ/LQ in QEC for implementing SA:

§  In general (concatenated codes): assuming error rates (for both storage and gates) around 10^-6, need 3 levels of concatenation (7^3 = 343 PQs per LQ) plus lots of ancillas.  [Preskill 1998] estimates 5 million PQs to implement SA for a 2000-bit number.

§  Using surface codes: assuming a PQ error rate 1/10 of the threshold rate, need 14,500 PQs per LQ.  But you also need lots of logical ancillas at 800,000 PQs each.  [Fowler] estimates 1 billion PQs to implement SA for a 2000-bit number.

§  Note that “quantum supremacy” [Arute], like the “teapot problem,” does not solve any useful problem.  Also, quantum annealing, which is useful and possible, is not really QC.

§  Even if a quantum algorithm might be “useful,” beware of caveats.  [Aaronson 2015]

§  Noisy Intermediate-Scale Quantum (“NISQ”)… [Preskill 2018]: “Perhaps NISQ will allow us to speed up the time to solution for problems of broad interest in the near future, but we don’t know yet whether that will happen.” 

§  Therefore, everything depends on QEC…

 

Quantum Error Correction

§  The general idea is to transfer the entropy of noise to ancilla qubits. 

§  Encode single LQ in several PQs (and each of those PQs is encoded in several more PQs in a first layer of concatenation, and so forth…).

§  Then regularly create new ancilla qubits, correlate them to the LQs, apply syndrome extraction operations, measure the ancillas (which provides information on which PQs were corrupted without providing information on the state of those qubits), then correct the errors with more gates.  (See the syndrome extraction figure in [Steane].)


§  Important: You cannot encode, error correct, and decode between gates, because the encoding/correction/decoding operations themselves require lots of gates! 

§  Therefore, computations/gates must be done on encoded qubits.  This is called executing gates “transversally,” and is theoretically possible with particular gates (which form a universal set).  [Shor]  This is the basis of “fault-tolerant” QEC (“FTQEC”).  See [Preskill 1998] for descriptions of fault-tolerant NOT, Hadamard, Phase, CNOT, and Toffoli gates.
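A minimal illustration of a transversal gate, using the three-qubit repetition code from the next slide rather than the seven-qubit codes of [Shor] and [Preskill 1998] (my own toy example): the logical X is just X applied to each PQ separately, so executing the gate never couples the PQs within a block.

```python
import numpy as np

# Three-qubit repetition code: logical |0> = |000>, logical |1> = |111>.
# The logical X gate is transversal: X applied to each physical qubit.
X = np.array([[0, 1], [1, 0]], dtype=complex)
logical_X = np.kron(np.kron(X, X), X)

psi = np.zeros(8, dtype=complex)
psi[0b000], psi[0b111] = 0.6, 0.8          # a|000> + b|111>
print(np.round(logical_X @ psi, 3))        # -> b|000> + a|111>: a logical flip,
                                           # with no interaction between the PQs
```

Because the gate touches each PQ independently, a fault in one PQ cannot propagate to the others during the gate, which is the core idea of fault tolerance.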

 

QEC – Digitization of Noise

§  Encode a qubit in state a|0> + b|1> as the LQ “cat state” |Ψ> = a|000> + b|111>

§  Assume a small random noise rotation of qubit 2 about the X axis, which (to first order in ε_2) yields

E_2 |Ψ> = [a|000> + b|111>] – iε_2 [a|010> + b|101>]

§  Add three ancilla qubits in state |000>, then apply a syndrome extraction operator S that leaves the original three qubits intact but changes the ancillas according to the location of the parity error. 

§  E.g., with the ancilla bits recording the parities of qubit pairs (1,2), (1,3), and (2,3): S|111>|000> = |111>|000>, S|110>|000> = |110>|011>, etc.

§  S E_2 |Ψ> = [a|000> + b|111>] |000> – iε_2 [a|010> + b|101>] |101>

§  Measure the ancillas.  Measurement of |000> (most likely) projects |Ψ> to a|000> + b|111>, while measurement of |101> (with likelihood ~ε_2^2) projects |Ψ> to a|010> + b|101>, which we can then correct by applying the gate corresponding to an X rotation of qubit 2.  (A numerical sketch of this procedure follows this list.)

§  This example was simplified for bit flips (X) but also applies to phase flips (Z) and both (Y).  Need 7 PQs per LQ per layer of concatenation.  [Steane]

§  Note: If qubit 2 had been measured (by noise, environment, scientist, etc.) in the {|0>,|1>} basis, then the LQ would have collapsed to |000> (with probability |a|^2) or |111> (with probability |b|^2).  QEC cannot correct this kind of error.

§  The error above adds possible eigenstates, and the purpose of QEC is to eliminate the extra eigenstates through measurement/collapse of the ancillas (and to undo the error if the ancillas collapse to the “wrong” state).  However, a measurement irreversibly reduces the number of eigenstates.
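Here is a numpy sketch of the whole procedure on this slide (a simulation I wrote to check the algebra, using the pairwise-parity convention stated above): prepare the cat state, apply a small X-rotation to the middle qubit, measure the parity checks, and correct.

```python
import numpy as np

rng = np.random.default_rng(0)

# Logical "cat" state a|000> + b|111> on three data qubits (qubit 1 = leftmost).
a, b = 0.6, 0.8
psi = np.zeros(8, dtype=complex)
psi[0b000], psi[0b111] = a, b

# Small stray rotation of the middle qubit (qubit 2) about X: exp(-i*eps*X).
eps = 0.1
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2)
Rx = np.cos(eps) * I2 - 1j * np.sin(eps) * X
psi = np.kron(np.kron(I2, Rx), I2) @ psi

def parity(i, j, index):
    """Parity of data qubits i and j (1-based) in a 3-bit basis-state index."""
    bit = lambda q: (index >> (3 - q)) & 1
    return bit(i) ^ bit(j)

# Syndrome extraction: measuring the parities of pairs (1,2),(1,3),(2,3) is a
# projective measurement grouping basis states by syndrome.  Pick an outcome
# with the Born rule, project, renormalize.
syn = [(parity(1, 2, k), parity(1, 3, k), parity(2, 3, k)) for k in range(8)]
outcomes = sorted(set(syn))
probs = [sum(abs(psi[k])**2 for k in range(8) if syn[k] == s) for s in outcomes]
s = outcomes[rng.choice(len(outcomes), p=probs)]
psi = np.array([psi[k] if syn[k] == s else 0 for k in range(8)])
psi /= np.linalg.norm(psi)
print("measured syndrome:", s)   # (0,0,0) w.p. ~cos(eps)^2; (1,0,1) w.p. ~sin(eps)^2

# Syndrome (1,0,1) pinpoints qubit 2, so an X there undoes the error
# (up to an irrelevant global phase).
if s == (1, 0, 1):
    psi = np.kron(np.kron(I2, X), I2) @ psi

target = np.zeros(8, dtype=complex)
target[0b000], target[0b111] = a, b
print("fidelity with original cat state:", round(abs(np.vdot(target, psi))**2, 6))
```

If instead the environment had *measured* the middle qubit in the {|0>,|1>} basis (the case in the “Note” bullet above), no syndrome outcome could restore the superposition: the a and b amplitudes would already be gone.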

 

Assumptions of QEC/TT

§  Some of the assumptions of the Threshold Theorem [Dyakonov 2020] and QEC [Preskill 1998]:

§  Cat states can be verifiably prepared at the necessary level of concatenation. 

§  E.g., three levels requires 343 PQs in state a|0000…000000> + b|1111…111111>

§  Unlimited on-demand fresh ancilla qubits

§  Maximum parallelism to minimize storage errors

§  Gates can act on any pair of qubits

§  Errors are random (not systematic, common-cause, etc.)

§  Errors are uncorrelated.

§  “Thus when we say that the probability of error per qubit is (for example) ε ~ 10^-5, we actually mean that, given two specified qubits, the probability that errors afflict both is ε^2 ~ 10^-10. This is a very strong assumption.”  [Preskill 1998, emphasis added.]

 

Problems with QEC

§  Purely theoretical/mathematical.  Never achieved. (But see [Ofek], [Dyakonov 2020])

§  Like the theoretical reversibility of scrambling an egg, scalable QC may rest on mathematical assumptions that are at odds with our actual observations of the world.

§  Logical inconsistencies between mathematical theorems and physical reality:

§  Threshold Theorem assumes physically unrealistic infinitely fast gates. [Alicki]

§  Three assumptions of FTQEC are contradictory: a) uncorrelated errors; b) gates can be executed in time scales of Rabi frequency; and c) unlimited on-demand fresh ancilla bits.  [Hagar]

§  Hype, exaggeration, fraud? 

§  “QEC … typically incurring 10-50 physical qubits to encode one fault-tolerant qubit.”  [Tannu]

§  “It’s genuinely gotten harder to draw the line between defensible optimism and exaggerations verging on fraud.”  [Aaronson 2021]

§  “[T]he era of fault-tolerant quantum computing may still be rather distant.”  [Preskill 2018]

§  Every error code is limited to some set of “correctable” errors, and no code can address all errors.

§  [Waintal] suggests one type of error (“silent stabilizer failure,” in which a stabilizer is not measured for several clock cycles) that puts a lower limit on the precision η_L of an LQ in a surface code: “[I]t is sufficient that a single stabilizer failure occurs for the duration of one logical operation to produce an irreversible logical failure, irrespectively of [the number of physical qubits].” 

§  The quantum computation will crash unless the probability p_s of this event satisfies p_s < 10^-20, yet p_s could actually be 15 orders of magnitude higher.

§  [Dyakonov 2006] and [Dyakonov 2020] discuss several other problems.

 

But the Main Problem…

§  The word “measurement” is treated with a double standard in QC/QEC theory.

§  QC theory assumes (and needs) measurement at the end of computation but ignores it during the computation.

§  QC theory assumes that noise does not measure, but scientists do. That can’t be true, because Mother Nature does not distinguish between noise and scientists.  Sometimes, noise measures.  (More later…)

§  Assumption that errors result in random and uncorrelated phase shifts (or, equivalently through Hadamard gates, bit flips), or correlations that decay in space/time.

§  This implies that the environment retains no “memory” of these interactions; e.g., a photon bounces off a qubit and correlates to it, but the interaction itself provides no permanent information to the environment as to the qubit’s state.  See, e.g., [Hagar].

§  Implies that the environment does not make any permanent measurement.  (If you perfectly reverse a “measurement” so that there is no lasting evidence of a measurement, then there was actually no measurement.  See, e.g., [Żukowski].) 

§  Even “projective” errors are treated as equivalent to random phase shifts; i.e., they are treated as inherently reversible, in which case no permanent measurement or memory gets embedded in the environment.  [Steane 7.1.2] 

§  This assumption guarantees that errors are reversible noise, which is convenient for QC theory because QEC cannot “fix” an irreversible measurement.

§  The noise models literally assume that interactions with the environment that occur during the computation are reversible (i.e., not measurements), while interactions with the environment that occur at the end of the computation are irreversible measurements, with no logical, mathematical, or scientific justification.

§  [Hagar p. 524] points out that we are treating entanglement with a double standard by assuming that error correlations decay but inter-qubit correlations don’t: “[I]f one is allowed to cheat just once in quantum mechanics, one can indeed do miracles.”

 

A Different Approach…

§  Double-Slit Interference Experiments (“DSIE”):

§  Demonstrate quantum effects in objects (and photons, etc.)

§  Need to create “cat” state over position eigenstates

§  Need to maintain that state long enough to show interference effects

§  Every DSIE ever performed on matter (i.e., fermions) depends on dispersion of the quantum wave packet via quantum uncertainty: Δx(mΔv) ≥ ℏ/2

§  Problems:

§  Objects and fields throughout the universe are constantly decohering superpositions.

§  The time needed to create a particular superposition increases with the number/mass of (entangled) particles… AND

§  The time needed to decohere a superposition decreases with the number/mass of (entangled) particles.

§  Therefore, since the preparation time grows with m and the decoherence rate also grows with m, the probability p of a molecule of mass m surviving long enough to create the cat state and demonstrate interference scales roughly as p ~ e^(-m^2).  (A rough numerical sketch follows this list.)

§  This makes it increasingly hard, at an increasing rate, to do DSIEs on larger objects. [Knight 2020]
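A rough Python sketch of the preparation-time half of this argument (the masses are standard values except the dust-grain mass, which is my illustrative guess; the spreading formula is the standard free Gaussian wavepacket result):

```python
hbar = 1.054e-34  # J*s
amu = 1.66e-27    # kg

# Free Gaussian wavepacket: sigma(t) = sigma0*sqrt(1 + (hbar*t/(2*m*sigma0^2))^2),
# so the time to spread by any fixed factor grows linearly with the mass m.
def time_to_spread(m, sigma0=1e-9):
    """Time for the width to reach sqrt(2)*sigma0, starting from sigma0 = 1 nm."""
    return 2 * m * sigma0**2 / hbar

cases = {
    "electron": 9.11e-31,
    "810-atom molecule, ~1e4 amu [Eibenberger]": 1e4 * amu,
    "small dust grain, ~1e-15 kg (illustrative)": 1e-15,
}
for name, m in cases.items():
    print(f"{name:45s} t ~ {time_to_spread(m):.0e} s")
# Dispersion/preparation time grows with m, while (per [Tegmark]) the
# decoherence time shrinks with m; hence the p ~ exp(-m^2) estimate above.
```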

 

DSIE Size Limitations

§  How much can an object disperse?  Coherence lengths l_c are estimated in [Tegmark]; the more massive the object, the shorter its coherence length.

§  Not technologically feasible to do DSIE on dust particle.

§  In the past century of experiments, the largest object subjected to a DSIE was an 810-atom molecule (over 10,000 amu). [Eibenberger]

§  If p ~ e^(-m^2), how feasible/expensive will it be to do a DSIE on an object with 10^6 – 10^9 fermions?

§  Therefore, DSIEs are not “scalable.”

 

DSIE Parallels to QC

§  “Position qubit”: Position of a fermion is used as the basis of a “position qubit,” so that its state is a superposition of position eigenstates |0> and |1> corresponding to semiclassical localizations separated by some distance d.

§  It will be at least as hard to create a cat state of 10^6 – 10^9 PQs (or a highly entangled set of 10K LQs, each encoded by 10K+ PQs in cat states) as to do a DSIE on an object with 10^6 – 10^9 fermions… but probably much, much harder!

§  Nevertheless, imagine you’ve somehow managed to create a cat state of N fermions. 

§  Note: We assume that, unlike in the case of a molecule, the degrees of freedom of the N fermions are controllable; otherwise we could not do the quantum computation/processing.  Already this makes creating the cat state much harder than for the molecule.

§  Some (noise) particle having coherence width w gets absorbed by qubit K, causing its trajectory to change.

§  If w > d, then no information gets transmitted about the qubit’s state.  This is the kind of error that could potentially be addressed by QEC.

§  If w < d, then the particle’s absorption by qubit K distinguishes the qubit’s state by embedding its position information into the environment.  Any and all other measurements of the other qubits in the cat state will perfectly correlate to the position of qubit K, which means that the cat state will be irreparably destroyed.  QEC cannot fix this error (an unintentional measurement) for the same reason that it cannot restore a quantum state after an intentional measurement at the end of a computation.  (A toy overlap calculation follows this list.)

§  Because this is the same mechanism that afflicts DSIEs, there is a practical/cost limit to the number of entangled “position qubits”; a QC based on “position qubits” is not scalable (and probably not even useful); and QEC will not work on it.
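The w-versus-d dichotomy can be made quantitative with a toy Gaussian model (my own construction, not from the references): treat the two environment records |E_0> and |E_1> as Gaussian wavepackets of width w whose centers are separated by d; the surviving interference visibility scales with the overlap <E_0|E_1> = e^(-d^2/(8w^2)).

```python
import numpy as np

# Toy which-path model: after the noise particle interacts with qubit K, the
# environment holds |E_0> or |E_1> depending on the qubit's position.  For
# normalized Gaussian wavepackets of width w with centers separated by d,
# the overlap (and hence the surviving interference visibility) is:
def visibility(w, d=1.0):
    return np.exp(-d**2 / (8 * w**2))

for w in [10.0, 1.0, 0.1]:   # coherence width in units of the separation d
    print(f"w/d = {w:4.1f}  ->  remaining visibility ~ {visibility(w):.3g}")
# w >> d: overlap ~ 1: no usable which-path record (QEC-correctable noise)
# w << d: overlap ~ 0: a permanent position record, i.e., an irreversible
#                      measurement that no error-correcting code can undo
```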

 

Conclusions

§  QC/QEC theory applies a double standard to “measurement.”  Interactions with the qubits at the end of the computation count as “measurements” that irreversibly project onto the {|0>,|1>} basis; however, interactions during the computation do not.  Instead, noise is generally assumed/modeled as random, uncorrelated, and providing no (permanent) information to the environment regarding a qubit’s state.

§  QC, like DSIEs, depends on the demonstration of quantum interference effects. DSIEs demonstrate that sometimes noise makes actual (permanent, irreversible) measurements; this kind of noise is what makes DSIEs non-scalable. 

§  The failure of QC theory to adequately address this kind of irreversible noise makes it suspect. To the extent that this mechanism is fundamental to the quantum world (as opposed to a quirk of DSIEs), the combination of entanglement with noisy/unintended projective measurements may similarly limit the power of QC.

§  The scalability of QC depends on creating systems of larger and larger size that are: a) highly entangled and b) reversible (until the final intentional measurement).  DSIEs are not scalable in practice, even if the assumption of universality of quantum mechanics implies that DSIEs are scalable in principle (but see [Knight 2021]).  If there is something fundamental about the physical world that makes it practically impossible to create highly entangled reversible systems larger than a few thousand particles, and if we can’t create a useful QC with fewer than around a million PQs, then useful QC is effectively impossible.

§  The example of the “position qubit”-based QC is suggestive.  It is neither scalable nor subject to QEC.  A couple of possibilities:

§  It’s a bad example of a qubit; or

§  It highlights that unintended projective measurements by objects and fields ubiquitous in the universe, which present a fundamental scalability problem in DSIEs, also present a fundamental scalability problem in quantum computing.

 

References

§  Aaronson, S., 2015. Read the fine print. Nature Physics, 11(4), pp. 291-293.

§  Aaronson, S., 2021. QC ethics and hype: the call is coming from inside the house. https://www.scottaaronson.com/blog/?p=5387

§  Aharonov, D. and Ben-Or, M., 2008. Fault-tolerant quantum computation with constant error rate. SIAM Journal on Computing.

§  Alicki, R., 2006. Quantum error correction fails for Hamiltonian models. Fluctuation and Noise Letters, 6(3), pp. C23-C28.

§  Arute, F., Arya, K., Babbush, R., Bacon, D., Bardin, J.C., Barends, R., Biswas, R., Boixo, S., Brandao, F.G., Buell, D.A. and Burkett, B., 2019. Quantum supremacy using a programmable superconducting processor. Nature, 574(7779), pp. 505-510.

§  Beckman, D., Chari, A.N., Devabhaktuni, S. and Preskill, J., 1996. Efficient networks for quantum factoring. Physical Review A, 54(2), p. 1034.

§  Chakrabarti, S., Krishnakumar, R., Mazzola, G., Stamatopoulos, N., Woerner, S. and Zeng, W.J., 2020. A threshold for quantum advantage in derivative pricing. arXiv preprint arXiv:2012.03819.

§  Dyakonov, M.I., 2006. Is fault-tolerant quantum computation really possible? In Future Trends in Microelectronics: Up the Nano Creek, p. 4.

§  Dyakonov, M.I., 2020. Will we ever have a quantum computer? Springer.

§  Eibenberger, S., Gerlich, S., Arndt, M., Mayor, M. and Tüxen, J., 2013. Matter-wave interference of particles selected from a molecular library with masses exceeding 10000 amu. Physical Chemistry Chemical Physics, 15(35), pp. 14696-14700.

§  Fowler, A.G., Mariantoni, M., Martinis, J.M. and Cleland, A.N., 2012. Surface codes: Towards practical large-scale quantum computation. Physical Review A, 86(3), p. 032324.

§  Hagar, A., 2009. Active Fault-Tolerant Quantum Error Correction: The Curse of the Open System. Philosophy of Science, 76(4), pp. 506-535.

§  Knight, A., 2020. Macroscopic Quantum Superpositions Cannot Be Measured, Even in Principle. https://philarchive.org/archive/KNIMQS

§  Knight, A., 2021. The Invalid Inference of Universality in Quantum Mechanics. https://philarchive.org/archive/KNITII

§  Ofek, N., Petrenko, A., Heeres, R., Reinhold, P., Leghtas, Z., Vlastakis, B., Liu, Y., Frunzio, L., Girvin, S.M., Jiang, L. and Mirrahimi, M., 2016. Extending the lifetime of a quantum bit with error correction in superconducting circuits. Nature, 536(7617), pp. 441-445.

§  Preskill, J., 1998. Fault-tolerant quantum computation. In Introduction to Quantum Computation and Information (pp. 213-269).

§  Preskill, J., 2018. Quantum Computing in the NISQ era and beyond. Quantum, 2, p. 79.

§  Shor, P.W., 1996. Fault-tolerant quantum computation. In Proceedings of the 37th Conference on Foundations of Computer Science (pp. 56-65). IEEE.

§  Steane, A.M., 2007. A tutorial on quantum error correction. In Proceedings of the International School of Physics “Enrico Fermi” (Vol. 162, p. 1). IOS Press.

§  Tannu, S.S. and Qureshi, M.K., 2019. Not all qubits are created equal: a case for variability-aware policies for NISQ-era quantum computers. In Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems (pp. 987-999).

§  Tegmark, M., 1993. Apparent wave function collapse caused by scattering. Foundations of Physics Letters, 6(6), pp. 571-590.

§  Unruh, W.G., 1995. Maintaining coherence in quantum computers. Physical Review A, 51(2), p. 992.

§  Waintal, X., 2019. What determines the ultimate precision of a quantum computer? Physical Review A, 99(4), p. 042318.

§  Żukowski, M. and Markiewicz, M., 2021. Physics and Metaphysics of Wigner’s Friends: Even Performed Premeasurements Have No Results. Physical Review Letters, 126(13), p. 130402.

 
