Sunday, November 7, 2021

On the (Im)possibility of Scalable Quantum Computing

I just finished a paper entitled “On the (Im)possibility of Scalable Quantum Computing,” the expanded article version of the YouTube presentation in this post.  I will submit it to the arXiv but fully expect it to be rejected, like some of my other papers, on the basis that it questions a fashionable religion within the physics community.  While this paper does not specifically reject the Cult of U, it does argue that the multi-billion-dollar quantum computing industry is founded on a physical impossibility.

The paper can be accessed as a preprint here, and here is the abstract:

The potential for scalable quantum computing depends on the viability of fault tolerance and quantum error correction, by which the entropy of environmental noise is removed during a quantum computation to maintain the physical reversibility of the computer’s logical qubits. However, the theory underlying quantum error correction applies a linguistic double standard to the words “noise” and “measurement” by treating environmental interactions during a quantum computation as inherently reversible, and environmental interactions at the end of a quantum computation as irreversible measurements. Specifically, quantum error correction theory models noise as interactions that are uncorrelated, or that result in correlations that decay in space and/or time, thus embedding no permanent information in the environment. I challenge this assumption both on logical grounds and by discussing a hypothetical quantum computer based on “position qubits.” The technological difficulties of producing a useful scalable position-qubit quantum computer parallel the overwhelming difficulties in performing a double-slit interference experiment on an object comprising a million to a billion fermions.
