World's First Proof that Consciousness is Nonlocal


Wednesday, March 9, 2022

All Contradictions Are False

Today I received a great question from a reader that was worth posting here.  First, he references my paper on the impossibility of scalable quantum computing and says:

There seems to be nowhere for anyone to discuss it. Academia is sealed off to the public nowadays. 

Agreed.  That’s because, as I discussed in this post, there is an unholy marriage between the prophets of quantum computing and the Cult of U (i.e., the fanatical belief that quantum wave states always evolve linearly).  After attempting to submit several papers questioning the assumption of U to the arXiv, I assumed I had been blacklisted, until the unexpected happened.

Then, he asks a key question:

We hear (ad nauseam) that the spin of an electron can be both 'up' and 'down' simultaneously, but what does that actually mean?  It may be true in ket space, but it cannot be true in actual space.

First, let me point out that this should be a really stupid question.  No intelligent person should ever have to ask, “What does it mean if X is true AND X is false?”  Logically, that looks like X ∧ ¬X, which any logician will say is a contradiction and therefore a false statement.

But here’s the thing… the reader’s question is a perfectly reasonable and rational question in the physics world, because we are constantly bombarded by characterizations of quantum superpositions that are completely nonsensical.  For example, Schrodinger’s Cat is constantly described by physicists as “a cat that is both dead and alive simultaneously.”  But this is a contradiction and is necessarily false. 

This is the kind of logical error that physicists routinely make.  I have tried to point out such errors in papers like this and this.

If an object is in a quantum state described by |A> + |B>, it is simply not the case that the object is in both state |A> and state |B> simultaneously.  In fact, none of the following statements is true:

·       The object is in state |A>.

·       The object is in state |B>.

·       The object is not in state |A>.

·       The object is not in state |B>.

Indeed, there simply is no fact about whether the object is in state |A> or |B>.
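
To make this concrete, here is a minimal numerical sketch in Python (a toy two-dimensional state space; the basis vectors and amplitudes are purely illustrative).  It shows that the formalism assigns probabilities to measurement outcomes, not truth values to the four statements above:

import numpy as np

# Toy two-dimensional state space with orthonormal basis states |A> and |B>.
A = np.array([1.0, 0.0])
B = np.array([0.0, 1.0])

# Normalized superposition (|A> + |B>) / sqrt(2).
psi = (A + B) / np.sqrt(2)

# Born-rule probabilities of finding the object in |A> or |B> upon measurement.
p_A = abs(np.vdot(A, psi)) ** 2   # 0.5
p_B = abs(np.vdot(B, psi)) ** 2   # 0.5
print(p_A, p_B)

# Neither outcome gets probability 1 (or 0), so the formalism supplies no
# pre-measurement fact about which state the object "is in"; it only assigns
# probabilities to what a measurement would find.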

In other words, dear reader, you have been misled by physicists who either don't understand basic logic or are intentionally trying to deceive the world with bad characterizations of quantum mechanics, because that is what is necessary to keep the investment money flowing into quantum computing.

By the way, infinity itself is also a contradiction.  As I discuss in this post, atheist physicists like to posit infinitely many universes to account for the existence of something that has a zero probability.  But all contradictions are false, so infinity cannot be used to imply anything true.

Sunday, November 7, 2021

On the (Im)possibility of Scalable Quantum Computing

I just finished a paper entitled, “On the (Im)possibility of Scalable Quantum Computing,” which is the expanded article version of the YouTube presentation in this post.  I will submit it to the arXiv but fully expect it to be rejected, like some of my other papers, on the basis that it questions a fashionable religion within the physics community.  While this paper does not specifically reject the Cult of U, it does argue that the multi-billion-dollar quantum computing industry is founded on a physical impossibility.

The paper can be accessed as a preprint here, and here is the abstract:

The potential for scalable quantum computing depends on the viability of fault tolerance and quantum error correction, by which the entropy of environmental noise is removed during a quantum computation to maintain the physical reversibility of the computer’s logical qubits. However, the theory underlying quantum error correction applies a linguistic double standard to the words “noise” and “measurement” by treating environmental interactions during a quantum computation as inherently reversible, and environmental interactions at the end of a quantum computation as irreversible measurements. Specifically, quantum error correction theory models noise as interactions that are uncorrelated or that result in correlations that decay in space and/or time, thus embedding no permanent information in the environment. I challenge this assumption both on logical grounds and by discussing a hypothetical quantum computer based on “position qubits.” The technological difficulties of producing a useful scalable position-qubit quantum computer parallel the overwhelming difficulties in performing a double-slit interference experiment on an object comprising a million to a billion fermions.

Thursday, May 20, 2021

Quantum Computing is 99% Bullshit

 

In this post, just before beginning a class on quantum computing at NYU, I predicted that scalable quantum computing ("SQC") is in fact impossible in the physical world.

I was right.

And I can finally articulate why.  The full explanation (“Scalable Quantum Computing is Impossible”) is posted here and in the following YouTube video.

Here is the general idea.  Let me make a few assumptions:

·       A system is not “scalable” in T (where T might represent, for example, total time, number of computation steps, number of qubits, number of gates, etc.) if the probability of success decays exponentially with T.  In fact, the whole point of the Threshold Theorem (and fault-tolerant quantum error correction (“FTQEC”) in general) is to show that the probability of success of a quantum circuit can be made arbitrarily close to 100% with “only” a polynomial increase in resources.  (A toy contrast between these two scaling behaviors appears just after this list.)

·       Quantum computing is useless without at least a million to a billion controllably entangled physical qubits, which is among the more optimistic estimates for useful fault-tolerant quantum circuits.  (Even "useful" QC isn’t all that useful, limited to a tiny set of very specific problems.  Shor’s Algorithm, perhaps the most famous of all algorithms that are provably faster on a quantum computer than a classical computer, won’t even be useful if and when it can be implemented because information encryption technology will simply stop making use of prime factorization!)

o   There are lots of counterarguments, but they’re all desperate attempts to save QC.  “Quantum annealing” is certainly useful, but it’s not QC.  Noisy Intermediate-Scale Quantum (“NISQ”) is merely the hope that we can do something useful with the 50-100 shitty, noisy qubits that we already have.  For example, Google’s “quantum supremacy” demonstration did absolutely nothing useful, whether or not it would take a classical computer exponential time to do a similarly useless computation.  (See the “Teapot Problem.”)
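
To illustrate the two scaling behaviors in the first bullet above, here is a toy Python calculation (the per-step error rate q is purely illustrative, and the closing comment merely restates the Threshold Theorem’s standard claim rather than endorsing it):

# Toy model: if each of T steps independently fails with probability q and a
# single failure ruins the computation, success decays exponentially in T.
q = 1e-3                      # illustrative per-step error probability
for T in (10, 100, 1_000, 10_000, 100_000):
    p_success = (1 - q) ** T
    print(f"T = {T:>7}: success probability without error correction ~ {p_success:.3e}")

# The Threshold Theorem's claim (assuming q is below the threshold and its
# noise model holds) is that the failure rate can instead be pushed below any
# target epsilon with only a polylogarithmic overhead in extra qubits and
# gates, roughly O(polylog(T / epsilon)) -- which is what "scalable" means in
# the first bullet above.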

Given these assumptions, what do I actually think about the possibility of SQC?

First of all, what reasons do we have to believe that SQC is possible at all?  Certainly the thousands of peer-reviewed publications, spanning the fields of theoretical physics, experimental physics, computer science, and mathematics, that endorse SQC, right?  Wrong.  As I pointed out in my last post, there is an unholy marriage between SQC and the Cult of U, and the heavily one-sided financial interest propping them up is an inherent intellectual conflict of interest.  Neither SQC nor FTQEC has ever been experimentally confirmed, and even some of their most vocal advocates are scaling back their enthusiasm.  The academic literature is literally full of falsehoods, my favorite one being that Shor’s Algorithm has been implemented on a quantum computer to factor the numbers 15 and 21.  (See, e.g., p. 175 of Bernhardt’s book.)  It hasn’t. 

Second, SQC depends heavily on whether U (the assumption that quantum wave states always evolve linearly or unitarily… i.e., that wave states do not “collapse”) is true.  It is not true, a point that I have made many, many times (here, here, here, here, here, here, etc.).  Technically, useful QC might still be possible even if U is false, as long as we can controllably and reversibly entangle, say, a billion qubits before irreversible collapse happens.  But here’s the problem.  The largest double-slit interference experiment (“DSIE”) ever done was on an 810-atom molecule.  I’ll discuss this more in a moment, but this provides very good reason to think that collapse would happen long before we reached a billion controllably entangled qubits.

Third, the Threshold Theorem and theories of QEC, FTQEC, etc., all depend on a set of assumptions, many of which have been heavily criticized (e.g., Dyakonov).  But not only are some of these assumptions problematic, they may actually be logically inconsistent… i.e., they can’t all be true.  Alicki shows that noise models assumed by the Threshold Theorem assume infinitely fast quantum gates, which of course are physically impossible.  And Hagar shows that three of the assumptions inherent in TT/FTQEC result in a logical contradiction.  Given that FTQEC has never been empirically demonstrated, and that its success depends on theoretical assumptions whose logical consistency is assumed by people who are generally bad at logic (which I’ve discussed in various papers (e.g., here and here) and in various blog entries (e.g., here and here)), I’d say their conclusions are likely false.

But here’s the main problem – and why I think that SQC is in fact impossible in the real world:

Noise sometimes measures, but QC theory assumes it doesn't.

In QC/QEC theory, noise is modeled as reversible, which means that it is assumed not to make permanent measurements.  (Fundamentally, a QC needs to be a reversible system.  The whole point of QEC is to “move” the entropy of the noise to a heat bath so that the evolution of the original superposition can be reversed.  I pointed out here and here that scientifically demonstrating the reversibility of large systems is impossible because the demonstration itself entails a logical contradiction.)  This assumption is problematic for two huge reasons.

First, measurements are intentionally treated with a double standard in QC/QEC theory.  The theory assumes (and needs) measurement at the end of computation but ignores it during the computation.  The theory's noise models literally assume that interactions with the environment that occur during the computation are reversible (i.e., not measurements), while interactions with the environment that occur at the end of the computation are irreversible measurements, with no logical, mathematical, or scientific justification for the distinction.  This is not an oversight: QEC cannot correct irreversible measurements, so proponents of QEC are forced to assume that unintended interactions are reversible but intended interactions are irreversible.  Can Mother Nature really distinguish our intentions?  

Second, and more importantly, the history and theory of DSIEs indicate that noise sometimes measures!  All DSIEs have in fact depended on dispersion of an object’s wave packet both to produce a superposition (e.g., “cat” state) and to demonstrate interference effects.  However, the larger the object, the more time it takes to produce that superposition and the larger the cross section for a decohering interaction with particles and fields permeating the universe.  As a result, the probability of success of a DSIE decays exponentially with the square of the object’s mass (p ~ e^(-m²)), which helps to explain why, despite exponential technological progress, we can't yet do a DSIE on an object having 1,000 atoms, let alone a million or a billion.  What this means is that DSIEs are not scalable, and the fundamental reason for this unscalability – a reason which seems equally applicable to SQC – is that noise at least sometimes causes irreversible projective measurements.
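
To see how quickly that claimed scaling shuts things down, here is a back-of-envelope Python sketch.  It simply takes the p ~ e^(-m²) claim above at face value, with the mass measured in multiples of an arbitrary reference mass m0 (roughly the largest object we can already interfere); the numbers are illustrative only:

import math

# The claimed scaling, taken at face value: DSIE success probability falls
# off as exp(-(m/m0)^2), where m0 is an arbitrary reference mass.
m0 = 1.0
for m in (1, 2, 5, 10, 100, 1000):        # multiples of the reference mass
    p = math.exp(-(m / m0) ** 2)
    print(f"m = {m:>5} x m0: p ~ {p:.3e}")

# Under this (claimed) scaling, pushing from ~10^3 to ~10^6 or 10^9 entangled
# fermions drives the success probability to effectively zero, which is the
# argument above for why DSIEs, and by analogy SQC, do not scale.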

This is fatal to the prospect of scalable quantum computing.  If a single irreversible measurement (even if such an event is rare) irreparably kills a quantum calculation, then the probability of success decays exponentially with T, which by itself implies that quantum computing is not scalable.  But DSIEs demonstrate not only that noise sometimes causes irreversible measurement, but that those irreversible measurements happen frequently enough that, despite the very best technology developed over the past century, it is practically impossible to create controllably highly entangled reversible systems larger than a few thousand particles.

Quantum computing is neither useful nor scalable.  

Monday, May 17, 2021

The (Quantum Computing) House of Cards

In the physics community, there is a house of cards built upon a nearly fanatical belief in the universality of quantum mechanics – i.e., that quantum wave states always evolve in a linear or unitary fashion.  Let’s call this fanaticism the Cult of U.

When I began this process a couple years ago, I didn’t realize that questioning U was such a sin, that I could literally be ostracized from an “intellectual” community by merely doubting U.  Having said that, there are a few doubters, not all of whom have been ostracized.  For instance, Roger Penrose, one of the people I most admire in this world, recently won the Nobel Prize in Physics, despite his blatant rejection of U.  However, he rejected U in the only way deemed acceptable by the physics community: he described in mathematical detail the exact means by which unitarity may be broken, and conditioned the rejection of U on the empirical confirmation of his theory.  As I describe in this post, Penrose proposes gravitational collapse of the wave function, a potentially empirically testable hypothesis that is being explored at the Penrose Institute.  In other words, he implicitly accepts that: a) U should be assumed true; and b) it is his burden to falsify U with a physical experiment.

I disagree.  In the past year, I’ve attempted (and, I believe, succeeded) to logically falsify U – i.e., by showing that it is logically inconsistent and therefore cannot be true – in this paper and this paper.  I also showed in this paper why U is an invalid inference and should never have been assumed true.  Setting aside that they have been ignored or quickly (and condescendingly) dismissed by nearly every physicist who glanced at them, all three were rejected by arXiv.org.  This is both weird and significant.

The arXiv is a preprint server, specializing in physics (although it has expanded to other sciences), supported by Cornell University, that is not peer reviewed.  The idea is simply to allow researchers to quickly and publicly post their work as they begin the process of formal publication, which can often take years.  Although not peer-reviewed, arXiv does have a team of moderators who reject “unrefereeable” work: papers that are so obviously incorrect (or just generally shitty) that no reputable publisher would even consider them or send them to referees for peer review.  Think perpetual motion machines and proofs that we can travel faster than light.

What’s even weirder is that I submitted the above papers under the “history and philosophy of physics” category.  Even if a moderator thought the papers did not contain enough equations for classification in, say, quantum physics, on what basis could anyone say that they weren’t worthy of being refereed by a reputable journal that specializes in the philosophy of physics?  For the record, a minor variation of the second paper was in fact refereed by Foundations of Physics, and the third paper was not only refereed, but was well regarded and nearly offered publication by Philosophy of Science.  Both papers are now under review by other journals.  No, they haven’t been accepted for publication anywhere yet, but arXiv’s standard is supposed to be whether the paper is at least refereeable, not whether a moderator agrees with the paper’s arguments or conclusions! 

It was arXiv’s rejection of my third paper (“The Invalid Inference of Universality in Quantum Mechanics”) that made it obvious to me that the papers were flagged because of their rejection of U.  This paper offers an argument about the nature of logical inferences in science and whether the assumption of U is a valid inference, an argument that was praised by two reviewers at a highly rated journal that specializes in the philosophy of physics.  No reasonable moderator could have concluded that the paper was unrefereeable. As a practical matter, it makes no difference, as there are other preprint servers where I can and do host my papers.  (I also have several papers on the arXiv, such as this – not surprisingly, none of them questions U.)

But the question is: if my papers (and potentially others’ papers) were flagged for their rejection of U… why?!

You might think this is a purely academic question.  Who cares whether or not quantum wave states always evolve linearly?  For example, the possibilities of Schrodinger’s Cat and Wigner’s Friend follow from the assumption of U.  But no one actually thinks that we’ll ever produce a real Schrodinger’s Cat in a superposition state |dead> + |alive>, right?  This is just a thought experiment that college freshmen like to talk about while getting high in their dorms, right? 

Is it possible that there is a vested interest… perhaps a financial interest… in U?

Think about some of the problems and implications that follow from the assumption of U.  Schrodinger’s Cat and Wigner’s Friend, of course, but there’s also the Measurement Problem, the Many Worlds Interpretation of quantum mechanics, the black hole information paradox, physical reversibility, and – oh yeah – scalable quantum computing. 

Since 1994, with the publication of Shor’s famous algorithm, untold billions of dollars have flowed into the field of quantum computing.  Google, Microsoft, IBM, and dozens of other companies, as well as the governments of many countries, have poured ridiculous quantities of money into the promise of quantum computing. 

And what is that promise?  Well, I have an answer, which I’ll detail in a future post.  But here’s the summary: if there is any promise at all, it depends entirely on the truth of U.  If U is in fact false, then a logical or empirical demonstration that convincingly falsifies U (or brings it seriously into question) would almost certainly be catastrophic to the entire QC industry. 

I’m not suggesting a conspiracy theory.  I’m simply pointing out that if there are two sides to a seemingly esoteric academic debate, but one side has thousands of researchers whose salaries and grants and reputations and stock options depend on their being right (or, at least, not being proven wrong), then it wouldn’t be surprising to find their view dominating the literature and the media.  The prophets of scalable quantum computing have a hell of a lot more to lose than the skeptics.

That would help to explain why the very few publications that openly question U usually do so in a non-threatening way: accepting that U is true until empirically falsified.  For example, it will be many, many years before anyone will be able to experimentally test Penrose’s proposal for gravitational collapse.  Thus it would be all the more surprising to find articles in well-ranked, peer-reviewed journals that question U on logical or a priori grounds, as I have attempted to do.

Quoting from this post:

As more evidence that my independent crackpot musings are both correct and at the cutting edge of foundational physics, Foundations of Physics published this article at the end of October that argues that “both unitary and state-purity ontologies are not falsifiable.”  The author correctly concludes then that the so-called “black hole information paradox” and SC disappear as logical paradoxes and that the interpretations of QM that assume U (including MWI) cannot be falsified and “should not be taken too seriously.”  I’ll be blunt: I’m absolutely amazed that this article was published, and I’m also delighted. 

Today, I’m even more amazed and delighted.  In the past couple of posts, I have referenced an article (“Physics and Metaphysics of Wigner’s Friends: Even Performed Premeasurements Have No Results”), which was published in perhaps the most prestigious and widely read physics journal, Physical Review Letters, but only in the past few days have I really understood its significance.  (The authors also give a good explanation in this video.)

What the authors concluded about a WF experiment is that either there is “an absolutely irreversible quantum measurement [caused by an objective decoherence process] or … a reversible premeasurement to which one cannot ascribe any notion of outcome in [a] logically consistent way.”

What this implies is that if WF is indeed reversible, then he does not make a measurement, which is very, very close to the logical contradiction I pointed out here and in Section F of this post.  While the authors don’t explicitly state it, their article implies that U is not scientific because it cannot (as a purely logical matter) be empirically tested at the size/complexity scale of WF.  This is among the first articles published in the last couple decades in prestigious physics journals that make a logical argument against U.

What’s even more amazing about the article is that it explicitly suggests that decoherence might result in objective collapse, which is essentially what I realized in my original explanation of why SC/WF are impossible in principle, even though lots of physicists have told me I’m wrong.  Further, the article openly suggests a relationship between (conscious) awareness, the Heisenberg cut between the microscopic and macroscopic worlds, and the objectivity of wave function collapse below that cut.  All in an article published in Physical Review Letters!

Now, back to QC.  After over two decades of hype that the Threshold Theorem would allow for scalable quantum computing (by providing for fault-tolerant quantum error correction (“FTQEC”)), John Preskill, one of the most vocal proponents of QC and original architects of FTQEC, finally admitted in this 2018 paper that “the era of fault-tolerant quantum computing may still be rather distant.”  As a consolation prize, he offered up NISQ, an acronym for Noisy Intermediate-Scale Quantum, which I would describe as: “We’ll just have to try our best to make something useful out of the 50-100 shitty, noisy, non-error-corrected qubits that we’ve got.”

Despite what should have been perceived as a huge red flag, more and more money keeps flowing into the QC industry, leading Scott Aaronson to openly muse just two months ago about the ethics of unjustified hype: “It’s genuinely gotten harder to draw the line between defensible optimism and exaggerations verging on fraud.”

Fraud??!!

The quantum computing community and the academic members of the Cult of U are joined at the hip, standing at the top of an unstable house of cards.  When one falls, they all do.  Here are some signs that their foundation is eroding:

·       Publication in reputable journals of articles that question or reject U on logical bases (without providing any mathematical description of collapse or means for empirically confirming it).

·       Hints and warnings among leaders in the QC industry that promises of scalable quantum computing (which inherently depends on U) are highly exaggerated.

I am looking forward to the day when the house of cards collapses and the Cult of U is finally called out for what it is.

Wednesday, January 27, 2021

Is Scalable Quantum Computing Possible? And Why Does It Matter?

Tomorrow I begin a class on quantum computing at NYU, taught by Javad Shabani.  In preparation, I am reading Scott Aaronson’s fascinating Quantum Computing Since Democritus.

The notion of quantum computing is simple.  Computers rely on bits – transistors that serve as little on-off switches.  By starting with an initial string of bits and then manipulating them in a particular way according to software (e.g., turning “on” or 1 switches to “off” or 0, etc.), a computer can essentially perform any calculation.  Computers don’t need to be made of transistors, of course, but that tends to be much more efficient than using, say, Tinker Toys.  A quantum computer is simply a computer whose bits are replaced with “qubits” (quantum bits).  Unlike a classical bit that can only take the state 0 or 1, a qubit can be in a superposition of 0 and 1 (or, more precisely, state α|0> + β|1>, where α and β are complex amplitudes and |α|² is the likelihood of finding the qubit in state |0> and |β|² is the likelihood of finding the qubit in state |1> if measured in the {|0>,|1>} basis).

The reason this matters is that there are “infinitely many” (well, not really, but certainly lots of) possible states for a single qubit, because α and β can vary widely, while there are only two states (0 or 1) for a classical bit.  In some sense, then, the “information content” of a qubit (and ultimately a quantum computer) is vastly greater than the information content of a classical bit (and corresponding classical computer).  If you think your iPhone is fast now, imagine one with a quantum computer processor!
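
For concreteness, here is a minimal Python sketch of a single qubit (the amplitudes are arbitrary illustrative values).  It checks normalization, computes the Born-rule probabilities, and simulates repeated measurements, each of which returns a single definite bit:

import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative amplitudes; any alpha, beta with |alpha|^2 + |beta|^2 = 1 work.
alpha, beta = 0.6, 0.8j
qubit = np.array([alpha, beta])                       # state alpha|0> + beta|1>
assert np.isclose(np.vdot(qubit, qubit).real, 1.0)    # normalization check

p0 = abs(alpha) ** 2                                  # probability of reading 0: 0.36
p1 = abs(beta) ** 2                                   # probability of reading 1: 0.64

# Each simulated measurement in the {|0>, |1>} basis yields one definite bit;
# the frequencies approach p0 and p1 over many runs.
samples = rng.choice([0, 1], size=10_000, p=[p0, p1])
print(p0, p1, samples.mean())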

At least... that’s the advertisement for quantum computing.  In reality, there are several problems with actual quantum computing.  I won’t dig too deeply into them, as they’re well described by articles such as this, but here are a few:

·         Nobody knows what to do with them.  There are a couple of particular algorithms, such as Shor’s algorithm for factoring large composite numbers, that would have useful implications for cryptography and information security.  Beyond that, there don't seem to be many real-world applications of quantum computers that would be significantly faster than classical computers.

·         Qubits must remain isolated from the rest of the world (except for their entanglements with other qubits) during the computation, but this is a massively difficult problem because of decoherence.  You can have a microSD card with literally billions of classical bits... you can stick it in your pocket, use it to pick a piece of chicken out of your teeth, drop it in the toilet, probably zap it in the microwave for a few seconds... and it will probably still work fine.  (Full disclosure: I’ve never actually tried.)  But qubits are so ridiculously sensitive to influences from the world that it takes a huge multi-million-dollar system just to adequately cool and isolate even a dozen qubits.

·         Even if there were a way to adequately isolate lots of qubits, as well as entangle and manipulate them in a way necessary to execute a useful algorithm, and even if you could do this for a reasonable price on a reasonably sized device, error correction seems to be a major problem.  Errors are caused (at least in part) by decoherence, and quantum error-correction schemes are supposedly possible in principle, but these schemes (e.g., requiring on the order of 1,000 additional error-correcting qubits for each logical qubit) may prove seriously problematic for the future of quantum computing.  (A toy illustration of the error-correction tradeoff follows this list.)
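
To give a feel for the error-correction tradeoff in the last bullet, here is a toy Python model using a simple repetition code.  (This is only a sketch under strong simplifying assumptions: the repetition code handles only independent bit flips, whereas real schemes such as the surface code must also handle phase errors, which is where per-logical-qubit overheads in the hundreds or thousands come from.)

from math import comb

def logical_error_rate(p, d):
    """Toy model: a distance-d repetition code fails when a majority of its
    d physical qubits suffer independent bit flips, each with probability p."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k) for k in range((d + 1) // 2, d + 1))

p_physical = 1e-3                 # illustrative physical error rate per qubit
for d in (3, 7, 15, 31):
    print(f"d = {d:>2} ({d} physical qubits per logical qubit): "
          f"logical error ~ {logical_error_rate(p_physical, d):.2e}")

# Below a threshold, adding qubits suppresses the logical error rate rapidly;
# above it, the extra qubits only add more opportunities for error.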

At the end of the day, the real question is not whether a “quantum computer” consisting of a handful of entangled qubits is possible – of course it is, and such computers have already been built.  Rather, it is whether the problems of isolation, decoherence, and error-correction will prevent the possibility of "scaling up" a quantum computer to some useful size.  Aaronson famously offered $100,000 for “a demonstration, convincing to me, that scalable quantum computing is impossible in the physical world.”  I want to know the answer to this question not just because it’s such a massively important question pervading modern science and technology, but also because of its relationship to my own work on consciousness, with implications going both ways.  Specifically, what might the physical nature of consciousness tell us about the possibility of scalable quantum computing, and what might the possibility of scalable quantum computing tell us about the physical nature of consciousness?

Here’s an example.  I have been arguing for some time (e.g., in this paper and this post) that macroscopic quantum superpositions, like Schrodinger’s Cat (“SC”) and Wigner’s Friend (“WF”), can never be demonstrated, even in principle, because any “macroscopic” object (e.g., a dust particle, a cat, a planet, etc.) is already so well correlated to other objects through a history of interactions (including “indirect” interactions because of transitivity of correlation) that it can never exist in a superposition of macroscopically distinct position eigenstates relative to those other objects.  Of course, the majority opinion – practically the default position – among physicists and philosophers of physics is that WF is possible.  Nevertheless, even those who claim that WF is possible will admit that it’s really difficult (and perhaps impossible) in practice and will often resort to the plausibility of conscious AI (i.e., “Strong AI”) to save their arguments.  David Deutsch in this article, for example, spends dozens of pages with lots of quantum mechanics equations “proving” that WF is possible, but then spends a half page saying, essentially, that OK, this probably isn’t possible for an actual flesh-and-blood human but we might be able to do it on a computer and since it’s obvious that consciousness can be created on a computer... blah blah...

The problem, of course, is that not only is it not obvious, but I showed in these papers (here and here) that consciousness actually cannot be created on a computer because it is not algorithmic.  So if the possibility of WF depends on AI being conscious, and if computer consciousness is in fact physically impossible, then there must be some explanation for why WF is also physically impossible – and that explanation may equally apply to the impossibility of large quantum computers.  Further, many proponents of the possibility of computer consciousness, such as Aaronson, suspect that we’ll need a quantum computer to do the job, in which case the possibility of WF and conscious AI may hinge on the possibility of scalable quantum computing.   

Anyway, this is all to say that much of what I have discovered, innovated, and now believe about consciousness, quantum mechanics, information, Wigner’s Friend, etc., is closely related to the question of whether scalable quantum computing is possible.  Before actually beginning the class on quantum computing, here is my prediction: I think that scalable quantum computing is, in fact, impossible in the physical world.  Here’s why.

First, the possibility of scalable quantum computing, like the possibility of macroscopic quantum superpositions, follows from the assumption of “U” (i.e., the “universality” or “unitary-only” assumption that a quantum wave state always evolves linearly).  But U is an invalid logical inference as I argue in this paper; I actually think it is irrational to believe U.  In other words, it seems that the primary argument in support of scalable quantum computing is actually a logically invalid inference.  Further, I think that most of those who believe U (which is probably the vast majority of physicists) don’t even know why they believe U.  As a bettor, I would say that the smart money goes on those who actually understand (and, better yet, can justify) the assumptions they make.  The fact that so many of those who believe in scalable quantum computing also assume U leads me to doubt their claims.

Second, the possibility of scalable quantum computing depends on foundational questions about quantum mechanics, and very few scientists (including those who assert that scalable quantum computing is possible) actually understand quantum mechanics.  I know this may sound arrogant... how can I possibly claim to understand QM well enough to conclude that so few people do?  Well, that isn’t what I said – although, incidentally, I do believe I now understand QM far more deeply than most.  You don’t have to understand a topic to be able to identify logical contradictions.  Unlike my brilliant physician wife, I know next to nothing about medicine or the human body, but if I heard a doctor say, “The brain does X” and then later say “The brain does not do X,” then I would know that the doctor does not understand the brain.  So it is with QM.  Here are a couple of papers in which I’ve addressed contradictions by physicists discussing QM (here, here, and here), and the cognitive dissonance required for a physicist to say something like “Schrodinger’s Cat is both dead and alive” drives me absolutely bonkers.

Third, and most importantly, I think that scalable quantum computing will run into the same problem as macroscopic quantum superpositions, which (as discussed above and in the cited papers) I think are impossible to create and empirically demonstrate.  I’m not sure it’s exactly the same problem, but it’s really similar.  For example, I argued here that when a tiny object measures the position of a measuring device, it inherently measures the position of other objects in the rest of the universe, whose positions are already well correlated to that of the measuring device.  Will that argument apply to, say, a million qubits that are entangled with each other but isolated and uncorrelated to other objects in the universe?  I don’t know, but it certainly suggests a similar problem. 

On a related note, I have argued that a superposition state of a single particle can grow via quantum dispersion, but as the object grows in size, a potential superposition suffers two problems: reduction in the rate of dispersion (thanks to the Uncertainty Principle) and increase in the rate of decoherence.  We can do double-slit interference experiments on objects as large as maybe a thousand atoms, although anything much beyond that seems to be impossible for all practical purposes.  I suspect the same problem, or something comparable, will arise with groups of entangled qubits.  In other words, I am reasonably confident that there is a set of quantum superpositions that are physically impossible to empirically demonstrate, even in principle – and I would bet that whatever physical mechanism prevents such superpositions would also prevent scalable quantum computing.   
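
To put a rough number on the dispersion point, here is a back-of-envelope Python sketch using the standard free-particle Gaussian wave-packet formula sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0^2))^2).  The initial packet width, flight time, and per-atom mass are illustrative choices, not measured values:

import math

hbar = 1.054571817e-34                      # J*s

def spread(sigma0, m, t):
    """Width of a free Gaussian wave packet of mass m after time t."""
    return sigma0 * math.sqrt(1.0 + (hbar * t / (2.0 * m * sigma0**2)) ** 2)

sigma0 = 1e-9                               # initial packet width: 1 nm
amu = 1.66053906660e-27                     # kg
t = 1e-3                                    # 1 ms of free evolution
for atoms in (1, 1_000, 1_000_000, 1_000_000_000):
    m = atoms * 20 * amu                    # illustrative ~20 amu per atom
    print(f"{atoms:>13,} atoms: packet width after 1 ms ~ {spread(sigma0, m, t):.3e} m")

# Dispersion slows roughly as 1/m, so building a spatially extended superposition
# of a large object takes far longer, giving decoherence far more time to act --
# the two competing effects described in the paragraph above.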

But I don’t know for certain.  For example, I don’t know how an individual qubit is placed in an initial (superposition) state, nor do I know how groups of qubits are entangled and manipulated in the way necessary to perform the desired algorithm.  It may turn out that the only real limitation is decoherence, and perhaps error correction may indeed be adequate to overcome decoherence limitations.  I sincerely doubt it, but these are the sorts of questions I am looking forward to answering this semester!