Showing posts with label Schrodinger's Cat/Wigner's Friend.

Monday, December 14, 2020

The Measurement Problem and Bad Assumptions

I recently came to the realization that the assumed universal linearity/applicability of quantum mechanics (an assumption called “U”) cannot be empirically verified, as I discuss in this paper, this paper, this post, this post, and this update.

This opinion is so unorthodox and so seemingly absurd (at least within the physics community) that the arXiv preprint server, which is not peer reviewed, literally rejected both of the above papers, presumably (although they won’t tell me) on the basis that they question the “obvious” fact that macroscopic quantum superpositions of any size can be created and measured in principle.

As more evidence that my independent crackpot musings are both correct and at the cutting edge of foundational physics, Foundations of Physics published this article (D’Ariano (2020)) at the end of October, which argues that “both unitary and state-purity ontologies are not falsifiable.”  The author correctly concludes that the so-called “black hole information paradox” and Schrödinger’s Cat (“SC”) disappear as logical paradoxes, and that the interpretations of QM that assume U (including MWI) cannot be falsified and “should not be taken too seriously.”  I’ll be blunt: I’m absolutely amazed that this article was published, and I’m also delighted. 

Finally, the tides may be turning, and whether or not I am ever acknowledged or lent credibility, I can take solace in the knowledge that I am on to something and that I likely understand things about the universe that few other scientists do.  But I think I can take this further.  I think I can show (and indeed already have shown) that there is no experiment that can be performed, even in principle, that can falsify or verify the assumption of U. 

For today’s post, I’m going to just focus on whether U was ever a valid assumption in the first place.  If it’s not – spoiler alert: it’s not – then there was never a Measurement Problem (or black hole information paradox or SC problem, etc.) in the first place. 

By the way, as I have done in this post and this post, I like to search certain phrases just to gauge whether and to what extent certain ideas exist on the internet.  To give you an idea of just how pervasive the assumption of U is, the following search phrases on Google yield, collectively, a total of 16 results:

"quantum mechanics cannot be linear"
"quantum theory cannot be linear"
"quantum mechanics cannot be unitary"
"quantum theory cannot be unitary"
"quantum mechanics cannot be universal"
"quantum theory cannot be universal"
"quantum mechanics is not linear"
"quantum mechanics is not unitary"

That’s right: this blog might be the first place in the history of the Internet to state, in these words, that quantum mechanics cannot be universal.  Incredible.


Just as logicians are bound by the laws of physics, physicists are bound by the rules of logic.  Some of the most persistent problems and paradoxes in the foundations of quantum mechanics, many of which have been unsuccessfully tackled by physicists and mathematicians for nearly a century, persist exactly because they aren’t really physics or mathematics problems at all, but rather problems of logic. 

Consider the problems of Schrödinger’s Cat (“SC”) and its conscious cousin, Wigner’s Friend (“WF”).  If SC is characterized (neglecting normalization constants) as, “A cat in state |dead> + |alive> is simultaneously dead and alive,”[1] then SC is inherently impossible because “dead” and “alive” are mutually exclusive.  Therefore, if SC is physically possible, at least in principle, then it is patently false that a cat in state |dead> + |alive> is simultaneously dead and alive.  Indeed, the characterization of any object in a quantum superposition of eigenstates of some observable as being “simultaneously in those eigenstates” is equally problematic.

While SC is not inherently paradoxical, it is more than a little odd.  The generally accepted conclusion that a SC state is possible in principle follows directly from the assumption that the mathematics of quantum mechanics (i.e., its linear or unitary evolution) applies universally at all scales, from electrons to cats.  This assumption leads to the century-old measurement problem (“MP”) of quantum mechanics (“QM”).  The measurement problem is actually a paradox, meaning that it comprises logically incompatible assumptions.  Contradictions are necessarily false and therefore cannot exist in nature.  If the conjunction of statements A and B, for example, leads to a contradiction, then at least one of statements A and B is false.  MP has often been characterized as the conjunction of three or more assumptions (e.g., Maudlin (1995) and Brukner (2017)); the logical incompatibility of these assumptions has been shown many times (e.g., Frauchiger and Renner (2018) and Brukner (2018)).  A simpler characterization of MP, reducing it to two assumptions, has been provided by Gao (2019):

P1) The mental state of an observer supervenes on her wave function;

P2) The wave function always evolves in accord with a linear dynamical equation (such as the Schrödinger equation).

Assumption P2 is often characterized as the “universality” of QM, in that the rules of QM are assumed to apply universally at all scales.  Kastner (2020) correctly notes that “unitary-only evolution” is a better description than “universality” because, she argues, a complete theory of QM (if there is one) will necessarily apply universally but its wave equations may not evolve in a purely unitary or linear fashion.  Whether meaning “universal” or “unitary-only,” in this paper I’ll generally refer to assumption P2 as “U.”

Contradictions do not exist in nature.  Because at least one of P1 and P2 is indeed false, a simultaneous belief in both is due to faulty reasoning.  The measurement problem is not a problem with nature; it is a problem with us.  It is either the case that P1 is the result of improper assumptions and/or reasoning, that P2 is the result of improper assumptions and/or reasoning, or both.  My goal in this paper is to attack P2 on a variety of grounds. 


A.        What’s the Problem?

The measurement problem is inextricably related to SC, WF, etc., and might be colloquially phrased as, “If a cat can exist as a superposition over macroscopically distinct eigenstates |dead> and |alive>, then why do we always see either a dead or a live cat?”  Or: “If quantum mechanics applies universally at all scales (or wave functions always evolve linearly), then why do we never observe quantum superpositions?”  Or even better: “Why don’t we see quantum superpositions?” 

As Penrose (1999) points out, MP is actually a paradox and any solution to it requires showing that at least one of its assumptions is incorrect.  All formulations of MP depend on the assumption of U.  Of course, MP would be solved if it were shown that QM wave states did not always evolve linearly, such as above certain scales.  After 100 years, why has this not yet been shown?  Perhaps this is the wrong question.  The first questions we should ask are: were we justified in assuming U in the first place?  Has it been empirically demonstrated?  Can it be empirically demonstrated?

U is itself an inference.  The question is not whether logical inferences can be made in science – they can and must.  One can never prove that a scientific hypothesis or law always holds; rather, it can only be adequately verified to permit such an inference, subject always to the possibility of experimental falsification.  However, U is a very special kind of inference, one that I will argue in the following sections is invalid.  First, U has been verified only in microscopic regimes.  No experiment has shown it to apply to macroscopic regimes; there is no direct experimental evidence for the applicability of the linear dynamics of QM to macroscopic systems.  Second, the lack of such evidence is not for lack of trying.  Rather, there seems to be a kind of asymptotic limit to the size of a system for which we are able to gather such evidence.  Third, U gives rise to the measurement problem – that is, it conflicts with what seems to be good empirical evidence that linear QM dynamics do not apply in most[2] cases.  These together, as I will argue, render U an invalid inference.  Further, even if one disagrees with this argument, the burden of proof rests not with the skeptics but with those who endorse an inference of U.

Regarding the first point – that U has been verified only in the microscopic realm – there are certainly those who choose to tamper with the colloquial meanings of the words “microscopic” and “macroscopic,” or attempt to redefine “mesoscopic” as “nearly macroscopic,” to bolster their case.  Some may include as “macroscopic quantum superpositions” atoms in superposition over position eigenstates separated by a meter, or “macroscopic” (though barely visible) objects in superposition over microscopically-separated position eigenstates (e.g., O’Connell et al. (2010)).  I regard this as sophistry, particularly when such examples are used as empirical evidence that the creation and measurement of truly macroscopic superpositions, such as SC, are mere technological hurdles.  Nevertheless, I will assume that any physicist acting in good faith will readily admit that there is currently no direct empirical evidence that linear QM evolution applies to a cat, a human (e.g., WF), or even a virus.  That lack of experimental evidence renders an inference that QM applies universally especially bold.  And bold inferences are not necessarily problematic – until they come into conflict with other observations.

Consider these statements:

A1) Newton’s law of gravity applies universally.

A2) The observed perihelion precession of Mercury is in conflict with Newton’s law of gravity.

Statement A1 was a valid inference for a very long time.  The conjunction of these statements, however, is a contradiction, implying that at least one of them is false.  A contradiction sheds new doubt on each statement and increases the evidence necessary to verify each unless and until one of the assumptions is shown false.  Despite enormous quantities of data supporting an inference of A1, conflicting evidence ultimately led Einstein to reject A1 and formulate general relativity.

The measurement problem is such a paradox; it is the conjunction of two or more statements that lead to a contradiction.  If there were no paradox, we might reasonably have inferred U based only on the limited experimental data showing interference effects from electrons, molecules, etc.  However, U is in direct logical conflict with other statement(s) for whose veracity we seem to have a great deal of evidence.  Therefore, to justify the inference of U, we need more than a reason: we need a good reason.  Moreover, given that the paradox arose essentially simultaneously with quantum theory, leading Erwin Schrödinger to propose his hypothetical cat as an intuitive argument against U, the burden of proof has always lain with those who assert U.  Have they met their burden?  Do we have good evidence to support the inference of U?

B.        Fighting Fire with Fire

Many (perhaps most) physicists have never questioned the assumption of U and, once asked whether we have good evidence to support it, may regard the question itself as nonsense.  “Of course the wave function always evolves linearly – just look at the equations!”  Indeed, standard QM provides no mathematical formalism to explain, predict, or account for breaks in linearity.  Some collapse theories (such as the “GRW” spontaneous collapse of Ghirardi, Rimini, and Weber (1986) and the gravitational collapse of Penrose (1996)) do posit such breaches, but no experiment has yet confirmed any of them or distinguished them from other interpretations of QM.  It is thus tempting, when evaluating U, to glance at the equations of QM and note that they do, indeed, evolve linearly.  But the question isn’t whether the equations evolve linearly, but whether the physical world always obeys those equations, and the answer to that question does not appear within the QM formalism itself.[3]

Modern physicists, who rely heavily on mathematics to proceed, typically demand rigorous mathematical treatment in addressing and solving physics problems.  Ordinarily, such demands are appropriate.  However, MP arises directly as a result of the mathematics of QM, in which the Schrödinger equation evolves linearly and universally.  Because MP is itself a product of the mathematics of QM, its solution is inherently inaccessible via the symbolic representations and manipulations that produced it.  If the math itself is internally consistent – and I have no reason to believe otherwise – then you cannot use the math of QM as evidence that the math of QM is always correct.[4]  You can, however, use empirical evidence to support such an inference.  Do we have such evidence?

C.        Empirical Data

The rules of QM allow us to make probabilistic predictions about the outcomes of measurements that differ from the expectations of classical probability.  In a very real sense, this is both how QM was discovered[5] and what makes an event quantum mechanical.  Wave functions of objects contain a superposition of terms having complex probability amplitudes, and interference between those terms in an experiment can alter outcome probabilities.  A demonstration of quantum effects, then, depends on an interference experiment – i.e., an experiment that demonstrates interference effects in the form of altered probability distributions.  Quantum mechanics is fundamentally about making probabilistic predictions that depend on whether interference effects from terms in a coherent superposition are relevant.[6]
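To make the distinction concrete, here is a minimal Python sketch of how adding amplitudes differs from adding probabilities.  The amplitudes below are illustrative values chosen for demonstration, not data from any experiment:

```python
import cmath

# Illustrative complex amplitudes for the two terms of a (non-normalized)
# superposition; the magnitudes and relative phase are arbitrary.
a = cmath.exp(1j * 0.0) / 2           # amplitude of term 1
b = cmath.exp(1j * cmath.pi / 3) / 2  # amplitude of term 2, with a relative phase

# Classical probability: add the probabilities of the two alternatives.
p_classical = abs(a) ** 2 + abs(b) ** 2

# Quantum probability: add the amplitudes first, then square.
p_quantum = abs(a + b) ** 2

# The difference is the interference term, 2*Re(a * conj(b)).
interference = 2 * (a * b.conjugate()).real
```

When decoherence scrambles the relative phase, the interference term averages to zero and the classical sum of probabilities is recovered – which is why maintaining coherence is the whole game.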

It is often claimed that no violation of the linearity of QM has ever been observed, or that no experiment has ever shown the non-universality of QM.  In fact, this is the only empirical scientific evidence that physicists can cite to support the inference of U.  How good is this evidence?  Consider the following claim, perhaps endorsed by the vast majority of physicists:

“The mathematics of QM applies to every object subjected to a double-slit interference experiment, no matter how massive, because no experiment has ever demonstrated a violation.” 

Indeed, double-slit interference experiments (“DSIE”) have been successfully performed on larger and larger (though still microscopic) objects, such as a C60 molecule.  (Arndt et al. (1999).)  However, to evaluate the extent to which this evidence supports an inference of U, it is necessary to consider how DSIEs are set up and performed. 

Nature – thanks to the Heisenberg Uncertainty Principle – creates superpositions ubiquitously. Quantum uncertainty, loosely defined for a massive object as Δx·Δv ≥ ℏ/2m, guarantees dispersion of quantum wave packets, thus increasing the size of location superpositions over time. However, interactions with fields, photons, and other particles ever-present in the universe constantly “measure” the locations of objects and thus decohere[7] these superpositions.  (See, e.g., Tegmark (1993) and Joos et al. (2013).) This decoherence, which I’ll discuss in greater detail in the next section, explains both why we don't observe superpositions in our normal macroscopic world and why visible interference patterns from quantum superpositions of non-photon objects[8] are so difficult to create.

For instance, let's consider the non-trivial process, first performed by Davisson and Germer in 1927, of producing an electron in (non-normalized) superposition state |A> + |B>, where |A> is the wave state corresponding to the electron traversing slit A while |B> is the wave state corresponding to the electron traversing adjacent slit B in a double-slit plate. Electrons, one at a time, are passed through (and localized by) an initial collimating slit; quantum uncertainty results in dispersion of each electron’s wave state at a rate inversely proportional to the width of the collimating slit. If the process is designed so that adequate time elapses before the electron’s wave state reaches the double-slit plate, and without an intervening decoherence event with another object, the electron’s wave will be approximately spatially coherent over a width wider than that spanned by both slits. If the electron then traverses the double-slit plate, its wave function becomes the superposition |A> + |B>.  Because such a superposition does not correspond to its traversing slit A or traversing slit B, it carries no “which-path” information about which slit the electron traversed.  If each electron is then detected at a sensor located sufficiently downstream from the double-slit plate, again without an intervening decoherence event with another object, the spatial probability distribution of that electron’s detection will be calculable consistent with quantum mechanical interference effects. This lack of which-path information (thanks to successfully preventing any decohering correlations with other objects in the universe) implies that the electron’s superposition coherence was maintained, and thus the rules of quantum mechanics (and not classical probability) would apply to probability distribution calculations.[9]
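As a rough numerical illustration of why the downstream interference pattern is macroscopically resolvable for an electron, consider its de Broglie wavelength and the resulting fringe spacing.  The electron speed, slit separation, and screen distance below are assumed for illustration, not Davisson and Germer’s actual parameters:

```python
H = 6.62607015e-34       # Planck's constant (J*s)
M_E = 9.1093837015e-31   # electron mass (kg)

def de_broglie_wavelength(mass, speed):
    """lambda = h / (m * v) for a non-relativistic particle."""
    return H / (mass * speed)

def fringe_spacing(wavelength, slit_separation, screen_distance):
    """Small-angle spacing between adjacent interference maxima:
    delta_x = lambda * L / d."""
    return wavelength * screen_distance / slit_separation

# Assumed, illustrative numbers: an electron at 1e6 m/s, slits 1 micron
# apart, and a sensor 1 m downstream of the double-slit plate.
lam = de_broglie_wavelength(M_E, 1e6)   # ~0.7 nm
dx = fringe_spacing(lam, 1e-6, 1.0)     # ~0.7 mm between fringes
```

A sub-nanometer wavelength nevertheless yields fringes separated by the better part of a millimeter – easily resolvable, provided the coherence described above is maintained all the way to the sensor.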

Because the dispersion of an object’s wave function is directly proportional to Planck’s constant and inversely proportional to its mass, the ability to demonstrate the wave-like behavior of electrons is in large part thanks to the electron’s extremely small mass.[10] The same method of producing superpositions – waiting for quantum dispersion to work its magic – has been used to produce double-slit interference effects of objects as large as a few hundred and perhaps a couple thousand atoms.  (See, e.g., Eibenberger et al. (2013) and Fein et al. (2019).)  However, the more massive the object, the slower the spread of its wave state and the more time is available for an event to decohere any possible superposition.  Are there other methods, besides quantum dispersion, to prepare an object for a DSIE?  I don’t know.  However, every successful DSIE to date has indeed depended on quantum dispersion of the object’s wave packet, and it is this evidence, not the hypothetical possibility of other experiments, that is available to support (or not) an inference of U.
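The mass dependence can be made concrete with a back-of-envelope Python sketch.  The initial localization distance and the target spread are assumptions chosen only for illustration; the point is the scaling, not the particular numbers:

```python
HBAR = 1.054571817e-34  # reduced Planck constant (J*s)
AMU = 1.66053907e-27    # atomic mass unit (kg)

def dispersion_speed(mass, delta_x):
    """Minimum-uncertainty spreading speed: delta_v ~ hbar / (2 * m * delta_x)."""
    return HBAR / (2 * mass * delta_x)

def time_to_spread(mass, delta_x, target):
    """Rough time for the wave packet's fuzziness to grow by `target`."""
    return target / dispersion_speed(mass, delta_x)

# Assumed for illustration: each object starts localized to 1 micron and
# must spread by 1 micron to span a micron-scale double slit.
delta_x = 1e-6
target = 1e-6

times = {
    "electron": time_to_spread(9.109e-31, delta_x, target),
    "C60": time_to_spread(720 * AMU, delta_x, target),
    "25 kDa molecule": time_to_spread(25000 * AMU, delta_x, target),
}
# electron: tens of nanoseconds; C60: tens of milliseconds;
# 25 kDa molecule: nearly a second of decoherence-free flight required.
```

Every factor of ten in mass buys a factor of ten in the decoherence-free flight time the experiment must sustain, which is why the record-setting experiments top out around a few thousand atoms.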

So, within the data available to support the inference of U, performing a DSIE by passing an object through slits A and B separated by some distance d first requires making the object spatially coherent over a distance exceeding d.  To get the object in the superposition |A> + |B> to subsequently show interference effects, you have to provide the object in a state that is adequately quantum mechanically “fuzzy” over a distance exceeding d – that is, a state that would already demonstrate interference effects.  In other words, to do a DSIE on an object to show that it does not violate the linearity of QM, you have to first provide an object prepared so that an interference experiment would not violate the linearity of QM.

Said another way, the observation of an interference effect in a double-slit interference experiment presupposes the spatial coherence of an object over some macroscopically resolvable distance.  But the ability to produce that object (or an object of any size, at least in principle) in spatial coherence is the very assumption of U.  Measuring interference effects by an object that has already been prepared to show interference effects is not a confirmation of, or evidence for, the universal linearity of QM.  What it does show is that QM is linear at least to that level.  For example, if QM is indeed nonlinear through a physical collapse mechanism like that proposed by GRW, then such a collapse might be confirmed by first preparing a system in an appropriate superposition (which should, if properly measured, demonstrate interference effects), and then failing to observe interference effects.  The ability to demonstrate, for example, a C60 molecule exhibiting interference effects puts a lower limit on the scale (mass, time period, etc.) to which a physical collapse mechanism would act.
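If QM is nonlinear in roughly the way GRW proposes, the scale dependence is dramatic.  Here is a hedged sketch using the commonly quoted GRW collapse rate; the per-nucleon rate is a model assumption, not an established value, and the dust-particle mass is the one estimated later in this post:

```python
# Commonly quoted GRW parameter (a model assumption, not established fact):
# each nucleon suffers a spontaneous localization at a rate of roughly
# 1e-16 per second, so a composite of N nucleons collapses at ~N times that.
GRW_RATE_PER_NUCLEON = 1e-16   # s^-1
AMU = 1.66053907e-27           # atomic mass unit (kg)

def expected_collapse_time(mass_kg):
    """Rough expected time before a GRW hit localizes the whole system."""
    n_nucleons = mass_kg / AMU
    return 1.0 / (n_nucleons * GRW_RATE_PER_NUCLEON)

t_c60 = expected_collapse_time(720 * AMU)   # ~1e13 s: hundreds of millennia
t_dust = expected_collapse_time(6.5e-11)    # well under a second for a dust grain
```

On these numbers, a C60 interferometer could never catch a GRW collapse in the act, while a dust-particle superposition would collapse almost immediately – illustrating why microscopic interference experiments cannot, by themselves, decide between linear QM and a physical collapse mechanism.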

My goal in this section is not to call into question the usefulness of interference experiments in demonstrating the applicability of linear QM to objects in those experiments.  My goal is to point out the logical circularity of asserting that “QM applies universally because no interference experiment has shown a violation in linearity,” given that interference experiments are only performed on objects that have already been successfully prepared in a state that can demonstrate interference.  The experimental difficulty is not in showing interference effects from an object prepared to show interference effects; the difficulty is in preparing the object to show interference effects.  So what do the empirical data tell us about the difficulty in preparing objects to show interference effects?  And do those data support an inference of U?

D.        Empirical Data Revisited

It may well be true that 100% of interference experiments have failed to show nonlinearity, but if all experiments performed so far only probe the microscopic realm and – more importantly – if these experiments are quite literally chosen because they only probe the microscopic realm, then the fact that no interference experiment has ever shown a violation of linearity is simply not evidence to support the inference that QM is universally linear.

If the reason that interference experiments are chosen to probe only the microscopic realm is merely that of convenience, or insufficient grant funding, or technological limitation, then my argument would be limited only to the conclusion that current empirical data do not support an inference to U, in which case the proponent of U has failed to meet any reasonable burden of proof.  However, if it turns out that interference experiments are chosen to probe only the microscopic realm because there is something about the physical world, directly related to the size of systems, that makes it impossible at least for all practical purposes (“FAPP”) to probe larger systems, then this would serve as empirical evidence against U, in which case the proponent’s unmet burden of proving the inference of U is far, far greater.

Here are a few empirical facts: a) so far, the largest object to show interference effects in a DSIE is a molecule consisting of around two thousand atoms (Fein et al. (2019)); b) these experiments have depended on quantum dispersion of an object’s wave packet to produce adequate spatial coherence; and c) the rate of quantum dispersion quickly approaches zero as the object increases in size.  I would argue that these facts, particularly our inability to prepare macroscopic objects to show interference effects, constitute very good evidence against U.[11]

Let me elaborate.  If an experimenter can rely on quantum dispersion to put a molecule in adequate spatial coherence to measure interference effects, why can’t he do that for a dust particle or a cat?  Consider the difficulty in performing a DSIE on a dust particle. Let's assume it is a 50μm-diameter sphere with a density of 1000 kg/m³ and that it has just been localized by an impact with a green photon (λ ≈ 500nm). How long will it take for its location “fuzziness” to exceed its own diameter (which would be the absolute minimum spatial coherence allowing for passage through a double-slit plate)?  Letting Δv ≈ ℏ/2mΔx ≈ 10^-18 m/s, it would take 5×10^13 seconds (about 1.5 million years) for the location uncertainty to reach a spread of 50μm.[12]  In other words, if we sent a dust particle into deep space, its location relative to other objects in the universe is so well defined due to its correlations to those objects that it would take over a million years for the universe to “forget” where the dust particle is to a resolution allowing for the execution of a DSIE.[13]  In this case, information in the universe would still exist to localize the dust particle to a resolution of around 50μm, but not less. Unfortunately, this rough calculation depends on a huge assumption: that new correlation information isn’t created in that very long window of time. In reality, the universe is full of particles and photons that constantly bathe (and thus localize) objects.
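This arithmetic can be checked with a short Python sketch using the stated diameter, density, and photon wavelength; it lands at the same order of magnitude:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant (J*s)

# The dust particle described above: a 50-micron-diameter sphere with
# density 1000 kg/m^3, localized to roughly the wavelength of a green
# photon (~500 nm assumed as the localization scale).
radius = 25e-6
density = 1000.0
mass = density * (4.0 / 3.0) * math.pi * radius ** 3  # ~6.5e-11 kg

delta_x = 500e-9
delta_v = HBAR / (2 * mass * delta_x)                 # ~1.6e-18 m/s

# Time for the location fuzziness to grow to the particle's diameter:
t = (2 * radius) / delta_v                            # a few 1e13 s
years = t / 3.156e7                                   # on the order of a million years
```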

Thus there is a trade-off in the delocalization caused by natural quantum dispersion and localizing “measurements” caused by interactions with the plethora of stuff whizzing through space. This trade-off is heavily dependent on the size of the object; a tiny object (like an electron) disperses quickly due to its low mass and experiences a low interaction rate with other objects, allowing an electron to more easily demonstrate interference effects. On the other hand, a larger object disperses more slowly while suffering a much higher interaction rate with other objects. These observations can be quantified in terms of coherence lengths: for a particular decoherence source acting on a particular object, what is the largest fuzziness we might expect in the object's center of mass? And, if we're hoping to do a DSIE, does this fuzziness exceed the object's diameter?
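The trade-off can be caricatured with a toy scaling model (an assumption for illustration only, not a calculation from the decoherence literature): treating the object as a sphere of radius R, dispersion speed scales with 1/mass, i.e. as R^-3, while the rate of localizing collisions scales with geometric cross-section, i.e. as R^2:

```python
# Toy scaling model (illustrative assumption): for a sphere of radius R
# in arbitrary units, the delocalizing tendency (dispersion speed) falls
# as R^-3 while the localizing tendency (collision rate) grows as R^2,
# so their ratio falls off as R^-5.
def delocalizing_over_localizing(R):
    dispersion_scale = R ** -3   # delocalization by quantum dispersion
    scattering_scale = R ** 2    # localization by environmental scattering
    return dispersion_scale / scattering_scale

ratios = [delocalizing_over_localizing(R) for R in (1.0, 10.0, 100.0)]
# Growing the object tenfold suppresses the ratio by a factor of 100,000.
```

However crude, the R^-5 falloff captures why the microscopic and macroscopic regimes behave so differently: the two tendencies pull apart extremely fast as objects grow.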

Tegmark (1993) calculates coherence lengths (roughly “the largest distance from the diagonal where the spatial density matrix has non-negligible components”) for a 10μm dust particle and a bowling ball caused by various decoherence sources, as shown in Table I.  Even in deep space, cosmic microwave background (“CMB”) radiation alone will localize the dust particle to a dimension many orders of magnitude smaller than its diameter, thus ruling out any possibility for that object to become adequately delocalized (and thus adequately spatially coherent) relative to the universe to perform an interference experiment.  The prospects are far worse for a bowling ball-sized cat.

Table I.  Some values of coherence lengths for a 10μm dust particle and a bowling ball caused by various decoherence sources, given by Tegmark (1993). 

In other words, at least as a practical matter, the physical world is such that there is a size limit to the extent that quantum dispersion can be relied upon to perform a DSIE.  Having said that, no one seriously argues (as far as I know) that SC or WF could be produced, even in principle, through natural quantum dispersion.  Rather, the typical argument is that SC/WF could be produced through amplification of a quantum event via a von Neumann measurement chain.  Crucially, however, the purported ability to amplify a quantum superposition assumes universal linearity of QM, which means that it cannot be logically relied upon to contradict the argument that QM is not universally linear.  Further, there is no empirical evidence that quantum amplification ever has produced a measurable macroscopic quantum superposition.[14]  In other words, without assuming that quantum amplification can accomplish what quantum dispersion cannot – i.e., ignoring the logical circularity of assuming the very conclusion that I am arguing against – one must conclude that existing empirical evidence does not support the inference that a DSIE can in principle be performed on macroscopic objects like a cat.

As a purely empirical matter, all DSIEs that have been performed depend on quantum dispersion, which depends inversely on the size of an object, to produce the object in spatial coherence that exceeds some macroscopically resolvable distance.  Consequently, all such experiments have been chosen specifically to probe the microscopic realm, where quantum dispersion “works.”  That selection effect alone is sufficient to invalidate the inference that QM wave states evolve linearly beyond the microscopic realm.

Having said that, this section shows that such choices for experimentation are not merely on the basis of convenience – rather, the physical world is such that interference experiments inherently become increasingly difficult at an increasing rate as the size of an object increases.  There seems to be an asymptotic limit[15] on the size of an object on which we can perform a DSIE.  Our difficulty in preparing larger objects to show interference effects is (or should be) telling us something fundamental about whether QM wave states always evolve linearly. 

Importantly, I am not asserting that the above analysis shows that performing a DSIE on a cat is impossible in principle.  Rather, it shows that a fundamental feature of our physical world is that our efforts to demonstrate interference effects for larger systems have quickly diminishing returns; the harder we try to increase the size of an object to which QM is verifiably linear, the more slowly that size increases.  There is at least some physical size (perhaps within an order of magnitude of the dust particle in Table I) above which no conceivable experiment, no matter how technologically advanced, could demonstrate interference effects.  The fact that such a size exists to physically distinguish the “macroscopic” from the “microscopic,” which, as a practical matter, forces us to choose interference experiments that probe only the microscopic regime, is strong empirical evidence against an inference of U.  In other words, the existence of a FAPP limitation, even if there is no in-principle limitation, is itself evidence against an inference of U.

E.        Burden of Proof

It is worth emphasizing that the measurement problem does not arise from the evidence that QM is linear in microscopic systems; it arises only from an inference that QM remains linear in macroscopic systems.  I have shown in the above sections that the inference of U:

·         Is not supported by any direct empirical evidence;

·         Is such that no practical or currently technologically conceivable experiment can provide direct empirical evidence; and

·         Is logically incompatible with one or more assertions (such as statement P1) for which we ostensibly have a great deal of evidence, thus giving rise to the measurement problem.

I cannot offer or conceive of any rational scientific basis on which to accept such an inference. For these reasons alone, from a scientific standpoint, MP should be dismissed – not because it has been solved, but because it should never have arisen in the first place.  MP depends on the truth of at least two statements, one of which is U.  It should be enough to show that the best empirical evidence regarding that statement is inadequate to support, and in fact opposes, the inference of U.  Not only have proponents of U failed to meet their burden of proving that an inference of U is valid, that burden, in light of the arguments in this section, is exceptional.


Arndt, M., Nairz, O., Vos-Andreae, J., Keller, C., Van der Zouw, G. and Zeilinger, A., 1999. Wave–particle duality of C60 molecules. Nature, 401(6754), pp.680-682.

Brukner, Č., 2017. On the quantum measurement problem. In Quantum [Un] Speakables II (pp. 95-117). Springer, Cham.

Brukner, Č., 2018. A no-go theorem for observer-independent facts. Entropy, 20(5), p.350.

D’Ariano, G.M., 2020. No purification ontology, no quantum paradoxes. Foundations of Physics, pp.1-13.

Davisson, C. and Germer, L.H., 1927. The scattering of electrons by a single crystal of nickel. Nature, 119(2998), pp.558-560.

Eibenberger, S., Gerlich, S., Arndt, M., Mayor, M. and Tüxen, J., 2013. Matter–wave interference of particles selected from a molecular library with masses exceeding 10000 amu. Physical Chemistry Chemical Physics, 15(35), pp.14696-14700.

Fein, Y.Y., Geyer, P., Zwick, P., Kiałka, F., Pedalino, S., Mayor, M., Gerlich, S. and Arndt, M., 2019. Quantum superposition of molecules beyond 25 kDa. Nature Physics, 15(12), pp.1242-1245.

Frauchiger, D. and Renner, R., 2018. Quantum theory cannot consistently describe the use of itself. Nature communications9(1), pp.1-10.

Gao, S., 2019. The measurement problem revisited. Synthese196(1), pp.299-311.

Ghirardi, G.C., Rimini, A. and Weber, T., 1986. Unified dynamics for microscopic and macroscopic systems. Physical review D34(2), p.470.

Hossenfelder, S., 2018. Lost in math: How beauty leads physics astray. Basic Books.

Joos, E., Zeh, H.D., Kiefer, C., Giulini, D.J., Kupsch, J. and Stamatescu, I.O., 2013. Decoherence and the appearance of a classical world in quantum theory. Springer Science & Business Media.

Kastner, R.E., 2020. Unitary-Only Quantum Theory Cannot Consistently Describe the Use of Itself: On the Frauchiger–Renner Paradox. Foundations of Physics, pp.1-16.

Knight, A., 2020.  No paradox in wave-particle duality.  Foundations of Physics, 50(11), pp. 1723-27.

Maudlin, T., 1995. Three measurement problems. topoi14(1), pp.7-15.

O’Connell, A.D., Hofheinz, M., Ansmann, M., Bialczak, R.C., Lenander, M., Lucero, E., Neeley, M., Sank, D., Wang, H., Weides, M. and Wenner, J., 2010. Quantum ground state and single-phonon control of a mechanical resonator. Nature464(7289), pp.697-703.

Penrose, R., 1996. On gravity's role in quantum state reduction. General relativity and gravitation28(5), pp.581-600.

Penrose, R., 1999. The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford University Press.

Tegmark, M., 1993. Apparent wave function collapse caused by scattering. Foundations of Physics Letters6(6), pp.571-590.

[1] Such as in this article.

[2] Although, to show U false, just one counterexample is necessary.

[3] Unless you count the “projection postulate,” which some would argue is prima facie evidence that the QM equations do not always evolve linearly.

[4] Some go further and assert that the beauty and/or simplicity of linear dynamical equations are evidence for their universal applicability.  I, like Hossenfelder (2018), disagree.  Aesthetic arguments are not empirical evidence for a scientific hypothesis, despite assertions by some string theorists to the contrary.

[5] The characterization of light as discrete particle-like objects, thanks to Planck’s use of energy quantization (E = hν) to avoid the Ultraviolet Catastrophe and Einstein’s explanation of the photoelectric effect, showed that classical probability is inapplicable to predicting the detection outcome of individual particles in a double-slit interference experiment.

[6] Like all probability rules, a statistically significant ensemble is necessary to obtain useful information. A measurement on any object will always yield a result that is consistent with that object's not having been in a superposition; only by measuring many identically prepared objects may the presence of a superposition appear in the form of an interference pattern.

[7] The theory underlying decoherence is not incompatible with the assumption of U; in fact, many (if not most) of the proponents of decoherence specifically endorse U.  Rather, decoherence is often used to explain why it is so difficult to place macroscopic objects in (coherent) superpositions.

[8] Interference effects of photons are actually quite easy to observe, in part because photons do not self-interact and thus are not decohered by other radiation. Prior to the invention of lasers, a dense source of coherent photons – the source that first confirmed light's wave-like behavior – came directly from the sun.

[9] Indeed, the existence of which-path information – that is, the existence of a correlating fact about the passage of the electron through one slit or the other – is incompatible with existence of a superposition at the double-slit plane.  (See, e.g., Knight (2020).)

[10] We might alternatively say that the de Broglie wavelength of an electron can be made sufficiently large in a laboratory so as to reveal its wave nature.
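To put rough numbers on footnote [10], here is a quick back-of-the-envelope sketch (my own illustration, not drawn from any cited source) of the de Broglie wavelength λ = h/mv for an electron versus a macroscopic object:

```python
h = 6.626e-34  # Planck's constant (J*s)

def de_broglie_wavelength(mass_kg, speed_m_s):
    """de Broglie wavelength lambda = h / (m * v)."""
    return h / (mass_kg * speed_m_s)

# A slow electron: wavelength on the order of a nanometer -- large enough
# to reveal wave behavior in laboratory interference experiments.
electron = de_broglie_wavelength(9.109e-31, 1.0e6)   # ~7.3e-10 m

# A 1 kg rock at 1 m/s: wavelength ~6.6e-34 m, hopelessly unobservable.
rock = de_broglie_wavelength(1.0, 1.0)

print(electron, rock)
```

The twenty-plus orders of magnitude between the two wavelengths is one way to see why "sufficiently large" is achievable for electrons but not for everyday objects.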

[11] Penrose (1999) suggests that the fact that we always observe measurement results is excellent empirical evidence that the QM wave function cannot always evolve linearly.

[12] Tegmark (1993) notes that macroscopic systems tend to be in “nearly minimum uncertainty states.”

[13] This estimate completely neglects the additional time necessary to subsequently measure an interference pattern.

[14] A related objection is whether it is possible to adequately isolate or shield a macroscopic object from decoherence sources long enough for dispersion to work its magic.  The answer is no, for reasons, including logical circularity, that exceed the scope of this paper.  But, like the hypothetical proposed “fix” of amplification, there is no actual evidence that shielding or isolation ever has produced a measurable macroscopic superposition.

[15] I don’t mean this in a rigorous mathematical sense.  Rather there is some rough object size below which we are able, as a practical matter, to show interference effects of the object and above which we simply cannot.  We might loosely call this size the “Heisenberg cut.”

Tuesday, September 1, 2020

Macroscopic Quantum Superpositions Cannot Be Measured, Even in Principle

In this post, I pointed out that even though the phrase "copy the brain" occurs all over the Internet, my post might be the first in history to state that it is "impossible to copy the brain" – an illuminating observation about just how pervasive the assumption is that brains can be copied.

The same is true of these phrases, of which a Google search yields only my own works:
"Schrodinger's Cat is impossible"
"Schrodinger's Cat is not possible"
"Wigner's Friend is impossible"
"Wigner's Friend is not possible"
"Macroscopic quantum superpositions are impossible"
"Macroscopic quantum superpositions are not possible"
"Macroscopic superpositions are impossible"
"Macroscopic superpositions are not possible"

Obviously, I’m not the first person to doubt that they are possible, but the fact that the above phrases yield nothing (until you remove "not possible" or "impossible," yielding thousands of results) should tell us something.  It is practically established doctrine in the philosophy and foundations of physics that the Schrodinger’s Cat and Wigner’s Friend thought experiments, along with the ability to measure a macroscopic system in quantum superposition, are possible in principle.

The thing is – they’re not.

Here is my newest YouTube video, entitled “Why Macroscopic Quantum Superpositions are Impossible in Principle.”  The concepts are elaborated in preprints here and here, with a more comprehensive explanation of (my current understanding of) quantum mechanics in this blog post.

A brief (13-minute) version of the above video can be found here:

Wednesday, August 19, 2020

Explaining Quantum Mechanics

I’ve been on a quest to understand quantum mechanics (“QM”) and think I’ve actually made significant progress, even though Feynman claimed that no one understands it.  What I’m going to write in this post is not ready for an academic paper, so I’m not going to try to prove anything.  Instead, I’m just going to explain my understanding of QM and how it is consistent with what I’ve learned and the results of QM experiments. 

I’m not sure yet whether this explanation is: a) correct; b) original; or c) testable.  Of course, I think it is correct, as so far it’s the only explanation that seems consistent with everything I’ve learned about physics and QM.  Not only do I believe it’s correct, but I also hope it’s correct, since it has helped me to understand a lot more about the physical world; to discover that it is false would mean that I am fundamentally mistaken in a way that would take me back to the drawing board.

Whether or not it is original is less important to me; as I noted before in this post, even if I had naturally stumbled upon an explanation of QM that had already been discovered and written about by someone else, such as Rovelli’s Relational Interpretation, I’d be fine with that, because my primary quest is to understand the physical world.  As it turns out, the following explanation is not equivalent to the Relational Interpretation or, as far as I’m aware, any of the other existing interpretations of QM.  So, if I’ve come upon something original, I’d like to get it into the information ether for purposes of priority, sharing ideas, collaboration, etc.

Finally, if my explanation actually is both correct and original, I would really like it to be testable and not just another interpretation of QM.  Currently, all QM interpretations are empirically indistinguishable.  While hypothetical experiments have been proposed to distinguish one or more interpretations from others (such as Deutsch’s classic paper, which aims to show how a collapse interpretation might be distinguished from a non-collapse interpretation), not only are such experiments impossible for all practical purposes in the near future, but they may actually be impossible in principle.  Of course, even if it turns out to be just another indistinguishable interpretation, it is already valuable (at least to me) from a pedagogical point of view, as it clarifies and simplifies QM in a way that I haven’t seen elsewhere.
Having said all that, here is my current understanding/explanation for QM.

Information, Facts, and Relativity of Superposition

First, let’s start with information.  If I throw a rock with my hand, then that fact – that is, the fact that the rock and my hand interacted in a particular way – gets embedded in a correlation between the rock and my hand so that they will both, in some sense, evidence the interaction/event.  The information about that fact gets carried in that correlation so that future interactions/events are consistent with that fact, which is what gives rise to predictability.  The trajectory of that rock can then be predicted and calculated relative to my hand because the information stored in the correlation between the rock and my hand guarantees that future events are consistent with that past event (and all past events).  The set of possible futures that are consistent with past events is so limited that the future trajectory of the rock can be calculated to (almost) limitless precision.  In fact, the precision to which a person could predict a rock’s trajectory is so good that it was thought until the discovery of quantum mechanics (and specifically Planck’s constant) that the physical world is perfectly deterministic.  Many (perhaps most) physicists are determinists, believing that the world evolves in a predetermined manner.  The notion of determinism is heavily related to Newtonian physics: If I know an object’s initial conditions, and I know the forces acting on it, then I can predict its future trajectory. 

This is certainly true to some degree.  However, due to chaos, the further in the future we go, the more sensitive those predictions are to the precision of the initial conditions.  So if we don’t know an object’s initial conditions to infinite precision, then it’s just a matter of time before chaos amplifies the initial uncertainty to the point of complete unpredictability.  This fascinating paper shows that that’s true even if we are looking at three massive black holes with initial conditions specified to within Planck’s constant.  Of course, QM asserts that we can’t specify initial conditions better than that, so this seems to me pretty good evidence that the universe is fundamentally indeterministic. 
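A toy illustration of that sensitivity (my own sketch, using the standard logistic map rather than the three-black-hole system of the cited paper): two initial conditions differing by one part in a trillion agree for a while, then diverge completely, because the initial uncertainty roughly doubles with each iteration.

```python
def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map x -> r*x*(1-x), a standard toy chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two initial conditions differing by one part in 10^12.
a = logistic_trajectory(0.400000000000)
b = logistic_trajectory(0.400000000001)

# Early on the trajectories are indistinguishable; by ~40 steps the tiny
# initial uncertainty has been amplified to order one, and all
# predictability is lost.
print(abs(a[5] - b[5]))                              # still tiny
print(max(abs(a[i] - b[i]) for i in range(40, 61)))  # order one
```

The punchline is the same as the paper's: unless initial conditions are specified to precision better than Planck's constant allows, long-term prediction fails.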

The thing is... why should we ever have believed that infinite precision was possible, even in principle?  Instead, the amount of information in the universe is finite, a concept reasonably well established by the entropy concept of the Bekenstein Bound, and also well articulated by Rovelli’s explanation that the possible values of an object’s location in phase space cannot be smaller than a volume that depends on Planck’s constant.  However, even if we can all agree that information in the universe is finite, there is no agreement on whether it is constant.  Most physicists seem to think it’s constant, which is in part what gives rise to the so-called black hole information paradox.

Part of the motivation for believing that information is constant in the universe is that in quantum mechanics, solutions to the Schrodinger Equation evolve linearly and deterministically with time; that is, the amount of information contained in a quantum wave state does not change with time.  Of course, the problem with this is that a quantum wave state is a superposition of possible measurement outcomes (where those possible outcomes are called “eigenstates” of the chosen “measurement operator”)... and we never observe or measure a superposition.  So either the quantum wave state at some point “collapses” into one of the possible measurement outcomes (in which case the wave state is not always linear or universally valid), or it simply appears to collapse as the superposition entangles with (and gets amplified by) the measuring device and ultimately the observer himself, so that the observer evolves into a quantum superposition of having observed mutually exclusive measurement outcomes.  This second situation is called the Many Worlds Interpretation (“MWI”) of QM.
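The claim that linear evolution conserves information can be made concrete with a small numerical sketch (my own, under the usual textbook formalism): evolution under any Hermitian Hamiltonian is unitary, so the norm of the state – and with it the state's information content – is exactly preserved.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian "Hamiltonian" for a 4-state toy system.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2

# Unitary time evolution U = exp(-iHt), built from the eigendecomposition.
t = 1.7
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 = psi0 / np.linalg.norm(psi0)
psi_t = U @ psi0

# Linear (unitary) evolution preserves the norm: nothing is created or
# destroyed, which is exactly what motivates the assumption "U".
print(np.linalg.norm(psi_t))  # 1.0 up to rounding
```

Note that nothing in this sketch tells us whether such evolution actually applies to arbitrarily large systems; it only shows why, if U holds, information would be constant.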

I regard MWI as a nonstarter and give specific reasons why it is nonsensical in the Appendix of this paper.  But there is another deeper reason why I reject MWI: it is a pseudoscientific religion that is lent credibility by many well-known scientists, among them Sean Carroll.  Essentially, neither MWI nor the concept of a Multiverse (which is mathematically equivalent to MWI, according to Leonard Susskind) is empirically testable, which already excludes them from the realm of science.  But more importantly, they both posit the concept of infinity to overcome the fine-tuning problem or Goldilocks Enigma in physics.  People like Carroll don’t like the notion that our universe (which appears “fine-tuned” for the existence of intelligent life) is extraordinarily unlikely, a fact that many (including me) suggest as evidence for the existence of a Creator.  So to overcome odds that approach zero, they simply assert (with no empirical evidence whatsoever) the existence of infinitely many worlds or universes, because 0 * ∞ = 1.  That is, infinity makes the impossible possible.  But just as anything logically follows from a contradiction, anything follows from infinity – thus, infinity is, itself, a contradiction.

Suffice it to say that I’m reasonably sure that at some point a quantum wave state stops being universal.  Call it “collapse” or “reduction” if you will, but the idea is that at some point an object goes from being in a superposition of eigenstates to one particular eigenstate.  (Later in this post, when I discuss the in-principle impossibility of Schrodinger’s Cat, it won’t actually make any difference whether wave state collapse is actual or merely apparent.)  With regard to information, some have characterized this as keeping information constant (e.g., Rovelli), as decreasing the amount of information (e.g., Aaronson), or as increasing the amount of information (e.g., Davies). 

Anyway, here’s what I think: the information in the universe is contained in correlations between entangled objects, and essentially every object is correlated directly or indirectly to every other object (i.e., universal entanglement).  That information is finite, but additional interactions/events between objects (and/or their fields) may increase the information.  (Quick note: whether or not “objects” exist at all, as opposed to just fields, doesn’t matter.  Art Hobson might say that when I throw a rock, all I experience is the electrostatic repulsion between the fields of the rock and my hand, but that doesn’t change that I can treat it as a physical object on which to make predictions about future interactions/events.)  I gave an example using classical reasoning in this post and in this paper, but the idea is very simple. 

For example, imagine a situation in which object A is located either 1cm or 2cm from object B, by which I mean that information in the universe exists (in the form of correlations with other objects in the universe) to localize object A relative to object B at a distance of either 1cm or 2cm.  (As wacky as this situation sounds, it’s conceptually identical to the classic double-slit interference experiment.)  That is, there is a fact – embedded in correlations with other objects – about objects A and B not being separated by 3cm, or 0.5cm, or 1000 light-years, etc., but there is not a fact about whether object A is separated from object B by a distance of 1cm or 2cm.  It’s got nothing to do with knowledge.  It’s not that object A is “secretly” located 1cm from object B and we just don’t know it.  Rather, there just is no fact about whether object A and B are separated by 1cm or 2cm.  (If it were simply a question of knowledge, then we wouldn’t see quantum interference.)

That’s quantum superposition.  In other words, if at some time t0 there is no fact about whether object A and B are located 1cm or 2cm apart, then they exist in a (location) superposition.   Object A would say that object B is in a superposition, just as object B would say that object A is in a superposition.  We might call this relativity of superposition.  It was in this post that I realized that a superposition of one object exists relative to another object, and both objects have the same right to say that the other object is in superposition.  Compare to Special Relativity: there is a fact about an object's speed relative to another, and each object can equally claim that the other is moving at that speed, even though it makes no sense to talk of an object's speed otherwise.  Similarly, there may be a fact about one object being in superposition relative to another, with each object correctly claiming that the other is in superposition, even though it makes no sense to talk of an object in superposition without reference to another object.

Whether or not a superposition exists is a question of fact.  If a superposition exists (because the facts of the universe are inadequate to reduce it), then the rules of quantum mechanics, which depend on interference, apply to probabilistic predictions; if a superposition does not exist, then ordinary classical probability will suffice because interference terms vanish.  If at some time t1 an event occurs that correlates objects A and B in a way that excludes the possibility of their being separated by 2cm, then that correlating event is new information in the universe about object A being located 1cm from object B.  Importantly, that information appears at time t1 and does not retroactively apply.  We cannot now say that objects A and B were in fact separated by 1cm at time t0 but we simply didn’t know.  Indeed, this is the very mistake often made in the foundations of physics that I addressed in this soon-to-be-published paper.  Said another way, if an object’s superposition is simply the lack of a relevant fact, then the measurement of that object in a manner (a “measurement basis”) that reduces the superposition is new information.  By “measurement,” I simply mean the entanglement of that object with other objects in the universe that are already well correlated to each other.
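The difference between the two probability rules is easy to show numerically. In this toy sketch of the two-path example (my own illustration), amplitudes add before squaring when no correlating fact exists, producing an interference term; when a which-path fact exists, probabilities add and the cross-term vanishes.

```python
import numpy as np

def intensity_quantum(phi):
    """Relative detection intensity when NO fact exists about which
    alternative occurred: amplitudes add first, then we square."""
    a1 = 1 / np.sqrt(2)
    a2 = np.exp(1j * phi) / np.sqrt(2)
    return abs(a1 + a2) ** 2  # = 1 + cos(phi): interference fringes

def intensity_classical(phi):
    """Relative intensity when a which-path fact exists (superposition
    reduced): probabilities add, and the cross-term is gone."""
    return abs(1 / np.sqrt(2)) ** 2 + abs(np.exp(1j * phi) / np.sqrt(2)) ** 2  # = 1

phis = np.linspace(0, 2 * np.pi, 9)
print([round(intensity_quantum(p), 3) for p in phis])    # oscillates between 0 and 2
print([round(intensity_classical(p), 3) for p in phis])  # flat: all 1.0
```

The phase phi stands in for the path-length difference (1 cm vs. 2 cm); the fringes in the first output are what an interference experiment on an ensemble would reveal.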

By the way, I have no idea how or why the universe produces new information when an event reduces a quantum superposition, but this is not a criticism of my explanation.  Either new information arises in the universe or it doesn’t, but since we have no scientific explanation for the existence of any information, I don’t see how the inexplicable appearance of all information at the universe’s birth is any more intellectually satisfying than the inexplicable appearance of information gradually over time.

I should also mention that when I say “superposition,” I almost always mean in the position basis.  The mathematics of QM is such that a quantum state (a unit-length ray in Hilbert space) can be projected onto any basis so that an eigenstate in one basis is actually a superposition in another basis.  However, the mathematics of QM has failed to solve many of the big problems in the foundations of physics and has arguably rendered many of them insoluble (at least if we limit ourselves to the language and math of QM).  I have far more confidence in the explanatory powers of logic and reason than the equations of QM.  So even though I fully understand that, mathematically, every pure quantum state is a superposition in some basis, when I say an object is in superposition, I nearly always mean that it is in a location or position superposition relative to something else.  There are lots of reasons for this choice.  First, I don’t really understand how a measurement can be made in anything but the position basis; other scholars have made the same point, so I’m not alone.  We typically measure velocity, for example, by measuring location at different times.  We could measure the velocity of an object by bouncing light off it and measuring its redshift, but without giving it a great deal of thought, I suspect that even measuring redshift ultimately comes down to measuring the location of some object that absorbs or scatters from the redshifted photon.  And when I say that we only measure things in the position basis, I don’t just mean in a lab... our entire experience all comes down to the localization of objects over time.  In other words, the most obvious way (and arguably the only way) to imagine a quantum superposition is in the position basis.
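For readers who want the basis-dependence point in concrete form, here is a minimal two-state sketch (my own illustration): a state that is a definite eigenstate in one basis is an equal-weight superposition in another.

```python
import numpy as np

# A state that is a definite eigenstate in one basis...
psi = np.array([1.0, 0.0])  # "spin up" in the z basis

# ...is an equal-weight superposition in another.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # change to the x (Hadamard) basis
psi_x = H @ psi

print(psi_x)              # [0.707, 0.707]: a 50/50 superposition
print(np.abs(psi_x) ** 2) # [0.5, 0.5]
```

Mathematically both descriptions are the same ray in Hilbert space; my point in the text is simply that the position basis is the one in which measurement, experience, and decoherence actually operate.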

Second, the universe has clearly already chosen the position basis as a preferred basis.  Objects throughout the universe are already well localized relative to each other.  When an object exists in a (location) superposition, other objects and fields are constantly bathing that object to localize it and thereby reduce or decohere the superposition in the position basis.  In fact, it is the localizing effects of objects and fields throughout the universe that makes the creation of (location) superpositions of anything larger than a few atoms very difficult.  The concept of decoherence can explain why superpositions tend to get measured in the basis of their dominating environment, but does not explain why the universe chose the position basis to impose on superpositions in the first place.  Nevertheless, there is something obviously special about the position basis.

Transitivity of Correlation

Because information exists in the form of correlations between objects that evidence the occurrence of past events (and correspondingly limit future possible events), that information exists whether or not observers know it about their own subsystem, a different correlated subsystem, or even a different uncorrelated subsystem.

Consider an object A that is well correlated in location to an object B, by which I mean that relative to object A, there is a fact about the location of object B (within some tolerance, of course) and object B is not in a location superposition relative to object A.  (Conversely, relative to object B, there is a fact about the location of object A and object A is not in a location superposition relative to object B.)  Object A may be well correlated to object B whether or not object A “knows” the location of B or can perform an interference experiment on an adequate sampling of identically prepared objects to show that object B is not in a location superposition relative to object A.  The means by which objects A and B became well correlated is irrelevant, but may be due to prior interactions with each other and each other’s fields (electromagnetic, gravitational, etc.), mutual interaction with other objects and their fields, and so forth.  Now consider an object C that is well correlated in location to object B; object C must also be well correlated to object A.  That is, if object C is not in a location superposition relative to object B, then it is not in a location superposition relative to object A, whether or not object A “knows” anything about object C or can perform an interference experiment to test whether object C is in a location superposition relative to object A.

I’ll call this notion the transitivity of correlation.  It seems insanely obvious to me, but I can’t find it in the academic literature.  Consider a planet orbiting some random star located a billion light years away.  I certainly have never interacted directly with that planet, and I may have never even interacted with an object (such as a photon) that has been reflected or emitted by that planet.  Nevertheless, that planet is still well localized to me; that is, there is a fact about its location relative to me to within some very, very tiny Planck-level fuzziness.  I don’t know the facts about its location, but if I were to measure it (to within some tolerance far exceeding quantum uncertainty), classical calculations would suffice.  I would have no need of quantum mechanics because it is well correlated to me and not in a superposition relative to me.  This is true because of the transitivity of correlation: the planet is well correlated to its sun, which is well correlated to its galaxy, which is well correlated to our galaxy, which is well correlated to our sun, etc.

The thing is – everything in the universe is already really, really well correlated, thanks to a vast history of correlating events, the evidence of which is embedded in mutual entanglements.  But for the moment let’s imagine a subsystem A that includes a bunch of well-correlated objects (including object 1) and a subsystem B that includes its own bunch of well-correlated objects (including object 2), but the two subsystems are not themselves well correlated.  In other words, they are in a superposition relative to each other because information does not exist to correlate them.  From the perspective of an observer in A, the information that correlates the objects within B exists but is unknown, while information that would well-correlate objects 1 and 2 does not exist.  However, events/interactions between objects 1 and 2 create new information to reduce their relative superpositions and make them well correlated.  Then, because objects in A are already well correlated to object 1, while objects in B are already well correlated to object 2, the events that correlate objects 1 and 2 correspondingly (and “instantaneously”) correlate all the objects in subsystem A to all the objects in subsystem B.
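One way to visualize transitivity of correlation and the two-subsystem scenario is as connectivity in a graph. This is only a loose classical analogy of my own, not a formalism from the literature: nodes are objects, edges are correlating facts embedded by past events, and two objects are "well correlated" when some chain of edges links them.

```python
from collections import deque

# Edges represent correlating facts embedded in past interactions.
edges = [
    ("me", "Earth"), ("Earth", "Sun"), ("Sun", "Milky Way"),
    ("Milky Way", "distant galaxy"), ("distant galaxy", "far planet"),
    ("P1", "P2"),  # an isolated subsystem: two mutually correlated particles
]

def correlated(a, b, edges):
    """True if a chain of correlating facts links objects a and b."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
        graph.setdefault(v, set()).add(u)
    seen, queue = {a}, deque([a])
    while queue:
        node = queue.popleft()
        if node == b:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(correlated("me", "far planet", edges))  # True: transitivity of correlation
print(correlated("P1", "P2", edges))          # True: facts exist WITHIN the subsystem
print(correlated("me", "P1", edges))          # False: no fact correlates the subsystems
```

Adding a single edge ("me", "P1") – one correlating event between the subsystems – instantly links every object in one component to every object in the other, which is the non-spooky reading of "spooky action at a distance" that I describe next.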

This is a paraphrasing of what Einstein called “spooky action at a distance” (and also what many scholars have argued is a form of weird or impermissible nonlocality in QM).  But explained in the manner above, I don’t find this spooky at all.  Rather, from the perspective of an observer in subsystem A, unknown facts of past events are embedded in entanglements within subsystem B, while there simply are no facts (or perhaps inadequate facts) to correlate subsystems A and B.  Once those facts are newly created (not discovered, but created) through interactions between objects 1 and 2, the preexisting facts between objects in subsystem B become (new) facts between objects in both subsystems.  Let me say it another way.  An observer OA in subsystem A is well correlated to object 1, and an observer OB in subsystem B is well correlated to object 2, but they are not well correlated to each other; i.e., they can both correctly say that the other observer is in a superposition.  When object 1 becomes well correlated to object 2, observer OA becomes well correlated to observer OB.  This correlation might appear “instantaneous” with the events that correlate objects 1 and 2, but there’s nothing spooky or special-relativity-violating about this.  Rather, observer OA was already well correlated to object 1 and observer OB was already well correlated to object 2, so they become well correlated to each other upon the correlation of object 1 to object 2.

Of course, because everything in the universe is already so well correlated, the above scenario is only possible if one or both of the subsystems are extremely small.  The universe acts like a superobserver “bully” constantly decohering quantum superpositions, and in the process creating new information, in its preferred (position) basis.  Still, if the universe represents subsystem A, then a small subsystem B can contain its own facts (i.e., embedded history of events) while being in superposition relative to A.  Imagine subsystem B containing two correlated particles P1 and P2 – e.g., they have opposite spin (or opposite momentum).  When the position of particle P1 is then correlated (e.g., by detection, measurement, or some other decoherence event) to objects in subsystem A (i.e., the rest of the universe), that position corresponds to a particular spin (or momentum).  But because particle P1 was already correlated to its entangled particle P2, the spin (or momentum) of particle P2 is opposite, a fact that preceded detection of particle P1 by the universe.  That fact will be reflected in any detection of particle P2 by the universe.  Further, facts do not depend on observer status.  An observer relative to particles P1 and P2 has as much right to say that the universe was in superposition and that the correlation event (between P1 and objects in the rest of the universe) reduced the superposition of the universe. 

This explanation seems so obviously correct that I don’t understand why, in all my reading and courses in QM, no one has ever explained it this way.  To buttress the notion that facts can exist in an uncorrelated subsystem (and that measurement of that subsystem by a different or larger system creates correlating facts to that subsystem but not within that subsystem), consider this.  As I walk around in the real world, thanks to natural quantum dispersion, I am always in quantum superposition relative to the rest of the world, whether we consider my center of mass or any other measurable quantity of my body.  Not by much, of course!  But given that the universe does not contain infinite information, there must always be some tiny superposition fuzziness between me and the universe – yet that doesn’t change the fact that my subsystem includes lots of correlating facts among its atoms.

Killing Schrodinger’s Cat

I tried to explain in this draft paper why Schrodinger’s Cat and Wigner’s Friend are impossible in principle, but the explanation still eludes the few who have read it.  The following explanation will add to the explanation I gave in that draft paper.  The idea of SC/WF is that there is some nonzero time period in which some macroscopic system (e.g., a cat) is in an interesting superposition (e.g., of states |dead> and |alive>) relative to an external observer.  It is always (ALWAYS) asserted in the academic literature that while detecting such a superposition would be difficult or even impossible in practice, it is possible in principle. 

Let’s start with some object in superposition over locations A and B, so that from the perspective of the box (containing the SC experiment) and the external observer, its state is the (unnormalized) superposition |A> + |B>.  However, from the perspective of the object, the box is in a superposition of being located in positions W and X in corresponding states |boxW> and |boxX> and the observer is in a superposition of being located in positions Y and Z in corresponding states |obsY> and |obsZ>.  But remember that the box and observer are, at the outset of the experiment, already well correlated in their positions, which means that from the object’s perspective, the system is in state |boxW>|obsY> + |boxX>|obsZ>.  When the object finally gets correlated to the box, it “instantly” and necessarily gets correlated to the observer.  It makes no difference whether the quantum wave state actually collapses/reduces or instead evolves linearly and universally.  Either way – whether the wave state remains |boxW>|obsY> + |boxX>|obsZ> or collapses into, say, |boxW>|obsY> – there is never an observer who is not yet correlated to the box and who can do an interference experiment on it to confirm the existence of a superposition.  Once the object “measures” the position of the box, it inherently measures the position of the observer, which means that there is never an observer for whom the box is in a superposition (unless the box is already in a superposition relative to the observer, which, as I pointed out in the draft paper, is impossible because of very fast decoherence of macroscopic objects).
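The claim that correlating the object to the box "instantly" correlates it to the observer can be checked with a small state-vector sketch (my own, in the obvious notation): because box and observer start out perfectly correlated, conditioning on the box's position fixes the observer's position too.

```python
import numpy as np

# Basis ordering: (object, box, observer), each two-dimensional.
# |A> = index 0, |B> = 1; |boxW> = 0, |boxX> = 1; |obsY> = 0, |obsZ> = 1.
psi = np.zeros((2, 2, 2))
psi[0, 0, 0] = 1 / np.sqrt(2)  # |A>|boxW>|obsY>
psi[1, 1, 1] = 1 / np.sqrt(2)  # |B>|boxX>|obsZ>

probs = psi ** 2  # joint outcome probabilities (amplitudes are real here)

# Condition on the object finding the box at position W:
branch = psi[:, 0, :]                  # box = W slice
branch = branch / np.linalg.norm(branch)
obs_probs = (branch ** 2).sum(axis=0)  # marginal over the object

print(obs_probs)  # [1. 0.]: given box at W, the observer is certainly at Y
```

There is no branch of the state in which the box's position is fixed while the observer's remains open, which is the formal content of the argument above: no observer ever exists for whom the box is in a superposition.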

In Eq. 1 of the draft paper, I show a simple von Neumann progression, with each arrow (→) representing a time evolution.  If that progression is right, then there is a moment when the measuring system is in superposition but the observer is not.  The problem with that conclusion, as I’ve been trying to explain, is that because the observer is already well correlated to the measuring system (and the box, cat, etc.) to within a tolerance far better than the distance separating the object’s eigenstates |n>, correlation of the object’s location to that of the measuring device “instantly” correlates the observer and the rest of the universe.  Once there is a fact (and thus no superposition) about the object’s location relative to the measuring device, there is also a fact (and thus no superposition) about the object’s location relative to the observer.  There is no time at which the observer can attempt an interference experiment to confirm that the box (or cat or measuring device, etc.) is in a superposition.
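The same point can be phrased in terms of what an interference experiment could ever reveal.  In the sketch below (a standard decoherence-style calculation, not taken from the draft paper), the hypothetical intermediate step of the von Neumann chain – measuring device correlated, observer not – leaves off-diagonal coherence in the object+device density matrix, which an interference experiment could in principle detect.  If the device-to-observer correlation is not a separate later step, that coherence is already gone:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def kron(*vs):
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

def cnot(n, control, target):
    """Permutation matrix for a CNOT on an n-qubit register (qubit 0 = leftmost)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1.0
    return U

def rho_object_device(psi):
    """Density matrix of (object, device) with the observer traced out."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)
    return np.trace(rho, axis1=2, axis2=5).reshape(4, 4)

# Register order: (object, device, observer).  Object starts in superposition.
psi0 = kron((ket0 + ket1) / np.sqrt(2), ket0, ket0)

# Hypothetical intermediate step of Eq. 1: device correlated, observer not.
psi_mid = cnot(3, 0, 1) @ psi0
# The post's claim: the device->observer correlation is not a separate later step.
psi_fin = cnot(3, 1, 2) @ psi_mid

coh_mid = rho_object_device(psi_mid)[0, 3]   # <00|rho|11> coherence term
coh_fin = rho_object_device(psi_fin)[0, 3]

print(coh_mid, coh_fin)   # ~0.5 at the (hypothetical) intermediate step, ~0 after
```

The coherence term is what an interference experiment on the object+device pair would probe; once the observer is part of the correlation, it vanishes, so there is no moment at which such an experiment could confirm the superposition.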

Therefore, the SC and WF experiments are impossible in principle.  An MWI apologist might retort that Schrodinger’s cat is, indeed, in a superposition, along with the observer.  But since there is no time at which an interference experiment could be performed, even in principle, to confirm this, their claim is both unscientific and useless.  They may as well say that unicorns exist but are impossible to detect.

Other Thoughts and Implications

CCCH.  In this post, I heavily criticized a paper that incorrectly asserted that the consciousness-causes-collapse hypothesis (“CCCH”) has already been empirically falsified.  I did so because I despise bad logic and bad science, not because I endorse CCCH.  Indeed, if my explanation of QM is correct, then collapse (or apparent collapse) of the quantum wave function has nothing to do with consciousness but is rather the result of new information produced from correlating interactions with the environment/universe.

The Damn Cat.  What I haven’t discussed so far are “interesting” evolutions due to amplifications of quantum events.  The argument above, which is an extension of the argument in my draft paper on Schrodinger’s cat, is essentially that quantum amplification of a tiny object in superposition cannot place the macroscopic box containing SC in a superposition relative to the observer any more easily than the box itself can naturally disperse (via quantum uncertainty) relative to the observer.  And decoherence by objects and fields in the universe makes adequate dispersion of the box impossible even in principle.  But there’s more.  If I am right in my explanation of QM, then interacting subsystems will agree on the facts embedded in their correlations.  For example, if subsystem B is in a superposition of being located 1cm and 2cm from subsystem A, and an observer in subsystem A “measures” B at a distance of, say, 1cm, then from the perspective of an observer in subsystem B, B also measured A at a distance of 1cm.

In the case of SC (and WF), I analyzed them in terms of the fuzziness of objects: for example, a particular carbon atom that would have been located in the live cat’s brain being instead located in the dead cat’s tail, which requires quantum fuzziness spanning a meter or so.  But of course SC is far more interesting: in one case there is a live cat and in another case there is a dead cat, represented by a vastly different set of correlations among its constituent atoms. 

In order for a SC superposition to actually be produced relative to the external observer (and the universe to which he is well correlated), it must be the case that a “comparable” superposition of the universe is produced relative to SC.  Then, when an event occurs that correlates the two systems – which, remember, can ultimately be traced back to the location of some tiny particle at A or B, separated by some tiny distance – observers in each of the two systems will agree on the correlations.  So let’s say we’ve somehow created a SC superposition.  We wait a few minutes, pop a bottle of champagne to celebrate the amazing feat, and then we open the box to look, at which point we see a live cat – that is, we (and the rest of the universe) become correlated with |live>.  But since the correlations among the atoms in the box exist before we open the box, the hypothetical state |dead> must be correlated with a universe that would have seen that exact set of facts as a dead cat.  How does one measure a live cat as dead?  Remember, we are not just talking about measuring a heartbeat, etc.  We are talking about a universe that is constantly inundating the cat’s atoms in a way such that every observer in that universe would observe a dead cat.  That is, if it were possible to produce a cat in a superposition of |dead> and |alive> inside a box, then from the perspective of an observer inside the box, the universe would have to be in a superposition of being in a state that would measure a set of atoms as being a dead cat and another state that would measure the same set of atoms as being a live cat.  Ridiculous.