__Background of Schrodinger’s Cat__

Quantum mechanics stops where classical probabilities start. In the classical world, we work with probabilities directly, while in quantum mechanics, we work with probability *amplitudes*, which are *complex* numbers (involving that weird number i), before applying the Born rule, which requires squaring the norm of the amplitude to arrive at a probability. If you don’t know anything about quantum mechanics, this may sound like gibberish, so here’s an example showing how quantum mechanics defies the rules of classical probability.
I shine light through a tiny hole toward a light detector
and take a reading of the light intensity.
Then I repeat the experiment, the only difference being that I’ve
punched a second tiny hole next to the first one. Classical probability (and common sense!) tells us that the detector should measure *at least* the same light intensity as before, but probably more. After all, by adding another hole, surely we are allowing *more* light to reach the detector... right?! Nope. We could actually measure *less* light, because through the process of interference among eigenstates in a superposition, quantum mechanics screws up classical probability. In some sense, the violation of classical probability, which tends to happen only in the microscopic world, is really what QM is all about. And when I say “microscopic,” what I really mean is that the largest object on which an interference experiment has been performed (thus demonstrating QM effects) is a molecule of a few hundred amu, which is much, much, much, much smaller than can be seen with the naked eye or even a light microscope. So we have no direct empirical evidence that the rules of QM even apply to macroscopic objects.
Having said that, many physicists and philosophers insist
that there’s no limit “in principle” to the size of an object in quantum
superposition. The question I’ve been
wrestling with for a very long time is this: is there an actual dividing line
between the “micro” and “macro” worlds at which QM is no longer
applicable? The “rules” of quantum
mechanics essentially state that when one quantum object interacts with
another, they just entangle to create a bigger quantum object – that is, until
the quantum object becomes big enough that normal probability rules apply,
and/or when the quantum object becomes entangled with a “measuring device”
(whatever the hell that is). The
so-called measurement
problem, and the ongoing debates regarding demarcation between “micro” and “macro,”
have infested physics and the philosophy of quantum mechanics for the past
century.

And no thought experiment better characterizes this
infestation than the obnoxiously annoying animal called Schrodinger’s Cat. The idea is simple: a cat is placed in a box
in which the outcome of a tiny measurement gets amplified so that one outcome
results in a dead cat while the other outcome keeps the cat alive. (For example, a Geiger counter measures a
radioisotope so that if it “clicks” in a given time period, a vial of poison is
opened.) Just before we open the box at time t_{0}, there’s been enough time for the poison to kill the cat, so we should expect to see either a live or dead cat. Here’s the kicker: the “tiny measurement” is on an object that is in quantum superposition, to which the rules of classical probability don’t apply.
So does the quantum superposition grow and eventually
entangle with the cat, in which case, just prior to time t_{0}, the cat is itself in a superposition of “dead” and “alive” states (and to which the rules of classical probability do not apply)? Or does the superposition, before entangling with the cat, reduce to a probabilistic mixture, such as through decoherence or collapse of the wave function? And what the hell is the difference? If the cat is in a superposition just prior to time t_{0}, then there just is no objective fact about whether the cat is dead or alive, and our opening of the box at t_{0} is what decoheres (or collapses or whatever) the entangled wave state, allowing the universe to then randomly *choose* a dead or live cat. However, if the cat is in a mixed state just prior to t_{0}, then there *is* an objective fact about whether it is dead or alive – but we just don’t *know* the fact until we open the box. So the question really comes down to this: do we apply classical probability or quantum mechanics to Schrodinger’s Cat? Or, to use physics terminology, the question is whether, just prior to opening the box, Schrodinger’s Cat is in a coherent superposition or a probabilistic mixed state.

__Why is this such a hard question?__

It’s a hard question for a couple of reasons. First, remember that QM is about statistics. We never see superpositions. **The outcome of every individual trial of every experiment ever performed in the history of science has been consistent with the absence of quantum superpositions.** Rather, superpositions are *inferred* when the outcomes of many, many trials of an experiment on “identically prepared” objects don’t match what we would have expected from normal probability calculations. So if the only way to empirically distinguish between a quantum cat and a classical cat requires doing lots of trials on physically identical cats... ummm... how exactly do we create physically identical cats? Second, the experiment itself must be an “interference” experiment that allows the eigenstates in the wave state to interfere, thus changing normal statistics into quantum statistics. This is no easy task in the case of Schrodinger’s Cat, and you can’t just do it by opening the box and looking, because the probabilities of finding the cat dead or alive will be the same whether or not the cat was in a superposition just prior to opening the box. So doing lots of trials is not enough – they must be trials of the right kind of experiment – i.e., an interference experiment. And in all my reading on SC, I have never – not once – encountered anything more than a simplistic, hypothetical mathematical treatment of the problem. “All you have to do is measure the cat in the basis {(|dead> + |live>)/√2, (|dead> - |live>)/√2}! Easy as pie!” But the details of actually setting up such an experiment are so incredibly, overwhelmingly complicated that it’s unlikely to be physically possible, even in principle.
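The point that opening the box can’t distinguish the two cases, while an interference-basis measurement can, is easy to verify with a toy two-state calculation. The sketch below (labels like `dead`/`live` are just illustrative basis vectors, not a claim that such a measurement is feasible) compares the density matrices of a coherent superposition and a 50/50 mixture:

```python
import numpy as np

# Illustrative basis: |dead> -> [1, 0], |live> -> [0, 1].
dead = np.array([1.0, 0.0])
live = np.array([0.0, 1.0])

# Candidate 1: coherent superposition (|dead> + |live>)/sqrt(2), as a density matrix.
psi = (dead + live) / np.sqrt(2)
rho_pure = np.outer(psi, psi)

# Candidate 2: classical 50/50 probabilistic mixture.
rho_mixed = 0.5 * np.outer(dead, dead) + 0.5 * np.outer(live, live)

# "Opening the box" = measuring in the {|dead>, |live>} basis.
# Both states give identical 50/50 statistics -- indistinguishable this way.
print(rho_pure[0, 0], rho_mixed[0, 0])    # 0.5 vs 0.5

# An interference measurement in the {(|dead>+|live>)/sqrt(2), (|dead>-|live>)/sqrt(2)}
# basis is what separates them.
plus = (dead + live) / np.sqrt(2)
p_plus_pure = plus @ rho_pure @ plus      # 1.0: the superposition always lands on "+"
p_plus_mixed = plus @ rho_mixed @ plus    # 0.5: the mixture stays 50/50
print(p_plus_pure, p_plus_mixed)
```

The math is trivial; the essay’s point is that physically realizing the second measurement on a cat is anything but.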
There’s a further complication. If SC is indeed in a quantum superposition
prior to t_{0}, then there is no fact about whether the cat is dead or alive. But don’t you think the cat would disagree? OK, so if you believe cats don’t think, an identical thought experiment involving a human is called Wigner’s Friend: physicist Eugene Wigner has a friend who performs a measurement on a quantum object in a closed, isolated lab. Just before Wigner opens the door to ask his friend about the outcome of the measurement, is his friend in a superposition or a mixed state? If Wigner’s Friend is in a superposition, then that means there is no fact about which outcome he measured, but surely he would disagree! Amazingly, those philosophers who argue that WF is in a superposition actually *agree* that when he eventually talks to Wigner, he will insist that he measured a particular outcome, and that he remembers doing the measurement, and so forth, so they have to invent all kinds of fanciful ideas about memory alteration and erasure, retroactive collapse, etc., etc. All this to continue to justify the “in-principle” possibility of an absolutely ridiculous thought experiment that has done little more than confuse countless physics students.
I’m so tired of this.
I’m so tired of hearing about Schrodinger’s Cat and Wigner’s
Friend. I’m so tired of hearing the
phrase “possible in principle.” I’m so
sick of long articles full of quantum mechanics equations that “prove” the
possibility of SC without any apparent understanding of the limits to those
equations, the validity of their assumptions, or the extent to which their
physics analysis has any foundation in the actual observable physical
world. David Deutsch’s classic
paper is a prime example, in which he uses lots of “math” to “prove” not
only that the WF experiment can be done, but that WF can actually send a
message to Wigner, prior to t_{0}, that is uncorrelated to the measurement outcome. Then, in a couple of sentences in Section 8.1, he casually mentions that his analysis assumes that: a) computers can be conscious; and b) Wigner’s Friend’s lab can be sufficiently isolated from the rest of the universe. Assumption a) is totally unfounded, which I discuss in this paper and this paper and in this post and this post, and I’ll refute assumption b) now.

__Why the Schrodinger Cat experiment is not possible, even in principle__

Let me start by reiterating the meaning of superposition:
a quantum superposition represents a lack of objective fact. I’m sick of hearing people say things like
“Schrodinger’s Cat is partially alive and partially dead.” No.
That’s wrong. Imagine an object
in a superposition state |A> + |B>.
As soon as an event occurs that correlates one state (and not the other)
to the rest of the universe (or the “environment”), then the superposition no
longer exists. That event could consist
of a single photon that interacts with the object in a way that distinguishes
the eigenstates |A> and |B>, even if that photon has been traveling
millions of years through space prior to interaction, and continues to travel
millions of years more through space after interaction. The mere fact that evidence that
distinguishes |A> from |B> exists is enough to decohere the superposition
into one of those eigenstates.

In the real world there could never be a SC superposition
because a dead cat interacts with the universe in very different (and
distinguishable) ways from a live cat... imagine the trillions of impacts per
second with photons and surrounding atoms that would differ depending on the
state of the cat. Now imagine that all
we need is ONE such impact and that would immediately destroy any potential
superposition. (I pointed out in this
post how a group of researchers showed that relativistic time dilation on
the Earth’s surface was enough to prevent macroscopic superpositions!) And that’s why philosophers who discuss the
possibility of SC often mention the requirement of “thermally isolating”
it. What they mean is that we have to
set up the experiment so that not even a single photon can be emitted,
absorbed, or scattered by the box/lab in a way that is correlated to the cat’s
state. This is impossible in practice;
however, they claim it is possible in principle. In other words, they agree that decoherence
would kill the SC experiment by turning SC into a normal probabilistic mixture,
but claim that decoherence can be prevented by the “in-principle possible” act
of thermally isolating it.

Wrong.

In the following analysis, all of the superpositions will
be location superpositions. There are
lots of different types of superpositions, such as spin, momentum, etc., but every actual measurement in the real world is arguably a position
measurement (e.g., spin measurements are done by measuring *where* a particle lands after its spin interacts with a magnetic field). So here’s how I’ll set up my SC thought experiment. At time t_{0}, the cat, measurement apparatus, box, etc., are thermally isolated so that (somehow) no photons, correlated to the rest of the universe, can correlate to the events inside the box and thus prematurely decohere a quantum superposition. I’ll even go a step further and place the box in deep intergalactic space where the spacetime has essentially zero curvature, to prevent the possibility that gravitons could correlate to the events inside the box and thus gravitationally decohere a superposition. I’ll also set it up so that, when the experiment begins at t_{0}, a tiny object is in a location superposition |A> + |B>, where eigenstates |A> and |B> correspond to locations A and B separated by distance D. (I’ve left out coefficients, but assume they are equal.) The experiment is designed so that the object remains in superposition until time t_{1}, when the location of the object is measured by amplifying the quantum object with a measuring device, so that measurement of the object at location A would result in some macroscopic mass (such as an indicator pointer of the measuring device) being located at position M_{A} in state |M_{A}>, while a measurement at location B would result in the macroscopic mass being located at position M_{B} in state |M_{B}>. Finally, the experiment is designed so that location of the macroscopic mass at position M_{A} would result, at later time t_{2}, in a live cat in state |live>, while location at position M_{B} would result in a dead cat in state |dead>. Here’s the question: at time t_{2}, is the resulting system described by the superposition |A>|M_{A}>|live> + |B>|M_{B}>|dead>, or by the mixed state of 50% |A>|M_{A}>|live> and 50% |B>|M_{B}>|dead>?
First of all, I’m not sure why decoherence doesn’t
immediately solve this problem. At time t_{0}, the measuring device, the cat, and the box are already well correlated with each other; the only thing that is not well correlated is the tiny object. In fact, that’s not even true... the tiny object is well correlated to everything in the box in the sense that it will NOT be detected in locations X, Y, Z, etc.; instead, the only lack of correlation (and lack of fact) is whether it is located at A or B. But as soon as anything in the box correlates to the tiny object’s location at A or B, then a superposition no longer exists and a mixed (i.e., non-quantum) state emerges. So it seems to me that the superposition has already decohered at time t_{1}, when the measuring device, which is already correlated to the cat and box, entangles with the tiny object. In other words, it seems logically necessary that at t_{1}, the combination of object with measuring device has already reduced to the mixed state 50% |A>|M_{A}> and 50% |B>|M_{B}>, so clearly by later time t_{2} the cat is, indeed, either dead or alive and not in a quantum superposition.
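The standard decoherence bookkeeping here is just a partial trace. As a minimal sketch (modeling the object and the pointer as two-state systems, which is of course a cartoon of a real measuring device): once the device has entangled with the object, the object on its own is exactly the 50/50 mixture, with all coherences gone.

```python
import numpy as np

# Object states |A>, |B> and pointer states |M_A>, |M_B> as 2-d basis vectors.
A, B = np.array([1.0, 0.0]), np.array([0.0, 1.0])
MA, MB = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# At t1 the device has entangled with the object: (|A>|M_A> + |B>|M_B>)/sqrt(2).
psi = (np.kron(A, MA) + np.kron(B, MB)) / np.sqrt(2)
rho_total = np.outer(psi, psi)   # 4x4 density matrix; the whole system is still pure

# The object alone: trace out the pointer (sum over its basis states).
rho_obj = rho_total.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_obj)
# Off-diagonal coherences are exactly zero: 50% |A><A| + 50% |B><B|,
# a mixture rather than a superposition, relative to anything correlated with the pointer.
```

The interpretive question in the text is whether this "relative to the box" mixture also counts as a mixture relative to the outside universe.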
Interestingly, even before t

_{1}, the gravitational attraction by the cat might actually decohere the superposition! If the tiny object is a distance R>>D from the cat having mass M_{cat}, then the differential acceleration on the object due to its two possible locations relative to the cat is approximately GM_{cat}D/2R^{3}. How long will it take for the object to then move a measurable distance δx? For a 1kg cat located R=1m from the tiny object, t ≈ 170000 √(δx/D), where t is in seconds. If we require the tiny object to traverse the entire distance D before we call it “measurable” (which is ridiculous but provides a limiting assumption), then t ≈ 170000 s. However, if we allow motion over a Planck length to be “measurable” (which is what Mari et al. assume!), and letting D be something typical for a double slit experiment, such as 1μm, then t ≈ 1ns. (This makes me wonder how much gravity interferes with maintaining quantum superpositions in the growing field of quantum computing, and whether it will ultimately prevent scalable, and hence useful, quantum computing.)
Gravitational decoherence or not, it seems logically
necessary to me that by time t_{1}, the measuring device has already decohered the tiny object’s superposition. I’m not entirely sure how a proponent of SC would reply, as very few papers on SC actually mention decoherence, but I assume the reply would be something like: “OK, yes, decoherence has happened relative to the box, but the box is thermally isolated from the universe, so the superposition has not decohered relative to the universe and outside observers.” Actually, I think this is the only possible objection – but it is wrong.
When I set up the experiment at time t_{0}, the box (including the cat and measuring device inside) was already extremely well correlated to me and the rest of the universe. Those correlations don’t magically disappear by “isolating.” In fact, Heisenberg’s Uncertainty Principle (HUP) tells us that correlations are quite robust and long-lasting, and the development of quantum “fuzziness” becomes more and more difficult as the mass of an object increases: Δx(mΔv) ≥ ℏ/2.
Let’s start by considering a tiny dust particle, which is
much, much, much larger than any object that has currently demonstrated quantum
interference. We’ll assume it is a 50μm diameter
sphere with a density of 1000 kg/m^{3}, and that an impact with a green photon (λ ≈ 500nm) has just localized it. How long will it take for its location fuzziness to exceed a distance D of, say, 1μm? Letting Δv = ℏ/(2mΔx) ≈ 1 x 10^{-17} m/s, it would take 10^{11} seconds (around 3200 years) for the location uncertainty to reach a spread of 1μm. In other words, if we sent a dust particle into deep space, its location relative to other objects in the universe is so well defined, due to its correlations to those objects, that it would take several millennia for the universe to “forget” where the dust particle is within a resolution of 1μm. Information would still exist to localize the dust particle to a resolution of around 1μm, but not less. But this rough calculation depends on a huge assumption: that no new correlation information is created in that time! In reality, the universe is full of particles and photons that constantly bathe (and thus localize) objects. I haven’t done the calculation to determine just how many localizing impacts a dust particle in deep space could expect over 3200 years, but it’s more than a handful. So there’s really no chance for a dust particle to become delocalized relative to the universe.
So what about the box containing Schrodinger’s Cat? I have absolutely no idea how large the box
would need to be to “thermally isolate” it so that information from inside does
not leak out – probably enormous so that correlated photons bouncing around
inside the box have sufficient wall thickness to thermalize before being
exposed to the rest of the universe – but for the sake of argument let’s say
the whole experiment (cat included) has a mass of a few kg. It will now take 10^{11} times longer, or around 300 trillion years – or 20,000 times longer than the current age of the universe – for the box to become delocalized from the rest of the universe by 1μm, assuming it can somehow avoid interacting with even a single stray photon passing by. Impossible. (Further, I neglected gravitational decoherence due to interaction with other objects in the universe, but 300 trillion years is a long time. Gravity may be weak, but it's not *that* weak!)
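Both the dust-particle and box estimates follow from the same one-line HUP calculation. The sketch below takes the photon wavelength as the initial localization Δx (an assumption on my part; a different choice of Δx shifts the prefactor, but the orders of magnitude and, crucially, the mass scaling are the same):

```python
import math

hbar = 1.055e-34  # reduced Planck constant, J*s

def delocalization_time(mass, delta_x0, spread):
    """Time for the HUP momentum uncertainty, delta_v = hbar / (2*m*delta_x0),
    of an object localized to delta_x0 to spread its position by `spread`."""
    delta_v = hbar / (2 * mass * delta_x0)
    return spread / delta_v

wavelength = 500e-9  # green photon; assumed initial localization scale
D = 1e-6             # 1 micrometer target spread

# 50-micron-diameter dust grain, density 1000 kg/m^3:
r = 25e-6
m_dust = 1000 * (4 / 3) * math.pi * r**3        # ~6.5e-11 kg
t_dust = delocalization_time(m_dust, wavelength, D)
print(t_dust / 3.15e7, "years")   # millennia -- same ballpark as the text's 3200 years

# Few-kg box: roughly 10^10-10^11 times more massive, hence that much longer:
t_box = delocalization_time(2.0, wavelength, D)
print(t_box / 3.15e7, "years")    # hundreds of trillions of years
```

The linear scaling of the timescale with mass is the whole argument: whatever delocalizes a dust grain over millennia takes vastly longer for anything cat-sized.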
What does this tell us?
It tells us that the SC box will necessarily be localized relative to
the universe (including any external observer) to a precision much, much
smaller than the distance D that distinguishes eigenstates |A> and |B> of
the tiny object in superposition. Thus,
when the measuring device inside the box decoheres the superposition relative
to the box, it also does so relative to the rest of the universe. If there is a fact about the tiny object’s
position (say, in location A) relative to the box, then there is also
necessarily a fact about its position relative to the universe – i.e.,
decoherence within the box necessitates decoherence in general. An outside observer may not *know* its position until he opens the box and looks, but the fact exists before that moment. When a new fact emerges about the tiny object’s location due to interaction and correlation with the measuring device inside the box, then that new fact eliminates the quantum superposition relative to the rest of the universe, too.
And, by the way, the conclusion doesn’t change by
arbitrarily reducing the distance D. A
philosopher might reply that if we make D really small, then eventually
localization of the tiny object relative to the box might *not* localize it relative to the universe. Fine. But ultimately, to make the SC experiment work, we have to amplify whatever distance distinguishes eigenstates |A> and |B> to some large macroscopic distance. For instance, the macroscopic mass of the measuring device has eigenstates |M_{A}> and |M_{B}>, which are necessarily distinguishable over a large (i.e., macroscopic) distance – say 1cm, which is 10^{4} times larger than D=1μm. (Even at the extreme end, to sustain a superposition of the cat, if some atom in a blood cell would be located in the cat’s head in state |live> but in its tail in state |dead> at a particular time, then quantum fuzziness would be required on the order of 1m.)
What this tells us is that quantum amplification doesn’t
create a problem where none existed. If
there is no physical possibility, even in principle, of creating a macroscopic
quantum superposition by sending a kilogram-scale object into deep space and
waiting for quantum fuzziness to appear – whether or not you try to “thermally
isolate” it – then you can’t stuff a kilogram-scale cat in a box and depend on
quantum amplification to outsmart nature.
There simply is no way, even in principle, to adequately isolate a macroscopic
object (cat included) to allow the existence of a macroscopic quantum
superposition.