I recently came to the realization that the assumed universal linearity/applicability of quantum mechanics (an assumption called “U”) cannot be empirically verified, as I discuss in this paper, this paper, this post, this post, and this update.

This opinion is so unorthodox and so seemingly absurd (at least within the physics community) that the arXiv preprint server, which is not peer reviewed, rejected both of the above papers, presumably (although they won’t tell me) on the basis that they question the “obvious” fact that macroscopic quantum superpositions of any size can be created and measured in principle.

As more
evidence that my independent
crackpot musings are both correct and at the cutting edge of foundational
physics, *Foundations of Physics* published this
article (D’Ariano (2020)) at the end of October arguing that “both *unitary* and *state-purity*
ontologies are not falsifiable.” The
author then correctly concludes that the so-called “black hole information
paradox” and the Schrödinger’s Cat (“SC”) problem disappear as logical paradoxes, and that the interpretations of
QM that assume U (including the Many-Worlds Interpretation (“MWI”)) cannot be falsified and “should not be taken
too seriously.” I’ll be blunt: I’m
absolutely amazed that this article was published, and I’m also delighted.

Finally, the tides may be turning, and whether or not I
am ever acknowledged or lent credibility, I can take solace in the knowledge
that I am on to something and that I likely understand things about the
universe that few other scientists do.
But I think I can take this further.
I think I can show (and indeed already have shown) that there is no
experiment that can be performed, even in principle, that can falsify *or
verify* the assumption of U.

For today’s post, I’m going to just focus on whether U was ever a valid assumption in the first place. If it’s not – spoiler alert: it’s not – then there was never a Measurement Problem (or black hole information paradox or SC problem, etc.) in the first place.

**I. INTRODUCTION**

Just as logicians are bound by the laws of physics,
physicists are bound by the rules of logic.
Some of the most persistent problems and paradoxes in the foundations of
quantum mechanics, many of which have been unsuccessfully tackled by physicists
and mathematicians for nearly a century, persist exactly *because* they
aren’t really physics or mathematics problems at all, but rather problems of
logic.

Consider the problems of Schrödinger’s Cat (“SC”) and its conscious cousin, Wigner’s Friend (“WF”). If SC is characterized (neglecting normalization constants) as, “A cat in state |dead> + |alive> is simultaneously dead and alive,”[1] then SC is inherently impossible because “dead” and “alive” are mutually exclusive. Therefore, if SC is physically possible, at least in principle, then it is patently false that a cat in state |dead> + |alive> is simultaneously dead and alive. Indeed, the characterization of any object in a quantum superposition of eigenstates of some observable as being “simultaneously in those eigenstates” is equally problematic.

While SC is not inherently paradoxical, it is more than a little odd. The generally accepted conclusion that a SC state is possible in principle follows directly from the assumption that the mathematics of quantum mechanics (i.e., its linear or unitary evolution) applies universally at all scales, including both electron and cat. This assumption leads to the century-old measurement problem (“MP”) of quantum mechanics (“QM”). The measurement problem is actually a paradox, meaning that it comprises logically incompatible assumptions. Contradictions are necessarily false and therefore cannot exist in nature. If the conjunction of statements A and B, for example, leads to a contradiction, then at least one of statements A and B is false. Often, MP has been characterized as the conjunction of three or more assumptions (e.g., Maudlin (1995) and Brukner (2017)); the logical incompatibility of these assumptions has been shown many times (e.g., Frauchiger and Renner (2018) and Brukner (2018)). A simpler characterization of MP, reducing it to two assumptions, has been provided by Gao (2019):

P1) The mental state of an observer supervenes on her
wave function;

P2) The wave function always evolves in accord with a linear dynamical equation (such as the Schrödinger equation).

Assumption P2 is often characterized as the “universality” of QM, in that the rules of QM are assumed to apply universally at all scales. Kastner (2020) correctly notes that “unitary-only evolution” is a better description than “universality” because, she argues, a complete theory of QM (if there is one) will necessarily apply universally but its wave equations may not evolve in a purely unitary or linear fashion. Whether meaning “universal” or “unitary-only,” in this paper I’ll generally refer to assumption P2 as “U.”

Contradictions do not exist in nature. Because at least one of P1 and P2 is indeed false, a simultaneous belief in both is due to faulty reasoning. The measurement problem is not a problem with nature; it is a problem with us. It is either the case that P1 is the result of improper assumptions and/or reasoning, that P2 is the result of improper assumptions and/or reasoning, or both. My goal in this paper is to attack P2 on a variety of grounds.

**II. EVALUATING UNIVERSALITY**

**A. What’s the Problem?**

The measurement problem is inextricably related to SC, WF, etc., and might be colloquially phrased as, “If a cat can exist as a superposition over macroscopically distinct eigenstates |dead> and |alive>, then why do we always see either a dead or a live cat?” Or: “If quantum mechanics applies universally at all scales (or wave functions always evolve linearly), then why do we never observe quantum superpositions?” Or even better: “Why don’t we see quantum superpositions?”

As Penrose (1999) points out, MP is actually a paradox
and any solution to it requires showing that at least one of its assumptions is
incorrect. All formulations of MP depend
on the assumption of U. Of course, MP
would be solved if it were shown that QM wave states did not always evolve
linearly, such as above certain scales.
After 100 years, why has this not yet been shown? Perhaps this is the wrong question. The first questions we should ask are: were
we justified in assuming U in the first place?
Has it been empirically demonstrated?
*Can* it be empirically demonstrated?

U is itself an inference.
The question is not whether logical inferences can be made in science
–they can and must. One can never prove
that a scientific hypothesis or law always holds; rather, it can only be adequately
verified to permit such an inference, subject always to the possibility of
experimental falsification. However, U
is a very special kind of inference, one that I will argue in the following
sections is invalid. First, U has been
verified only in microscopic regimes. No
experiment has shown it to apply to macroscopic regimes; there is no direct
experimental evidence for the applicability of the linear dynamics of QM to
macroscopic systems. Second, the lack of
such evidence is not for lack of trying.
Rather, there seems to be a kind of asymptotic limit to the size of a
system to which we are able to gather such evidence. Third, U gives rise to the measurement
problem – that is, it conflicts with what seems to be good empirical evidence
that linear QM dynamics do *not* apply in most[2] cases. These together, as I will argue, render U an
invalid inference. Further, even if one disagrees with this
argument, the burden of proof rests not with the skeptics but with those who endorse
an inference of U.

Regarding the first point – that U has been verified only
in the microscopic realm – there are certainly those who choose to tamper with
the colloquial meanings of the words “microscopic” and “macroscopic,” or
attempt to redefine “mesoscopic” as “nearly macroscopic,” to bolster their
case. Some may include as “macroscopic
quantum superpositions” atoms in superposition over position eigenstates
separated by a meter, or “macroscopic” (though barely visible) objects in
superposition over microscopically-separated position eigenstates (e.g., O’Connell
*et al.* (2010)). I regard this as
sophistry, particularly when such examples are used as empirical evidence that
the creation and measurement of truly macroscopic superpositions, such as SC,
are mere technological hurdles. Nevertheless,
I will assume that any physicist acting in good faith will readily admit that there
is currently no direct empirical evidence that linear QM evolution applies to a
cat, a human (e.g., WF), or even a virus.
That lack of experimental evidence renders an inference that QM applies
universally especially bold. And bold
inferences are not necessarily problematic – until they come into conflict with
other observations.

Consider these statements:

A1) Newton’s law of gravity applies universally.

A2) The observed perihelion precession of Mercury is in conflict with Newton’s law of gravity.

Statement A1 was a valid inference for a very long time. The conjunction of these statements, however, is a contradiction, implying that at least one of them is false. A contradiction sheds new doubt on each statement and increases the evidence necessary to verify each unless and until one of the assumptions is shown false. Despite enormous quantities of data supporting an inference of A1, conflicting evidence ultimately led Einstein to reject A1 and formulate general relativity.

The measurement problem is such a paradox; it is the
conjunction of two or more statements that lead to a contradiction. If there were no paradox, we might reasonably
have inferred U based only on the limited experimental data showing
interference effects from electrons, molecules, etc. However, U is in direct logical conflict with
other statement(s) the veracity of which we seem to have a great deal of
evidence. Therefore, to justify the
inference of U, we need more than a reason: we need a *good* reason. However, given that the paradox arose
essentially simultaneously with quantum theory, leading Erwin Schrödinger to
propose his hypothetical cat as an intuitive argument against U, the burden of
proof has always lain with those who assert U.
Have they met their burden? Do we
have good evidence to support the inference of U?

**B. Fighting Fire with Fire**

Many (perhaps most) physicists have never questioned the
assumption of U, and once asked whether we have good evidence to support the
inference of U may regard the question itself as nonsense. “Of course the wave function always evolves
linearly – just look at the equations!”
Indeed, standard QM provides no mathematical formalism to explain,
predict, or account for breaks in linearity.
Some collapse theories (such as the “GRW” spontaneous collapse of
Ghirardi, Rimini, and Weber (1986) and gravitational collapse of Penrose (1996))
do posit such breaches, but no experiment has yet confirmed any of them or
distinguished them from other interpretations of QM. It is thus tempting, when evaluating U, to
glance at the equations of QM and note that they do, indeed, evolve
linearly. But the question isn’t whether
the equations evolve linearly, but whether the physical world *always*
obeys those equations, and the answer to that question does not appear within
the QM formalism itself.[3]

Modern physicists, who rely heavily on mathematics to
proceed, typically demand rigorous mathematical treatment in addressing and
solving physics problems. Ordinarily,
such demands are appropriate. However, MP
arises directly as a *result* of the mathematics of QM, in which the Schrödinger
equation evolves linearly and universally.
Because MP is itself a product of the mathematics of QM, its solution is
inherently inaccessible via the symbolic representations and manipulations that
produced it. If the math itself is
internally consistent – and I have no reason to believe otherwise – then you
cannot use the math of QM as evidence that the math of QM is always correct.[4] You can, however, use empirical evidence to
support such an inference. Do we have
such evidence?

**C. Empirical Data**

The rules of QM allow us to make probabilistic predictions on the outcomes of measurements that differ from the expectations of classical probability. In a very real sense, this is both how QM was discovered[5] as well as what makes an event quantum mechanical. Wave functions of objects contain a superposition of terms having complex probability amplitudes, and interference between those terms in an experiment can alter outcome probabilities. A demonstration of quantum effects, then, depends on an interference experiment – i.e., an experiment that demonstrates interference effects in the form of altered probability distributions. In a very real sense, quantum mechanics is fundamentally about making probabilistic predictions that depend on whether interference effects from terms in a coherent superposition are relevant.[6]
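The difference between quantum and classical outcome statistics can be made concrete with a toy two-path calculation. The sketch below is my own minimal illustration with arbitrary illustrative amplitudes (not drawn from any specific experiment, and neglecting normalization of the final distribution, as the paper does for |dead> + |alive>): the quantum rule adds complex amplitudes before squaring, while the classical rule adds squared magnitudes directly, so only the quantum rule produces an interference term.

```python
import cmath

def detection_probability(phi, coherent=True):
    """Relative detection probability for two paths with relative phase phi.

    Amplitudes are arbitrary illustrative values (1/sqrt(2) per path).
    """
    a = 1 / 2**0.5                      # amplitude via path A
    b = cmath.exp(1j * phi) / 2**0.5    # amplitude via path B
    if coherent:
        # Quantum rule: add amplitudes, then square (interference term appears)
        return abs(a + b) ** 2
    # Classical rule: add probabilities (no interference term)
    return abs(a) ** 2 + abs(b) ** 2

print(detection_probability(0.0))                  # in phase: 2.0 (constructive)
print(detection_probability(0.0, coherent=False))  # classical: 1.0
print(detection_probability(cmath.pi))             # out of phase: ~0.0 (destructive)
```

The coherent distribution over phases is what an interference experiment measures; the incoherent one is flat, which is exactly the "altered outcome probabilities" the text describes.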

It is often claimed that no violation of the linearity of QM has ever been observed, or that no experiment has ever shown the non-universality of QM. In fact, this is the only empirical scientific evidence that physicists can cite to support the inference of U. How good is this evidence? Consider the following claim, perhaps endorsed by the vast majority of physicists:

*“The mathematics of QM applies to every object
subjected to a double-slit interference experiment, no matter how massive,
because no experiment has ever demonstrated a violation.”*

Indeed, double-slit interference experiments (“DSIE”)
have been successfully performed on larger and larger (though still
microscopic) objects, such as a C_{60} molecule. (Arndt
*et al.* (1999).) However,
to evaluate the extent to which this evidence supports an inference of U, it is
necessary to consider how DSIEs are set up and performed.

Nature – thanks to the Heisenberg Uncertainty Principle –
creates superpositions ubiquitously. Quantum uncertainty, loosely defined for a
massive object as ΔxΔv ≥ ℏ/2m, guarantees dispersion of
quantum wave packets, thus increasing the size of location superpositions over
time. However, interactions with fields, photons, and other particles
ever-present in the universe constantly “measure” the locations of objects and
thus decohere[7]
these superpositions. (See, e.g.,
Tegmark (1993) and Joos *et al.* (2013).) This decoherence, which
I’ll discuss in greater detail in the next section, explains both why we don't
observe superpositions in our normal macroscopic world and also why visible
interference patterns from quantum superpositions of non-photon objects[8] are so difficult to
create.

For instance, let's consider the non-trivial process of producing an electron in (non-normalized) superposition state |A> + |B>, where |A> is the wave state corresponding to the electron traversing slit A while |B> is the wave state corresponding to the electron traversing adjacent slit B in a double-slit plate. (Electron interference itself was first demonstrated, via crystal diffraction, by Davisson and Germer in 1927.) Electrons, one at a time, are passed through (and localized by) an initial collimating slit; quantum uncertainty results in dispersion of each electron’s wave state at a rate inversely proportional to the width of the collimating slit. If the process is designed so that adequate time elapses before the electron’s wave state reaches the double-slit plate, and without an intervening decoherence event with another object, the electron’s wave will be approximately spatially coherent over a width wider than that spanned by both slits. If the electron then traverses the double-slit plate, its wave function becomes the superposition |A> + |B>. Because such a superposition does not correspond to its traversing slit A or traversing slit B, it carries no “which-path” information about which slit the electron traversed. If each electron is then detected at a sensor located sufficiently downstream from the double-slit plate, again without an intervening decoherence event with another object, the spatial probability distribution of that electron’s detection will be calculable consistent with quantum mechanical interference effects. This lack of which-path information (thanks to successfully preventing any decohering correlations with other objects in the universe) implies that the electron’s superposition coherence was maintained, and thus the rules of quantum mechanics (and not classical probability) would apply to probability distribution calculations.[9]
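The resulting detection distribution can be sketched with the standard far-field (small-angle) two-slit formula. All numbers below are hypothetical assumptions of mine – a slow-electron de Broglie wavelength and a slit geometry chosen purely for illustration, not the parameters of any actual experiment:

```python
import math

# Illustrative (assumed) parameters, not from any specific experiment:
WAVELENGTH = 50e-12   # de Broglie wavelength of a slow electron, m
SLIT_SEP = 1e-9       # slit separation d, m
SCREEN_DIST = 1.0     # distance to detection screen, m

def intensity(x, coherent=True):
    """Relative detection probability at screen position x (m)."""
    # Half the phase difference between the two paths, small-angle approx.
    phase = math.pi * SLIT_SEP * x / (WAVELENGTH * SCREEN_DIST)
    if coherent:
        # Superposition |A> + |B> maintained: interference fringes
        return 4 * math.cos(phase) ** 2
    # Which-path information exists (decohered): fringes wash out
    return 2.0

center = intensity(0.0)                                        # bright fringe: 4.0
dark = intensity(WAVELENGTH * SCREEN_DIST / (2 * SLIT_SEP))    # dark fringe: ~0.0
```

The `coherent=False` branch is the classical (decohered) case: the same average count rate, but no fringes – which is precisely what an intervening "which-path" correlation would produce.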

Because the dispersion of an object’s wave function is
directly proportional to Planck’s constant and inversely proportional to its
mass, the ability to demonstrate the wave-like behavior of electrons is in
large part thanks to the electron’s extremely small mass.[10] The same method of
producing superpositions – waiting for quantum dispersion to work its magic – has
been used to produce double-slit interference effects of objects as large as a
few hundred and perhaps a couple thousand atoms. (See, e.g., Eibenberger *et al.* (2013)
and Fein *et al.* (2019).) However,
the more massive the object, the slower the spread of its wave state and the
more time is available for an event to decohere any possible superposition. Are there other methods, besides quantum
dispersion, to prepare an object for a DSIE?
I don’t know. However, every
successful DSIE to date has indeed depended on quantum dispersion of the
object’s wave packet, and it is this evidence, not the hypothetical possibility
of other experiments, that is available to support (or not) an inference of U.
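The inverse-mass scaling can be put in rough numbers using the standard free-particle wave-packet result: a packet initially localized to within σ₀ spreads, at long times, at a rate of roughly ℏ/(2mσ₀), so the time to become spatially coherent over a target distance d grows linearly with mass. The masses and lengths below are my own illustrative assumptions, chosen only to show the scaling:

```python
# Back-of-envelope: time for a free wave packet, initially localized to
# sigma0, to disperse over a target distance d. For d >> sigma0,
# t ~ 2 * m * sigma0 * d / hbar. Illustrative numbers only.
HBAR = 1.055e-34  # reduced Planck constant, J*s

def spread_time(mass_kg, sigma0_m, target_m):
    return 2 * mass_kg * sigma0_m * target_m / HBAR

sigma0, target = 100e-9, 1e-6  # localize to 100 nm, spread over 1 micron (assumed)

t_electron = spread_time(9.11e-31, sigma0, target)  # ~1.7e-9 s
t_c60 = spread_time(1.2e-24, sigma0, target)        # C60, ~720 amu: ~2e-3 s
t_dust = spread_time(6.5e-11, sigma0, target)       # 50um dust grain: ~1e11 s
```

Under these assumed numbers the electron delocalizes in nanoseconds, a C₆₀ molecule in milliseconds, and a dust grain in thousands of years – illustrating why every successful DSIE to date has lived at the small-mass end of this scale.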

So, within the data available to support the inference of
U, performing a DSIE by passing an object through slits A and B separated by
some distance *d* first requires making the object spatially coherent over
a distance exceeding *d*. To get
the object in the superposition |A> + |B> to subsequently show interference effects, you
have to provide the object in a state that is adequately quantum mechanically
“fuzzy” over a distance exceeding *d* – that is, a state that would
already demonstrate interference effects.
In other words, to do a DSIE on an object to show that it does not
violate the linearity of QM, you have to first provide an object prepared so
that an interference experiment would not violate the linearity of QM.

Said another way, the observation of an interference
effect in a double-slit interference experiment presupposes the spatial
coherence of an object over some macroscopically resolvable distance. But the ability to produce that object (or an
object of any size, at least in principle) in spatial coherence is the very
assumption of U. Measuring interference
effects by an object that has already been prepared to show interference
effects is not a confirmation of, or evidence for, the *universal*
linearity of QM. What it *does*
show is that QM is linear at least to that level. For example, if QM is indeed nonlinear
through a physical collapse mechanism like that proposed by GRW, then such a
collapse might be confirmed by first preparing a system in an appropriate
superposition (which should, if properly measured, demonstrate interference
effects), and then failing to observe interference effects. The ability to demonstrate, for example, a C_{60}
molecule exhibiting interference effects puts a lower limit on the scale (mass,
time period, etc.) to which a physical collapse mechanism would act.

My goal in this section is not to call into question the usefulness of interference experiments in demonstrating the applicability of linear QM to objects in those experiments. My goal is to point out the logical circularity of asserting that “QM applies universally because no interference experiment has shown a violation in linearity,” given that interference experiments are only performed on objects that have already been successfully prepared in a state that can demonstrate interference. The experimental difficulty is not in showing interference effects from an object prepared to show interference effects; the difficulty is in preparing the object to show interference effects. So what do the empirical data tell us about the difficulty in preparing objects to show interference effects? And do those data support an inference of U?

**D. Empirical Data Revisited**

It may well be true that 100% of interference experiments
have failed to show nonlinearity, but if all experiments performed so far only
probe the microscopic realm and – more importantly – if these experiments are
quite literally chosen *because* they only probe the microscopic realm, then
the fact that no interference experiment has ever shown a violation of linearity
is simply not evidence to support the inference that QM is *universally* linear.

If the reason that interference experiments are chosen to
probe only the microscopic realm is merely that of convenience, or insufficient
grant funding, or technological limitation, then my argument would be limited only
to the conclusion that current empirical data do not support an inference to U,
in which case the proponent of U has failed to meet any reasonable burden of
proof. However, if it turns out that interference
experiments are chosen to probe only the microscopic realm because there is
something about the physical world, directly related to the size of systems,
that makes it impossible at least for all practical purposes (“FAPP”) to probe larger
systems, then this would serve as empirical evidence *against* U, in which
case the proponent’s unmet burden of proving the inference of U is far, far
greater.

Here are a few empirical facts: a) so far, the largest
object to show interference effects in a DSIE is a molecule consisting of
around two thousand atoms (Fein *et al.* (2019)); b) these experiments
have depended on quantum dispersion of an object’s wave packet to produce
adequate spatial coherence; and c) the rate of quantum dispersion quickly
approaches zero as the object increases in size. I would argue that these facts, particularly
our inability to prepare macroscopic objects to show interference effects,
constitute very good evidence against U.[11]

Let me elaborate. If
an experimenter can rely on quantum dispersion to put a molecule in adequate
spatial coherence to measure interference effects, why can’t he do that for a
dust particle or a cat? Consider the
difficulty in performing a DSIE on a dust particle. Let's assume it is a 50μm-diameter
sphere with a density of 1000 kg/m^{3} and it has just been localized
by an impact with a green photon (λ ≈ 500nm). How long will it take for its
location “fuzziness” to exceed its own diameter (which would be the absolute
minimum spatial coherence allowing for passage through a double-slit plate)? Letting Δv ≈ ℏ/2mΔx ≈ 10^{-18} m/s, it would take around 5x10^{13}
seconds (about 1.5 million years) for the location uncertainty to reach a
spread of 50μm.[12]
In other words, if we sent a dust
particle into deep space, its location relative to other objects in the
universe is so well defined due to its correlations to those objects that it
would take over a million years for the universe to “forget” where the dust
particle is to a resolution allowing for the execution of a DSIE.[13] In this case, information in the universe
would still exist to localize the dust particle to a resolution of around 50μm,
but not less. Unfortunately, this rough calculation depends on a huge
assumption: that new correlation information isn’t created in that very long
window of time. In reality, the universe is full of particles and photons that
constantly bathe (and thus localize) objects.
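The rough figure above can be checked directly. This is my own back-of-envelope sketch using the stated assumptions (a 50μm sphere of density 1000 kg/m³, localized to roughly one green-photon wavelength); it lands on the same order of magnitude as the million-year figure in the text:

```python
import math

HBAR = 1.055e-34  # reduced Planck constant, J*s

# Dust particle as described in the text:
radius = 25e-6                                   # 50um diameter sphere, m
density = 1000.0                                 # kg/m^3
mass = (4 / 3) * math.pi * radius**3 * density   # ~6.5e-11 kg

dx = 500e-9                   # localized to ~one green-photon wavelength, m
dv = HBAR / (2 * mass * dx)   # minimum quantum velocity spread, m/s
t = 50e-6 / dv                # time for fuzziness to reach the 50um diameter, s

print(f"dv ~ {dv:.1e} m/s")   # ~1.6e-18 m/s
print(f"t  ~ {t:.1e} s")      # ~3e13 s -- roughly a million years
```

The exact prefactor depends on how the initial localization is modeled, which is why the text's rounded Δv ≈ 10⁻¹⁸ m/s yields 5×10¹³ s; either way, the timescale is geological.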

Thus there is a trade-off between the delocalization caused by natural quantum dispersion and the localizing “measurements” caused by interactions with the plethora of stuff whizzing through space. This trade-off is heavily dependent on the size of the object; a tiny object (like an electron) disperses quickly due to its low mass and experiences a low interaction rate with other objects, allowing an electron to more easily demonstrate interference effects. On the other hand, a larger object disperses more slowly while suffering a much higher interaction rate with other objects. These observations can be quantified in terms of coherence lengths: for a particular decoherence source acting on a particular object, what is the largest fuzziness we might expect in the object's center of mass? And, if we're hoping to do a DSIE, does this fuzziness exceed the object's diameter?

Tegmark (1993) calculates coherence lengths (roughly “the largest distance from the diagonal where the spatial density matrix has non-negligible components”) for a 10μm dust particle and a bowling ball caused by various decoherence sources, as shown in Table I. Even in deep space, cosmic microwave background (“CMB”) radiation alone will localize the dust particle to a dimension many orders of magnitude smaller than its diameter, thus ruling out any possibility for that object to become adequately delocalized (and thus adequately spatially coherent) relative to the universe to perform an interference experiment. The prospects are far worse for a bowling ball-sized cat.

Table I. Some values of coherence lengths for a 10μm dust particle and a bowling ball caused by various decoherence sources, given by Tegmark (1993).

In other words, as at least a practical matter, the
physical world is such that there is a size limit to the extent that quantum dispersion
can be relied upon to perform a DSIE.
Having said that, no one seriously argues (as far as I know) that SC or WF
could be produced, even in principle, through natural quantum dispersion. Rather, the typical argument is that SC/WF
could be produced through amplification of a quantum event via a von Neumann measurement
chain. Crucially, however, the purported
ability to amplify a quantum superposition *assumes* universal linearity
of QM, which means that it cannot be logically relied upon to contradict the
argument that QM is not universally linear.
Further, there is no empirical evidence that quantum amplification ever
has produced a measurable macroscopic quantum superposition.[14] In other words, without assuming that quantum
amplification can accomplish what quantum dispersion cannot – i.e., ignoring
the logical circularity of assuming the very conclusion that I am arguing
against – one must conclude that existing empirical evidence does not support the
inference that a DSIE can in principle be performed on macroscopic objects like
a cat.

As a purely empirical matter, all DSIEs that have been
performed depend on quantum dispersion, which depends inversely on the size of
an object, to produce the object in spatial coherence that exceeds some macroscopically
resolvable distance. Consequently, all
such experiments have been chosen specifically to probe the microscopic realm,
where quantum dispersion “works.” This observation
is sufficient to invalidate the inference that QM wave states evolve linearly *beyond*
the microscopic realm, because all such experiments are chosen specifically on
the basis of probing the microscopic realm.

Moreover, this section shows that such experimental choices are not made
merely for convenience – rather, the
physical world is such that interference experiments inherently become *increasingly
difficult at an increasing rate* as the size of an object increases.
which we can perform a DSIE. Our
difficulty in preparing larger objects to show interference effects is (or
should be) telling us something fundamental about whether QM wave states always
evolve linearly.

Importantly, I am not asserting that the above analysis
shows that performing a DSIE on a cat is impossible in principle. Rather, it shows that a fundamental feature
of our physical world is that our efforts to demonstrate interference effects
for larger systems have quickly diminishing returns; the harder we try to increase
the size of an object to which QM is verifiably linear, the more slowly that
size increases. There is at least some
physical size (perhaps within an order of magnitude of the dust particle in Table
I) above which no conceivable experiment, no matter how technologically
advanced, could demonstrate interference effects. The fact that such a size exists to physically
distinguish the “macroscopic” from the “microscopic,” which, as a practical
matter, *forces* us to choose interference experiments that probe only the
microscopic regime, is strong empirical evidence against an inference of
U. In other words, the existence of a
FAPP limitation, even if there is no in-principle limitation, is itself evidence
against an inference of U.

**E. Burden of Proof**

It is worth emphasizing that the measurement problem does
not arise from the evidence that QM is linear in microscopic systems; it arises
only from an inference that QM remains linear in macroscopic systems. I have shown in the above sections that the
inference of U:

· Is not supported by any direct empirical evidence;

· Is such that no practical or currently technologically conceivable experiment can provide direct empirical evidence; and

· Is logically incompatible with one or more assertions (such as statement P1) for which we ostensibly have a great deal of evidence, thus giving rise to the measurement problem.

I cannot offer or conceive of any rational scientific basis on which to accept such an inference. For these reasons alone, from a scientific standpoint, MP should be dismissed – not because it has been solved, but because it should never have arisen in the first place. MP depends on the truth of at least two statements, one of which is U. It should be enough to show that the best empirical evidence regarding that statement is inadequate to support, and in fact opposes, the inference of U. Not only have proponents of U failed to meet their burden of proving that an inference of U is valid, that burden, in light of the arguments in this section, is exceptional.

**REFERENCES**

Arndt, M.,
Nairz, O., Vos-Andreae, J., Keller, C., Van der Zouw, G. and Zeilinger, A.,
1999. Wave–particle duality of C₆₀ molecules. *Nature*, *401*(6754), pp.680-682.

Brukner, Č.,
2017. On the quantum measurement problem. In *Quantum [Un] Speakables II* (pp. 95-117). Springer, Cham.

Brukner, Č.,
2018. A no-go theorem for observer-independent facts. *Entropy*, *20*(5), p.350.

D’Ariano,
G.M., 2020. No purification ontology, no quantum paradoxes. *Foundations of Physics*, pp.1-13.

Davisson, C.
and Germer, L.H., 1927. The scattering of electrons by a single crystal of
nickel. *Nature*, *119*(2998), pp.558-560.

Eibenberger,
S., Gerlich, S., Arndt, M., Mayor, M. and Tüxen, J., 2013. Matter–wave
interference of particles selected from a molecular library with masses
exceeding 10000 amu. *Physical Chemistry Chemical Physics*, *15*(35), pp.14696-14700.

Fein, Y.Y.,
Geyer, P., Zwick, P., Kiałka, F., Pedalino, S., Mayor, M., Gerlich, S. and
Arndt, M., 2019. Quantum superposition of molecules beyond 25 kDa. *Nature Physics*, *15*(12), pp.1242-1245.

Frauchiger,
D. and Renner, R., 2018. Quantum theory cannot consistently describe the use of
itself. *Nature communications*, *9*(1), pp.1-10.

Gao, S.,
2019. The measurement problem revisited. *Synthese*, *196*(1), pp.299-311.

Ghirardi,
G.C., Rimini, A. and Weber, T., 1986. Unified dynamics for microscopic and
macroscopic systems. *Physical review D*, *34*(2), p.470.

Hossenfelder,
S., 2018. *Lost in math: How beauty leads physics astray*. Basic Books.

Joos, E., Zeh, H.D., Kiefer, C., Giulini, D.J., Kupsch, J.
and Stamatescu, I.O.,
2013. *Decoherence and the appearance of a classical world in quantum theory*. Springer Science & Business Media.

Kastner,
R.E., 2020. Unitary-Only Quantum Theory Cannot Consistently Describe the Use of
Itself: On the Frauchiger–Renner Paradox. *Foundations of Physics*, pp.1-16.

Knight, A., 2020. No paradox in wave-particle duality. *Foundations of Physics*, *50*(11), pp.1723-1727.

Maudlin, T., 1995. Three measurement problems. *Topoi*, *14*(1), pp.7-15.

O’Connell,
A.D., Hofheinz, M., Ansmann, M., Bialczak, R.C., Lenander, M., Lucero, E.,
Neeley, M., Sank, D., Wang, H., Weides, M. and Wenner, J., 2010. Quantum ground
state and single-phonon control of a mechanical resonator. *Nature*, *464*(7289), pp.697-703.

Penrose, R.,
1996. On gravity's role in quantum state reduction. *General relativity and gravitation*, *28*(5), pp.581-600.

Penrose, R.,
1999. *The Emperor's New Mind: Concerning Computers, Minds, and the Laws of
Physics*. Oxford University Press.

Tegmark, M.,
1993. Apparent wave function collapse caused by scattering. *Foundations of Physics Letters*, *6*(6), pp.571-590.

[1]
Such as in this
article

[2]
Although, to show U false, just one counterexample is necessary.

[3]
Unless you count the “projection postulate,” which some would argue is *prima facie* evidence that the QM equations do not always evolve linearly.

[4]
Some go further and assert that the beauty and/or simplicity of linear
dynamical equations are evidence for their universal applicability. I, like Hossenfelder (2018), disagree. Aesthetic arguments are not empirical
evidence for a scientific hypothesis, despite assertions by some string
theorists to the contrary.

[5]
The characterization of light as discrete particle-like objects, thanks to
Planck’s use of quantized energy to avoid the Ultraviolet Catastrophe and
Einstein’s explanation of the photoelectric effect, showed that classical
probability is inapplicable to predicting the detection outcome of individual
particles in a double-slit interference experiment.

[6]
Like all probability rules, a statistically significant ensemble is necessary
to obtain useful information. A measurement on any object will always yield a
result that is consistent with that object's not having been in a
superposition; only by measuring many identically prepared objects may the
presence of a superposition appear in the form of an interference pattern.
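The point of this footnote can be illustrated with a small Monte Carlo sketch (my own illustration, not part of the original argument): each simulated detection, taken alone, is just a single position reading, but a histogram over many identically prepared "particles" reveals the fringes. The cos² intensity pattern and the parameter `k` are assumptions chosen purely for illustration.

```python
# Hypothetical sketch: a single detection is consistent with no
# superposition, but an ensemble of many identically prepared
# particles reveals interference fringes.
import math
import random

random.seed(0)  # reproducible illustration

def sample_positions(n, k=10.0):
    """Draw n detector positions x in [-1, 1] from a two-slit-like
    intensity pattern P(x) proportional to cos^2(k*x), via rejection
    sampling."""
    samples = []
    while len(samples) < n:
        x = random.uniform(-1.0, 1.0)
        if random.random() < math.cos(k * x) ** 2:
            samples.append(x)
    return samples

# One detection tells us essentially nothing about the pattern...
single_hit = sample_positions(1)

# ...but a histogram of 20,000 detections shows fringes: bins near
# the cos^2 maxima collect far more hits than bins near the minima.
counts = [0] * 20
for x in sample_positions(20000):
    counts[min(19, int((x + 1.0) / 2.0 * 20))] += 1
```

The bin straddling a fringe maximum (e.g. near x = 0) ends up with several times the counts of the adjacent bin containing a minimum, which is the statistically significant ensemble the footnote describes.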

[7]
The theory underlying decoherence is not incompatible with the assumption of U;
in fact, many (if not most) of the proponents of decoherence specifically
endorse U. Rather, decoherence is often
used to explain why it is so difficult to place macroscopic objects in (coherent)
superpositions.

[8]
Interference effects of photons are actually quite easy to observe, in part
because photons do not self-interact and thus are not decohered by other
radiation. Prior to the invention of lasers, the dense source of coherent
photons that confirmed light's wave-like behavior came directly from the
sun.

[9]
Indeed, the existence of which-path information – that is, the existence of a
correlating fact about the passage of the electron through one slit or the
other – is incompatible with the existence of a superposition at the double-slit
plane. (See, e.g., Knight (2020).)

[10]
We might alternatively say that the de Broglie wavelength of an electron can be
made sufficiently large in a laboratory so as to reveal its wave nature.
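As a back-of-the-envelope companion to this footnote, here is a minimal sketch of the de Broglie relation λ = h/(mv). The chosen electron speed (10⁶ m/s) is an illustrative assumption, but it shows the contrast the footnote relies on: a slow electron's wavelength is on the order of atomic spacings, while a macroscopic object's is absurdly small.

```python
# Sketch of the de Broglie relation: lambda = h / (m * v).
# Constants from CODATA; the speeds below are illustrative choices.
H_PLANCK = 6.62607015e-34      # Planck constant, J*s
M_ELECTRON = 9.1093837015e-31  # electron rest mass, kg

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Non-relativistic de Broglie wavelength in meters."""
    return H_PLANCK / (mass_kg * speed_m_s)

# A slow electron (~1e6 m/s): wavelength ~7e-10 m, comparable to
# atomic spacings, so its wave nature is revealed by diffraction.
lam_electron = de_broglie_wavelength(M_ELECTRON, 1.0e6)

# A 1 kg object at 1 m/s: wavelength ~7e-34 m, hopelessly small --
# one practical face of the rough "Heisenberg cut" discussed below.
lam_macro = de_broglie_wavelength(1.0, 1.0)
```

The roughly 24 orders of magnitude between these two wavelengths is one way of restating why electron interference is routine in the laboratory while macroscopic interference has never been observed.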

[11]
Penrose (1999) suggests that the fact that we always observe measurement
results is excellent empirical evidence that the QM wave function cannot always
evolve linearly.

[12]
Tegmark (1993) notes that macroscopic systems tend to be in “nearly minimum
uncertainty states.”

[13]
This estimate completely neglects the additional time necessary to subsequently
measure an interference pattern.

[14]
A related objection is whether it is possible to adequately isolate or shield a
macroscopic object from decoherence sources long enough for dispersion to work
its magic. The answer is no, for reasons (including logical circularity)
that exceed the scope of this paper. But, like the hypothetically proposed “fix” of
amplification, there is no actual evidence that shielding or isolation has ever
produced a measurable macroscopic superposition.

[15]
I don’t mean this in a rigorous mathematical sense. Rather, there is some rough object size below
which we are able, as a practical matter, to show interference effects of the
object and above which we simply cannot.
We might loosely call this size the “Heisenberg cut.”