
Wednesday, August 19, 2020

Explaining Quantum Mechanics

I’ve been on a quest to understand quantum mechanics (“QM”) and think I’ve actually made significant progress, even though Feynman claimed that no one understands it.  What I’m going to write in this post is not ready for an academic paper, so I’m not going to try to prove anything.  Instead, I’m just going to explain my understanding of QM and how it is consistent with what I’ve learned and the results of QM experiments. 

I’m not sure yet whether this explanation is: a) correct; b) original; or c) testable.  Of course, I think it is correct, as so far it’s the only explanation that seems consistent with everything I’ve learned about physics and QM.  Not only do I believe it’s correct, but I also hope it’s correct since it has helped me to understand a lot more about the physical world; to discover that it is false would mean that I am fundamentally mistaken in a way that would take me back to the drawing board.  Whether or not it is original is less important to me; as I noted before in this post, even if I had naturally stumbled upon an explanation of QM that had been discovered and written about by someone else, such as Rovelli’s Relational Interpretation, I’d be fine with that because my primary quest is to understand the physical world.  As it turns out, the following explanation is not equivalent to the Relational Interpretation or, as far as I’m aware, any of the other existing interpretations of QM.  So, if I’ve come upon something original, I’d like to get it into the information ether for purposes of priority, sharing ideas, collaboration, etc.  Finally, if my explanation actually is both correct and original, I would really like it to be testable and not just another interpretation of QM.  Currently, all QM interpretations are empirically indistinguishable.  While hypothetical experiments have been proposed to distinguish one or more interpretations from others (such as Deutsch’s classic paper, which aims to show how a collapse interpretation might be distinguished from a non-collapse interpretation), not only are such experiments impossible for all practical purposes in the near future, but they may actually be impossible in principle.  Of course, even if it turns out to be just another indistinguishable interpretation, it is already valuable (at least to me) from a pedagogical point of view, as it clarifies and simplifies QM in a way that I haven’t seen elsewhere.  Having said all that, here is my current understanding/explanation of QM.

Information, Facts, and Relativity of Superposition

First, let’s start with information.  If I throw a rock with my hand, then that fact – that is, the fact that the rock and my hand interacted in a particular way – gets embedded in a correlation between the rock and my hand so that they will both, in some sense, evidence the interaction/event.  The information about that fact gets carried in that correlation so that future interactions/events are consistent with that fact, which is what gives rise to predictability.  The trajectory of that rock can then be predicted and calculated relative to my hand because the information stored in the correlation between the rock and my hand guarantees that future events are consistent with that past event (and all past events).  The set of possible futures that are consistent with past events is so limited that the future trajectory of the rock can be calculated to (almost) limitless precision.  In fact, the precision to which a person could predict a rock’s trajectory is so good that it was thought until the discovery of quantum mechanics (and specifically Planck’s constant) that the physical world is perfectly deterministic.  Many (perhaps most) physicists are determinists, believing that the world evolves in a predetermined manner.  The notion of determinism is heavily related to Newtonian physics: If I know an object’s initial conditions, and I know the forces acting on it, then I can predict its future trajectory. 

This is certainly true to some degree.  However, due to chaos, the further in the future we go, the more sensitive those predictions are to the precision of the initial conditions.  So if we don’t know an object’s initial conditions to infinite precision, then it’s just a matter of time before chaos amplifies the initial uncertainty to the point of complete unpredictability.  This fascinating paper shows that that’s true even if we are looking at three massive black holes with initial conditions specified to within Planck’s constant.  Of course, QM asserts that we can’t specify initial conditions better than that, so this seems to me pretty good evidence that the universe is fundamentally indeterministic. 
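
For readers who want to see this sensitivity rather than take it on faith, here is a minimal Python sketch.  It is purely illustrative: it uses the logistic map as a generic stand-in for a chaotic system, not the three-black-hole dynamics of the paper.

```python
# Illustrative only: the logistic map (a generic chaotic system) standing in
# for the paper's three-black-hole dynamics.  Two trajectories that start
# 1e-15 apart (roughly the limit of double precision; the paper's Planck-scale
# differences would require arbitrary-precision arithmetic) diverge completely.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x1, x2 = 0.3, 0.3 + 1e-15
for step in range(1, 200):
    x1, x2 = logistic(x1), logistic(x2)
    if abs(x1 - x2) > 0.1:  # the initial uncertainty has grown to order one
        print(f"trajectories disagree completely after {step} steps")
        break
```

The divergence takes only a few dozen iterations; shrinking the initial difference buys surprisingly little extra time, since each step roughly doubles the error.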

The thing is... why should we ever have believed that infinite precision was possible, even in principle?  Instead, the amount of information in the universe is finite, a notion reasonably well established by the entropy-based Bekenstein Bound, and also well articulated by Rovelli’s explanation that the possible values of an object’s location in phase space cannot be smaller than a volume that depends on Planck’s constant.  However, even if we can all agree that information in the universe is finite, there is no agreement on whether it is constant.  Most physicists seem to think it’s constant, which is in part what gives rise to the so-called black hole information paradox.

Part of the motivation for believing that information is constant in the universe is that in quantum mechanics, solutions to the Schrodinger Equation evolve linearly and deterministically with time; that is, the amount of information contained in a quantum wave state does not change with time.  Of course, the problem with this is that a quantum wave state is a superposition of possible measurement outcomes (where those possible outcomes are called “eigenstates” of the chosen “measurement operator”)... and we never observe or measure a superposition.  So either the quantum wave state at some point “collapses” into one of the possible measurement outcomes (in which case the evolution is not always linear and the wave state is not universally valid), or it simply appears to collapse as the superposition entangles with (and gets amplified by) the measuring device and ultimately the observer himself, so that the observer evolves into a quantum superposition of having observed mutually exclusive measurement outcomes.  This second situation is called the Many Worlds Interpretation (“MWI”) of QM.

I regard MWI as a nonstarter and give specific reasons why it is nonsensical in the Appendix of this paper.  But there is another deeper reason why I reject MWI: it is a pseudoscientific religion that is lent credibility by many well-known scientists, among them Sean Carroll.  Essentially, neither MWI nor the concept of a Multiverse (which is mathematically equivalent to MWI, according to Leonard Susskind) is empirically testable, which already excludes them from the realm of science.  But more importantly, they both posit the concept of infinity to overcome the fine-tuning problem or Goldilocks Enigma in physics.  People like Carroll don’t like the notion that our universe (which appears “fine-tuned” for the existence of intelligent life) is extraordinarily unlikely, a fact that many (including me) suggest as evidence for the existence of a Creator.  So to overcome odds that approach zero, they simply assert (with no empirical evidence whatsoever) the existence of infinitely many worlds or universes, because 0 * ∞ = 1.  That is, infinity makes the impossible possible.  But just as anything logically follows from a contradiction, anything follows from infinity – thus, infinity is, itself, a contradiction.

Suffice it to say that I’m reasonably sure that at some point a quantum wave state stops being universal.  Call it “collapse” or “reduction” if you will, but the idea is that at some point an object goes from being in a superposition of eigenstates to one particular eigenstate.  (Later in this post, when I discuss the in-principle impossibility of Schrodinger’s Cat, it won’t actually make any difference whether wave state collapse is actual or merely apparent.)  With regard to information, some have characterized this as keeping information constant (e.g., Rovelli), as decreasing the amount of information (e.g., Aaronson), or as increasing the amount of information (e.g., Davies). 

Anyway, here’s what I think: the information in the universe is contained in correlations between entangled objects, and essentially every object is correlated directly or indirectly to every other object (i.e., universal entanglement).  That information is finite, but additional interactions/events between objects (and/or their fields) may increase the information.  (Quick note: whether or not “objects” exist at all, as opposed to just fields, doesn’t matter.  Art Hobson might say that when I throw a rock, all I experience is the electrostatic repulsion between the fields of the rock and my hand, but that doesn’t change that I can treat it as a physical object on which to make predictions about future interactions/events.)  I gave an example using classical reasoning in this post and in this paper, but the idea is very simple. 

For example, imagine a situation in which object A is located either 1cm or 2cm from object B, by which I mean that information in the universe exists (in the form of correlations with other objects in the universe) to localize object A relative to object B at a distance of either 1cm or 2cm.  (As wacky as this situation sounds, it’s conceptually identical to the classic double-slit interference experiment.)  That is, there is a fact – embedded in correlations with other objects – about objects A and B not being separated by 3cm, or 0.5cm, or 1000 light-years, etc., but there is not a fact about whether object A is separated from object B by a distance of 1cm or 2cm.  It’s got nothing to do with knowledge.  It’s not that object A is “secretly” located 1cm from object B and we just don’t know it.  Rather, there just is no fact about whether object A and B are separated by 1cm or 2cm.  (If it were simply a question of knowledge, then we wouldn’t see quantum interference.)

That’s quantum superposition.  In other words, if at some time t0 there is no fact about whether object A and B are located 1cm or 2cm apart, then they exist in a (location) superposition.   Object A would say that object B is in a superposition, just as object B would say that object A is in a superposition.  We might call this relativity of superposition.  It was in this post that I realized that a superposition of one object exists relative to another object, and both objects have the same right to say that the other object is in superposition.  Compare to Special Relativity: there is a fact about an object's speed relative to another, and each object can equally claim that the other is moving at that speed, even though it makes no sense to talk of an object's speed otherwise.  Similarly, there may be a fact about one object being in superposition relative to another, with each object correctly claiming that the other is in superposition, even though it makes no sense to talk of an object in superposition without reference to another object.

Whether or not a superposition exists is a question of fact.  If a superposition exists (because the facts of the universe are inadequate to reduce it), then the rules of quantum mechanics, which depend on interference, apply to probabilistic predictions; if a superposition does not exist, then ordinary classical probability will suffice because interference terms vanish.  If at some time t1 an event occurs that correlates objects A and B in a way that excludes the possibility of their being separated by 2cm, then that correlating event is new information in the universe about object A being located 1cm from object B.  Importantly, that information appears at time t1 and does not retroactively apply.  We cannot now say that objects A and B were in fact separated by 1cm at time t0 but we simply didn’t know.  Indeed, this is the very mistake often made in the foundations of physics that I addressed in this soon-to-be-published paper.  Said another way, if an object’s superposition is simply the lack of a relevant fact, then the measurement of that object in a manner (a “measurement basis”) that reduces the superposition is new information.  By “measurement,” I simply mean the entanglement of that object with other objects in the universe that are already well correlated to each other.
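
To put the difference in symbols (my notation, purely for illustration): if α and β are the probability amplitudes for the 1cm and 2cm possibilities, then so long as no fact exists, the probability of an outcome that both possibilities can feed is |α + β|² = |α|² + |β|² + 2Re(αβ*), where the final term is the interference term.  Once a correlating fact exists, that term vanishes and we are left with the ordinary classical sum |α|² + |β|².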

By the way, I have no idea how or why the universe produces new information when an event reduces a quantum superposition, but this is not a criticism of my explanation.  Either new information arises in the universe or it doesn’t, but since we have no scientific explanation for the existence of any information, I don’t see how the inexplicable appearance of all information at the universe’s birth is any more intellectually satisfying than the inexplicable appearance of information gradually over time.

I should also mention that when I say “superposition,” I almost always mean in the position basis.  The mathematics of QM is such that a quantum state (a unit-length ray in Hilbert space) can be projected onto any basis so that an eigenstate in one basis is actually a superposition in another basis.  However, the mathematics of QM has failed to solve many of the big problems in the foundations of physics and has arguably rendered many of them insoluble (at least if we limit ourselves to the language and math of QM).  I have far more confidence in the explanatory powers of logic and reason than in the equations of QM.  So even though I fully understand that, mathematically, every pure quantum state is a superposition in some basis, when I say an object is in superposition, I nearly always mean that it is in a location or position superposition relative to something else.  There are lots of reasons for this choice.  First, I don’t really understand how a measurement can be made in anything but the position basis; other scholars have made the same point, so I’m not alone.  We typically measure velocity, for example, by measuring location at different times.  We could measure the velocity of an object by bouncing light off it and measuring its redshift, but without giving it a great deal of thought, I suspect that even measuring redshift ultimately comes down to measuring the location of some object that absorbs or scatters from the redshifted photon.  And when I say that we only measure things in the position basis, I don’t just mean in a lab... our entire experience comes down to the localization of objects over time.  In other words, the most obvious way (and arguably the only way) to imagine a quantum superposition is in the position basis.

Second, the universe has clearly already chosen the position basis as a preferred basis.  Objects throughout the universe are already well localized relative to each other.  When an object exists in a (location) superposition, other objects and fields are constantly bathing that object to localize it and thereby reduce or decohere the superposition in the position basis.  In fact, it is the localizing effects of objects and fields throughout the universe that makes the creation of (location) superpositions of anything larger than a few atoms very difficult.  The concept of decoherence can explain why superpositions tend to get measured in the basis of their dominating environment, but does not explain why the universe chose the position basis to impose on superpositions in the first place.  Nevertheless, there is something obviously special about the position basis.

Transitivity of Correlation

Because information exists in the form of correlations between objects that evidence the occurrence of past events (and correspondingly limit future possible events), that information exists whether or not observers know it – whether it concerns their own subsystem, a different correlated subsystem, or even a different uncorrelated subsystem.

Consider an object A that is well correlated in location to an object B, by which I mean that relative to object A, there is a fact about the location of object B (within some tolerance, of course) and object B is not in a location superposition relative to object A.  (Conversely, relative to object B, there is a fact about the location of object A and object A is not in a location superposition relative to object B.)  Object A may be well correlated to object B whether or not object A “knows” the location of B or can perform an interference experiment on an adequate sampling of identically prepared objects to show that object B is not in a location superposition relative to object A.  The means by which objects A and B became well correlated is irrelevant, but may be due to prior interactions with each other and each other’s fields (electromagnetic, gravitational, etc.), mutual interaction with other objects and their fields, and so forth.  Now consider an object C that is well correlated in location to object B; object C must also be well correlated to object A.  That is, if object C is not in a location superposition relative to object B, then it is not in a location superposition relative to object A, whether or not object A “knows” anything about object C or can perform an interference experiment to test whether object C is in a location superposition relative to object A.

I’ll call this notion the transitivity of correlation.  It seems insanely obvious to me, but I can’t find it in the academic literature.  Consider a planet orbiting some random star located a billion light years away.  I certainly have never interacted directly with that planet, and I may have never even interacted with an object (such as a photon) that has been reflected or emitted by that planet.  Nevertheless, that planet is still well localized to me; that is, there is a fact about its location relative to me to within some very, very tiny Planck-level fuzziness.  I don’t know the facts about its location, but if I were to measure it (to within some tolerance far exceeding quantum uncertainty), classical calculations would suffice.  I would have no need of quantum mechanics because it is well correlated to me and not in a superposition relative to me.  This is true because of the transitivity of correlation: the planet is well correlated to its sun, which is well correlated to its galaxy, which is well correlated to our galaxy, which is well correlated to our sun, etc.

The thing is – everything in the universe is already really, really well correlated, thanks to a vast history of correlating events, the evidence of which is embedded in mutual entanglements.  But for the moment let’s imagine subsystem A that includes a bunch of well-correlated objects (including object 1) and subsystem B that includes its own bunch of well-correlated objects (including object 2), but the two subsystems are not themselves well correlated.  In other words, they are in a superposition relative to each other because information does not exist to correlate them.  From the perspective of an observer in A, the information that correlates the objects within B exists but is unknown, while information that would well-correlate objects 1 and 2 does not exist.  However, events/interactions between objects 1 and 2 create new information to reduce their relative superpositions and make them well correlated.  Then, because objects in A are already well correlated to object 1, while objects in B are already well correlated to object 2, the events that correlate objects 1 and 2 correspondingly (and “instantaneously”) correlate all the objects in subsystem A to all the objects in subsystem B.

This is a paraphrasing of what Einstein called “spooky action at a distance” (and also what many scholars have argued is a form of weird or impermissible nonlocality in QM).  But explained in the manner above, I don’t find this spooky at all.  Rather, from the perspective of an observer in subsystem A, unknown facts of past events are embedded in entanglements within subsystem B, while there simply are no facts (or perhaps inadequate facts) to correlate subsystems A and B.  Once those facts are newly created (not discovered, but created) through interactions between objects 1 and 2, the preexisting facts between objects in subsystem B become (new) facts between objects in both subsystems.  Let me say it another way.  An observer OA in subsystem A is well correlated to object 1, and an observer OB in subsystem B is well correlated to object 2, but they are not well correlated to each other; i.e., they can both correctly say that the other observer is in a superposition.  When object 1 becomes well correlated to object 2, observer OA becomes well correlated to observer OB.  This correlation might appear “instantaneous” with the events that correlate objects 1 and 2, but there’s nothing spooky or special-relativity-violating about this.  Rather, observer OA was already well correlated to object 1 and observer OB was already well correlated to object 2, so they become well correlated to each other upon the correlation of object 1 to object 2.

Of course, because everything in the universe is already so well correlated, the above scenario is only possible if one or both of the subsystems are extremely small.  The universe acts like a superobserver “bully” constantly decohering quantum superpositions, and in the process creating new information, in its preferred (position) basis.  Still, if the universe represents subsystem A, then a small subsystem B can contain its own facts (i.e., embedded history of events) while being in superposition relative to A.  Imagine subsystem B containing two correlated particles P1 and P2 – e.g., they have opposite spin (or opposite momentum).  When the position of particle P1 is then correlated (e.g., by detection, measurement, or some other decoherence event) to objects in subsystem A (i.e., the rest of the universe), that position corresponds to a particular spin (or momentum).  But because particle P1 was already correlated to its entangled particle P2, the spin (or momentum) of particle P2 is opposite, a fact that preceded detection of particle P1 by the universe.  That fact will be reflected in any detection of particle P2 by the universe.  Further, facts do not depend on observer status.  An observer relative to particles P1 and P2 has as much right to say that the universe was in superposition and that the correlation event (between P1 and objects in the rest of the universe) reduced the superposition of the universe. 

This explanation seems so obviously correct that I don’t understand why, in all my reading and courses in QM, no one has ever explained it this way.  To buttress the notion that facts can exist in an uncorrelated subsystem (and that measurement of that subsystem by a different or larger system creates correlating facts to that subsystem but not within that subsystem), consider this.  As I walk around in the real world, thanks to natural quantum dispersion, I am always in quantum superposition relative to the rest of the world, whether we consider my center of mass or any other measurable quantity of my body.  Not by much, of course!  But given that the universe does not contain infinite information, there must always be some tiny superposition fuzziness between me and the universe – yet that doesn’t change the fact that my subsystem includes lots of correlating facts among its atoms.

Killing Schrodinger’s Cat

I tried to explain in this draft paper why Schrodinger’s Cat and Wigner’s Friend are impossible in principle, but the explanation still eludes the few who have read it.  What follows builds on the explanation I gave in that draft paper.  The idea of SC/WF is that there is some nonzero time period in which some macroscopic system (e.g., a cat) is in an interesting superposition (e.g., of states |dead> and |alive>) relative to an external observer.  It is always (ALWAYS) asserted in the academic literature that while detecting such a superposition would be difficult or even impossible in practice, it is possible in principle. 

Let’s start with some object in superposition over locations A and B, so that from the perspective of the box (containing the SC experiment) and the external observer, its state is the (unnormalized) superposition |A> + |B>.  However, from the perspective of the object, the box is in a superposition of being located in positions W and X in corresponding states |boxW> and |boxX> and the observer is in a superposition of being located in positions Y and Z in corresponding states |obsY> and |obsZ>.  But remember that the box and observer are, at the outset of the experiment, already well correlated in their positions, which means that from the object’s perspective, the system is in state |boxW>|obsY> + |boxX>|obsZ>.  When the object finally gets correlated to the box, it “instantly” and necessarily gets correlated to the observer.  It makes no difference whether the quantum wave state actually collapses/reduces or instead evolves linearly and universally.  Either way – whether the wave state remains |boxW>|obsY> + |boxX>|obsZ> or collapses into, say, |boxW>|obsY> – there is never an observer who is not yet correlated to the box and who can do an interference experiment on it to confirm the existence of a superposition.  Once the object “measures” the position of the box, it inherently measures the position of the observer, which means that there is never an observer for whom the box is in a superposition (unless the box is already in a superposition relative to the observer, which, as I pointed out in the draft paper, is impossible because of very fast decoherence of macroscopic objects).
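
A toy calculation makes this concrete.  Below is a minimal numpy sketch (my own toy model, not from any published treatment): the object, box, and observer are each reduced to two states, the final state is |A>|boxW>|obsY> + |B>|boxX>|obsZ>, and the numbers show both that the object retains no coherence and that the object’s location is already perfectly correlated with the observer’s.

```python
import numpy as np

# Toy model (my own, two states per system): object locations A/B, box
# positions W/X, observer positions Y/Z.  State after the object correlates
# to the (already box-observer-correlated) system:
#   (|A>|boxW>|obsY> + |B>|boxX>|obsZ>) / sqrt(2)
A, B = np.array([1., 0.]), np.array([0., 1.])
W, X, Y, Z = A, B, A, B

def kron(*vs):
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

psi = (kron(A, W, Y) + kron(B, X, Z)) / np.sqrt(2)

# Density matrix with separate indices (object, box, obs, object', box', obs')
rho = np.outer(psi, psi).reshape(2, 2, 2, 2, 2, 2)

rho_obj_obs = np.einsum('abcdbf->acdf', rho)   # trace out the box
rho_obj = np.einsum('asbs->ab', rho_obj_obs)   # then trace out the observer
print(np.round(rho_obj, 3))
# [[0.5 0. ]
#  [0.  0.5]]  <- zero off-diagonals: no observer can see the object interfere

joint = np.einsum('asas->as', rho_obj_obs).real  # joint (object, observer) stats
print(np.round(joint, 3))
# [[0.5 0. ]
#  [0.  0.5]]  <- object-at-A always comes with observer-at-Y, B with Z
```

The off-diagonal terms of the object’s reduced state vanish the moment the correlation is made, and the joint statistics show there is no step at which the object is correlated to the box but not to the observer.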

In Eq. 1 of the draft paper, I show a simple von Neumann progression, with each arrow (→) representing a time evolution.  If that progression is right, then there is a moment when the measuring system is in superposition but the observer is not.  The problem with that conclusion, as I’ve been trying to explain, is that because the observer is already well correlated to the measuring system (and the box, cat, etc.) to within a tolerance far better than the distance separating the object’s eigenstates |n>, correlation of the object’s location to that of the measuring device “instantly” correlates the observer and the rest of the universe.  Once there is a fact (and thus no superposition) about the object’s location relative to the measuring device, there is also a fact (and thus no superposition) about the object’s location relative to the observer.  There is no time at which the observer can attempt an interference experiment to confirm that the box (or cat or measuring device, etc.) are in a superposition.

Therefore, the SC and WF experiments are impossible in principle.  An MWI apologist might retort that Schrodinger’s cat is, indeed, in a superposition, along with the observer.  But since there is no time at which an interference experiment could be performed, even in principle, to confirm this, their claim is both unscientific and useless.  They may as well say that unicorns exist but are impossible to detect.

Other Thoughts and Implications

CCCH.  In this post, I heavily criticized a paper that incorrectly asserted that the consciousness-causes-collapse hypothesis (“CCCH”) has already been empirically falsified.  I did so because I despise bad logic and bad science, not because I endorse CCCH.  Indeed, if my explanation of QM is correct, then collapse (or apparent collapse) of the quantum wave function has nothing to do with consciousness but is rather the result of new information produced from correlating interactions with the environment/universe.

The Damn Cat.  What I haven’t discussed so far are “interesting” evolutions due to amplifications of quantum events.  The argument above, which is an extension of the argument in my draft paper on Schrodinger’s cat, is essentially that quantum amplification of a tiny object in superposition cannot place the macroscopic box containing SC in a superposition relative to the observer any more easily than the box itself can naturally disperse (via quantum uncertainty) relative to the observer.  And decoherence by objects and fields in the universe makes adequate dispersion of the box impossible even in principle.  But there’s more.  If I am right in my explanation of QM, then interacting subsystems will agree on the facts embedded in their correlations.  For example, if subsystem B is in a superposition of being located 1cm and 2cm from subsystem A, then when an observer in subsystem A “measures” B at a distance of, say, 1cm, an observer in subsystem B will find that B measured A at a distance of 1cm as well.

In the case of SC (and WF), I analyzed them in terms of the fuzziness of objects: for example, a particular carbon atom that would have been located in the live cat’s brain is instead located in the dead cat’s tail, thus requiring a quantum fuzziness spanning a meter or so.  But of course SC is far more interesting: in one case there is a live cat and in another case there is a dead cat, represented by a vastly different set of correlations among its constituent atoms. 

In order for a SC superposition to actually be produced relative to the external observer (and the universe to which he is well correlated), it must be the case that a “comparable” superposition of the universe is produced relative to SC.  Then, when an event occurs that correlates the two systems – which, remember, can ultimately be traced back to the location of some tiny particle at A or B, separated by some tiny distance – observers in each of the two systems will agree on the correlations.  So let’s say we’ve somehow created a SC superposition.  We wait a few minutes, pop a bottle of champagne to celebrate the amazing feat, and then we open the box to look, at which point we see a live cat – that is, we (and the rest of the universe) become correlated with |live>.  But since the correlations among the atoms in the box exist before we open the box, the hypothetical state |dead> must be correlated with a universe that would have seen that exact set of facts as a dead cat.  How does one measure a live cat as dead?  Remember, we are not just talking about measuring a heartbeat, etc... we are talking about a universe that is constantly inundating the cat’s atoms in a way so that every observer in that universe would observe a dead cat.  That is, if it were possible to produce a cat in a superposition of |dead> and |alive> inside a box, then from the perspective of an observer inside the box, the universe would have to be in a superposition of being in a state that would measure a set of atoms as being a dead cat and another state that would measure the same set of atoms as being a live cat.  Ridiculous.

Saturday, June 13, 2020

Killing Schrodinger’s Cat – Once and For All

Background of Schrodinger’s Cat

Quantum mechanics stops where classical probabilities start.  In the classical world, we work with probabilities directly, while in quantum mechanics, we work with probability amplitudes, which are complex numbers (involving that weird number i), before applying the Born rule, which requires squaring the norm of the amplitude to arrive at probability.  If you don’t know anything about quantum mechanics, this may sound like gibberish, so here’s an example showing how quantum mechanics defies the rules of classical probability.

I shine light through a tiny hole toward a light detector and take a reading of the light intensity.  Then I repeat the experiment, the only difference being that I’ve punched a second tiny hole next to the first one.  Classical probability (and common sense!) tells us that the detector should measure at least the same light intensity as before, but probably more.  After all, by adding another hole, surely we are allowing more light to reach the detector... right?!  Nope.  We could actually measure less light, because through the process of interference among eigenstates in a superposition, quantum mechanics screws up classical probability.  In some sense, the violation of classical probability, which tends to happen only in the microscopic world, is really what QM is all about.  And when I say “microscopic,” what I really mean is that the largest object on which an interference experiment has been performed (thus demonstrating QM effects) is a molecule of a few hundred amu, which is much, much, much, much smaller than can be seen with the naked eye or even a light microscope.  So we have no direct empirical evidence that the rules of QM even apply to macroscopic objects.
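
For the skeptical reader, here’s the arithmetic of that two-hole surprise as a few lines of Python (made-up amplitudes chosen for perfect destructive interference; real detectors and geometries are messier):

```python
import numpy as np

# Made-up amplitudes at one detector position, chosen so the path-length
# difference between the two holes is half a wavelength (phase difference pi).
a1 = 1.0 + 0.0j                 # amplitude for light arriving via hole 1
a2 = 1.0 * np.exp(1j * np.pi)   # amplitude via hole 2, half a wavelength out of step

one_hole  = abs(a1)**2                # intensity with only hole 1 open: 1.0
classical = abs(a1)**2 + abs(a2)**2   # "add probabilities": 2.0 (more light)
quantum   = abs(a1 + a2)**2           # add amplitudes, then square: 0.0 (less light!)

print(one_hole, classical, round(quantum, 10))
```

Opening the second hole doubles the intensity classically but can drive it to zero quantum mechanically, because the amplitudes are added before the squaring.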

Having said that, many physicists and philosophers insist that there’s no limit “in principle” to the size of an object in quantum superposition.  The question I’ve been wrestling with for a very long time is this: is there an actual dividing line between the “micro” and “macro” worlds at which QM is no longer applicable?  The “rules” of quantum mechanics essentially state that when one quantum object interacts with another, they just entangle to create a bigger quantum object – that is, until the quantum object becomes big enough that normal probability rules apply, and/or when the quantum object becomes entangled with a “measuring device” (whatever the hell that is).  The so-called measurement problem, and the ongoing debates regarding demarcation between “micro” and “macro,” have infested physics and the philosophy of quantum mechanics for the past century.

And no thought experiment better characterizes this infestation than the obnoxiously annoying animal called Schrodinger’s Cat.  The idea is simple: a cat is placed in a box in which the outcome of a tiny measurement gets amplified so that one outcome results in a dead cat while the other outcome keeps the cat alive.  (For example, a Geiger counter measures a radioisotope so that if it “clicks” in a given time period, a vial of poison is opened.)  Just before we open the box at time t0, there’s been enough time for the poison to kill the cat, so we should expect to see either a live or dead cat.  Here’s the kicker: the “tiny measurement” is on an object that is in quantum superposition, to which the rules of classical probability don’t apply. 

So does the quantum superposition grow and eventually entangle with the cat, in which case, just prior to time t0, the cat is itself in a superposition of “dead” and “alive” states (and to which the rules of classical probability do not apply)?  Or does the superposition, before entangling with the cat, reduce to a probabilistic mixture, such as through decoherence or collapse of the wave function?  And what the hell is the difference?  If the cat is in a superposition just prior to time t0, then there just is no objective fact about whether the cat is dead or alive, and our opening of the box at t0 is what decoheres (or collapses or whatever) the entangled wave state, allowing the universe to then randomly choose a dead or live cat.  However, if the cat is in a mixed state just prior to t0, then there is an objective fact about whether it is dead or alive – but we just don’t know the fact until we open the box.  So the question really comes down to this: do we apply classical probability or quantum mechanics to Schrodinger’s Cat?  Or, to use physics terminology, the question is whether, just prior to opening the box, Schrodinger’s Cat is in a coherent superposition or a probabilistic mixed state. 

Why is this such a hard question?

It’s a hard question for a couple reasons.  First, remember that QM is about statistics.  We never see superpositions.  The outcome of every individual trial of every experiment ever performed in the history of science has been consistent with the absence of quantum superpositions.  Rather, superpositions are inferred when the outcomes of many, many trials of an experiment on “identically prepared” objects don’t match what we would have expected from normal probability calculations.  So if the only way to empirically distinguish between a quantum cat and a classical cat requires doing lots of trials on physically identical cats... ummm... how exactly do we create physically identical cats?  Second, the experiment itself must be an “interference” experiment that allows the eigenstates in the wave state to interfere, thus changing normal statistics into quantum statistics.  This is no easy task in the case of Schrodinger’s Cat, and you can’t just do it by opening the box and looking, because the probabilities of finding the cat dead or alive will be the same whether or not the cat was in a superposition just prior to opening the box.  So doing lots of trials is not enough – they must be trials of the right kind of experiment – i.e., an interference experiment.  And in all my reading on SC, I have never – not once – encountered anything more than a simplistic, hypothetical mathematical treatment of the problem.  “All you have to do is measure the cat in the basis {(|dead> + |live>)/√2, (|dead> - |live>)/√2}!  Easy as pie!”  But the details of actually setting up such an experiment are so incredibly, overwhelmingly complicated that it’s unlikely that it is physically possible, even in principle.
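
To make the “right kind of experiment” point concrete, here is a small density-matrix sketch (a toy calculation of my own; |dead> and |live> are just abstract basis vectors).  Opening the box – measuring in the {|dead>, |live>} basis – gives identical 50/50 statistics for the superposition and the mixture; only the rotated-basis measurement would distinguish them.

```python
import numpy as np

dead, live = np.array([1., 0.]), np.array([0., 1.])
plus  = (dead + live) / np.sqrt(2)
minus = (dead - live) / np.sqrt(2)

rho_superposition = np.outer(plus, plus)                                # coherent superposition
rho_mixture = 0.5 * np.outer(dead, dead) + 0.5 * np.outer(live, live)  # classical mixture

def stats(rho, basis):
    # Born-rule probabilities for each basis vector
    return [round(float(b @ rho @ b), 3) for b in basis]

# "Open the box and look" (measure in the {|dead>, |live>} basis): identical
print(stats(rho_superposition, [dead, live]))  # [0.5, 0.5]
print(stats(rho_mixture,       [dead, live]))  # [0.5, 0.5]

# Interference experiment (measure in the rotated basis): distinguishable
print(stats(rho_superposition, [plus, minus]))  # [1.0, 0.0]
print(stats(rho_mixture,       [plus, minus]))  # [0.5, 0.5]
```

The math of the rotated-basis measurement is two lines; physically implementing it on a cat is another matter entirely, which is the point of this section.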

There’s a further complication.  If SC is indeed in a quantum superposition prior to t0, then there is no fact about whether the cat is dead or alive.  But don’t you think the cat would disagree?  OK, so if you believe cats don’t think, an identical thought experiment involving a human is called Wigner’s Friend: physicist Eugene Wigner has a friend who performs a measurement on a quantum object in a closed, isolated lab.  Just before Wigner opens the door to ask his friend about the outcome of the measurement, is his friend in a superposition or a mixed state?  If Wigner’s Friend is in a superposition, then that means there is no fact about which outcome he measured, but surely he would disagree!  Amazingly, those philosophers who argue that WF is in a superposition actually agree that when he eventually talks to Wigner, he will insist that he measured a particular outcome, and that he remembers doing the measurement, and so forth, so they have to invent all kinds of fanciful ideas about memory alteration and erasure, retroactive collapse, etc., etc.  All this to continue to justify the “in-principle” possibility of an absolutely ridiculous thought experiment that has done little more than confuse countless physics students.

I’m so tired of this.  I’m so tired of hearing about Schrodinger’s Cat and Wigner’s Friend.  I’m so tired of hearing the phrase “possible in principle.”  I’m so sick of long articles full of quantum mechanics equations that “prove” the possibility of SC without any apparent understanding of the limits to those equations, the validity of their assumptions, or the extent to which their physics analysis has any foundation in the actual observable physical world.  David Deutsch’s classic paper is a prime example, in which he uses lots of “math” to “prove” not only that the WF experiment can be done, but that WF can actually send a message to Wigner, prior to t0, that is uncorrelated to the measurement outcome.  Then, in a couple of sentences in Section 8.1, he casually mentions that his analysis assumes that: a) computers can be conscious; and b) Wigner’s Friend’s lab can be sufficiently isolated from the rest of the universe.  Assumption a) is totally unfounded, which I discuss in this paper and this paper and in this post and this post, and I’ll refute assumption b) now.

Why the Schrodinger Cat experiment is not possible, even in principle

Let me start by reiterating the meaning of superposition: a quantum superposition represents a lack of objective fact.  I’m sick of hearing people say things like “Schrodinger’s Cat is partially alive and partially dead.”  No.  That’s wrong.  Imagine an object in a superposition state |A> + |B>.  As soon as an event occurs that correlates one state (and not the other) to the rest of the universe (or the “environment”), then the superposition no longer exists.  That event could consist of a single photon that interacts with the object in a way that distinguishes the eigenstates |A> and |B>, even if that photon has been traveling millions of years through space prior to interaction, and continues to travel millions of years more through space after interaction.  The mere fact that evidence that distinguishes |A> from |B> exists is enough to decohere the superposition into one of those eigenstates.

In the real world there could never be a SC superposition because a dead cat interacts with the universe in very different (and distinguishable) ways from a live cat... imagine the trillions of impacts per second with photons and surrounding atoms that would differ depending on the state of the cat.  Now consider that ONE such impact is all it takes to immediately destroy any potential superposition.  (I pointed out in this post how a group of researchers showed that relativistic time dilation on the Earth’s surface was enough to prevent macroscopic superpositions!)  And that’s why philosophers who discuss the possibility of SC often mention the requirement of “thermally isolating” it.  What they mean is that we have to set up the experiment so that not even a single photon can be emitted, absorbed, or scattered by the box/lab in a way that is correlated to the cat’s state.  This is impossible in practice; however, they claim it is possible in principle.  In other words, they agree that decoherence would kill the SC experiment by turning SC into a normal probabilistic mixture, but claim that decoherence can be prevented by the “in-principle possible” act of thermally isolating it.

Wrong.

In the following analysis, all of the superpositions will be location superpositions.  There are lots of different types of superpositions, such as spin, momentum, etc., but every actual measurement in the real world is arguably a position measurement (e.g., spin measurements are done by measuring where a particle lands after its spin interacts with a magnetic field).  So here’s how I’ll set up my SC thought experiment.  At time t0, the cat, measurement apparatus, box, etc., are thermally isolated so that (somehow) no photons, correlated to the rest of the universe, can correlate to the events inside the box and thus prematurely decohere a quantum superposition.  I’ll even go a step further and place the box in deep intergalactic space where the spacetime has essentially zero curvature to prevent the possibility that gravitons could correlate to the events inside the box and thus gravitationally decohere a superposition.  I’ll also set it up so that, when the experiment begins at t0, a tiny object is in a location superposition |A> + |B>, where eigenstates |A> and |B> correspond to locations A and B separated by distance D.  (I’ve left out coefficients, but assume they are equal.)  The experiment is designed so that the object remains in superposition until time t1, when the location of the object is measured by amplifying the quantum object with a measuring device so that measurement of the object at location A would result in some macroscopic mass (such as an indicator pointer of the measuring device) being located at position MA in state |MA>, while a measurement at location B would result in the macroscopic mass being located at position MB in state |MB>.  Finally, the experiment is designed so that location of the macroscopic mass at position MA would result, at later time t2, in a live cat in state |live>, while location at position MB would result in a dead cat in state |dead>.  Here’s the question: at time t2, is the resulting system described by the superposition |A>|MA>|live> + |B>|MB>|dead>, or by the mixed state of 50%  |A>|MA>|live> and 50% |B>|MB>|dead>?

First of all, I’m not sure why decoherence doesn’t immediately solve this problem.  At time t0, the measuring device, the cat, and the box are already well correlated with each other; the only thing that is not well correlated is the tiny object.  In fact, that’s not even true... the tiny object is well correlated to everything in the box in the sense that it will NOT be detected in locations X, Y, Z, etc.; instead, the only lack of correlation (and lack of fact) is whether it is located at A or B.  But as soon as anything in the box correlates to the tiny object’s location at A or B, then a superposition no longer exists and a mixed (i.e., non-quantum) state emerges.  So it seems to me that the superposition has already decohered at time t1 when the measuring device, which is already correlated to the cat and box, entangles with the tiny object.  In other words, it seems logically necessary that at t1, the combination of object with measuring device has already reduced to the mixed state 50%  |A>|MA> and 50% |B>|MB>, so clearly by later time t2 the cat is, indeed, either dead or alive and not in a quantum superposition.

Interestingly, even before t1, the gravitational attraction by the cat might actually decohere the superposition!  If the tiny object is a distance R>>D from the cat having mass Mcat, then the differential acceleration on the object due to its two possible locations relative to the cat is approximately GMcatD/2R³.  How long will it take for the object to then move a measurable distance δx?  For a 1kg cat located R=1m from the tiny object, t ≈ 170000 √(δx/D), where t is in seconds.  If we require the tiny object to traverse the entire distance D before we call it “measurable” (which is ridiculous but provides a limiting assumption), then t ≈ 170000 s.  However, if we allow motion over a Planck length to be “measurable” (which is what Mari et al. assume!), and letting D be something typical for a double slit experiment, such as 1μm, then t ≈ 1ns.  (This makes me wonder how much gravity interferes with maintaining quantum superpositions in the growing field of quantum computing, and whether it will ultimately prevent scalable, and hence useful, quantum computing.)
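
Here is that back-of-the-envelope calculation in runnable form.  It reproduces my numbers above under the rough assumption that the drift between branches grows as δx ≈ at² – good enough for orders of magnitude, nothing more:

```python
import math

G, M_cat, R, D = 6.674e-11, 1.0, 1.0, 1e-6   # SI units: 1kg cat, 1m away, D = 1 micron

a = G * M_cat * D / (2 * R**3)   # differential acceleration between the two branches

def drift_time(dx):
    # time for the branches to drift apart by dx, using dx ~ a*t^2
    # (no factor of 1/2; this is order-of-magnitude arithmetic only)
    return math.sqrt(dx / a)

print(f"{drift_time(D):.3g} s")                     # ~1.7e5 s if the drift must equal D itself
planck_length = 1.6e-35                             # m
print(f"{drift_time(planck_length) * 1e9:.2g} ns")  # ~0.7 ns for a Planck-length drift
```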

Gravitational decoherence or not, it seems logically necessary to me that by time t1, the measuring device has already decohered the tiny object’s superposition.  I’m not entirely sure how a proponent of SC would reply, as very few papers on SC actually mention decoherence, but I assume the reply would be something like: “OK, yes, decoherence has happened relative to the box, but the box is thermally isolated from the universe, so the superposition has not decohered relative to the universe and outside observers.”  Actually, I think this is the only possible objection – but it is wrong.

When I set up the experiment at time t0, the box (including the cat and measuring device inside) was already extremely well correlated to me and the rest of the universe.  Those correlations don’t magically disappear by “isolating” it.  In fact, Heisenberg’s Uncertainty Principle (HUP) tells us that correlations are quite robust and long-lasting, and the development of quantum “fuzziness” becomes more and more difficult as the mass of an object increases: Δx(mΔv) ≥ ℏ/2.

Let’s start by considering a tiny dust particle, which is much, much, much larger than any object that has currently demonstrated quantum interference.  We’ll assume it is a 50μm diameter sphere with a density of 1000 kg/m³ and an impact with a green photon (λ ≈ 500nm) has just localized it.  How long will it take for its location fuzziness to exceed distance D of, say, 1μm?  Letting Δv = ℏ/(2mΔx) ≈ 1 × 10⁻¹⁷ m/s, it would take 10¹¹ seconds (around 3200 years) for the location uncertainty to reach a spread of 1μm.  In other words, if we sent a dust particle into deep space, its location relative to other objects in the universe is so well defined due to its correlations to those objects that it would take several millennia for the universe to “forget” where the dust particle is within the resolution of 1μm.  Information would still exist to localize the dust particle to a resolution of around 1μm, but not less.  But this rough calculation depends on a huge assumption: that new correlation information isn’t created in that time!  In reality, the universe is full of particles and photons that constantly bathe (and thus localize) objects.  I haven’t done the calculation to determine just how many localizing impacts a dust particle in deep space could expect over 3200 years, but it’s more than a handful.  So there’s really no chance for a dust particle to become delocalized relative to the universe.

So what about the box containing Schrodinger’s Cat?  I have absolutely no idea how large the box would need to be to “thermally isolate” it so that information from inside does not leak out – probably enormous so that correlated photons bouncing around inside the box have sufficient wall thickness to thermalize before being exposed to the rest of the universe – but for the sake of argument let’s say the whole experiment (cat included) has a mass of a few kg.  It will now take 10¹¹ times longer, or around 300 trillion years – or 20,000 times longer than the current age of the universe – for the box to become delocalized from the rest of the universe by 1μm, assuming it can somehow avoid interacting with even a single stray photon passing by.  Impossible.  (Further, I neglected gravitational decoherence due to interaction with other objects in the universe, but 300 trillion years is a long time.  Gravity may be weak, but it’s not that weak!)
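
Both dispersion estimates can be checked with a few lines of Python.  Two assumptions of mine are flagged in the comments: the photon localizes the grain to about λ/2π, and the “few kg” box is taken as 6.5 kg so that the mass ratio comes out to roughly 10¹¹.

```python
import math

hbar = 1.0546e-34   # J*s

# Dust grain: 50-micron-diameter sphere, density 1000 kg/m^3
m_dust = (4/3) * math.pi * (25e-6)**3 * 1000.0   # ~6.5e-11 kg

# Assumption: the green photon localizes the grain to about lambda/(2*pi)
dx0 = 500e-9 / (2 * math.pi)
dv = hbar / (2 * m_dust * dx0)                   # HUP minimum velocity spread
t_dust = 1e-6 / dv                               # time to develop 1 micron of fuzziness
print(f"dv ~ {dv:.1e} m/s; dust: {t_dust:.1e} s ~ {t_dust / 3.15e7:,.0f} years")
# -> dv ~ 1e-17 m/s; ~1e11 s, around 3,200 years

# Whole SC experiment at a "few kg" (6.5 kg makes the mass ratio ~1e11);
# the delocalization time scales linearly with mass
t_box = t_dust * 6.5 / m_dust
print(f"box: {t_box / 3.15e7:.1e} years")        # ~3e14 years, hundreds of trillions
```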

What does this tell us?  It tells us that the SC box will necessarily be localized relative to the universe (including any external observer) to a precision much, much smaller than the distance D that distinguishes eigenstates |A> and |B> of the tiny object in superposition.  Thus, when the measuring device inside the box decoheres the superposition relative to the box, it also does so relative to the rest of the universe.  If there is a fact about the tiny object’s position (say, in location A) relative to the box, then there is also necessarily a fact about its position relative to the universe – i.e., decoherence within the box necessitates decoherence in general.  An outside observer may not know its position until he opens the box and looks, but the fact exists before that moment.  When a new fact emerges about the tiny object’s location due to interaction and correlation with the measuring device inside the box, then that new fact eliminates the quantum superposition relative to the rest of the universe, too.

And, by the way, the conclusion doesn’t change by arbitrarily reducing the distance D.  A philosopher might reply that if we make D really small, then eventually localization of the tiny object relative to the box might not localize it relative to the universe.  Fine.  But ultimately, to make the SC experiment work, we have to amplify whatever distance distinguishes eigenstates |A> and |B> to some large macroscopic distance.  For instance, the macroscopic mass of the measuring device has eigenstates |MA> and |MB> which are necessarily distinguishable over a large (i.e., macroscopic) distance – say 1cm, which is 10⁴ times larger than D=1μm.  (Even at the extreme end, to sustain a superposition of the cat, if an atom in a blood cell would have been located in the cat’s head in state |live> but in its tail in state |dead> at a particular time, then quantum fuzziness would be required on the order of 1m.)

What this tells us is that quantum amplification doesn’t create a problem where none existed.  If there is no physical possibility, even in principle, of creating a macroscopic quantum superposition by sending a kilogram-scale object into deep space and waiting for quantum fuzziness to appear – whether or not you try to “thermally isolate” it – then you can’t stuff a kilogram-scale cat in a box and depend on quantum amplification to outsmart nature.  There simply is no way, even in principle, to adequately isolate a macroscopic object (cat included) to allow the existence of a macroscopic quantum superposition.

Thursday, June 11, 2020

Quantum Superpositions Are Relative

At 4AM I had an incredible insight.

Here’s the background.  I’ve been struggling recently with the notion of gravitational decoherence of the quantum wave function, as discussed in this post.  The idea is neither new nor complicated: if the gravitational field of a mass located in Position A would have a measurably different effect on the universe (even on a single particle) than the mass located in Position B, then its state cannot be a superposition over those two locations.

Generally, we think of impacts between objects/particles as causing the decoherence of a superposition.  For instance, in the typical double-slit interference experiment, a particle’s wave state “collapses” either when the particle impacts a detector in the far field or when we measure the particle at one of the slits by bouncing a photon off it.  In either case, one or more objects (such as photons), already correlated to the environment, get correlated to the particle, thus decohering its superposition.

But what if the decohering “impact” is due to the interaction of a field on another particle far away?  Given that field propagation does not exceed the speed of light, when does decoherence actually occur?  That’s of course the question of gravitational decoherence.  Let’s say that mass A is in a superposition over L and R locations (separated by a macroscopic distance), which therefore creates a superposition of gravitational fields fL and fR that acts on a distant mass B (where masses A and B are separated by distance d).  For the sake of argument, mass B is also the closest object to mass A.  Let’s say that mass B interacts with the field at time t1 and it correlates to fL.  We can obviously conclude that the state of mass A has decohered and it is now located at L... but when did that happen?  It is typically assumed in quantum mechanics that “collapse” events are instantaneous, but of course this creates a clear conflict between QM and special relativity.  (The Mari et al. paper in fact derives its minimum testing time based on the assumption of instantaneous decoherence.)

This assumption makes no sense to me.  If mass B correlates to field fL created by mass A, but the gravitational field produced by mass A travels at light speed (c), then mass A must have already been located at L before mass B correlated to field fL – specifically, mass A must have been located at L on or before time (t1 - d/c).  Thus the interaction of mass B with the gravitational field of mass A could not have caused the collapse of the wave function of mass A (unless we are OK with backward causation).

So for a while I tossed around the idea that whenever a potential location superposition of mass A reaches the point at which different locations would be potentially detectable (such as by attracting another mass), then it would produce something (gravitons?) that would decohere the superposition.  In fact, that’s more or less the approach that Penrose takes by suggesting that decoherence happens when the difference in the gravitational self-energy between spacetime geometries in a quantum superposition exceeds what he calls the “one graviton” level.

The problem with this approach is that decoherence doesn’t happen when differences could be detected... it happens when the differences are detected and correlated to the rest of the universe.  So, in the above example, what actual interaction might cause the state of mass A to decohere if we are ruling out the production (or even scattering) of gravitons and neglecting the effect of any other object except mass B?  Then it hit me: the interaction with the gravitational field of mass B, of course!  Just as mass A is in a location superposition relative to mass B, which experiences the gravitational field produced by A, mass B is in a location superposition relative to mass A, which experiences the gravitational field produced by B.  Further, just as from the perspective of mass B at time t1, the wave state of mass A seems to have collapsed at time (t1 - d/c)... also from the perspective of mass A at time t1, the wave state of mass B seems to have collapsed at time (t1 - d/c).

In other words, the “superposition” of mass A only existed relative to mass B (and perhaps the rest of the universe, if mass B was so correlated), but from the perspective of mass A, mass B was in a superposition.  What made them appear to be in location superpositions relative to each other was that they were not adequately correlated, but eventually their gravitational fields correlated them.  When mass B claims that the wave state of mass A has “collapsed,” mass A could have made the same claim about mass B.  Nothing actually changed about mass A; instead, the interaction between mass A and mass B correlated them and produced new correlation information in the universe.

Having said all this, I have not yet taken quantum field theory, and it’s completely possible that I’ve simply jumped the gun on material I’ll learn at NYU anyway.  Also, as it turns out, my revelation is strongly related, and possibly identical, to Rovelli’s Relational Interpretation of QM.  This wouldn’t upset me at all.  Rovelli is brilliant, and if I’ve learned and reflected enough on QM to independently derive something produced by his genius, then I’d be ecstatic.  Further, my goal in this whole process is to learn the truth about the universe, whether or not someone else learned it first.  That said, I think one thing missing from Rovelli’s interpretation is the notion of universal entanglement, which gives rise to a preferred observer status.  If the entire universe is well correlated with the exception of a few pesky microscopic superpositions, can’t we just accept that there really is one universe and one corresponding set of facts?  Another problem is the interpretation’s dismissal of gravitational decoherence.  In fact, it was my consideration of distant gravitational effects on quantum decoherence, as well as the implications of special relativity, that led me to this insight, so it seems odd that Rovelli dismisses such effects.  A further problem is the interpretation’s acceptance of Schrödinger’s Cat (and Wigner’s Friend) states.  I think it extraordinarily likely – and am on a quest to discover and prove – that macroscopic superpositions large enough to encompass a conscious observer, even a cat, are physically impossible.  Nevertheless, I still don’t know much about his interpretation, so it’s time to do some more reading!

Saturday, June 6, 2020

Unending Confusion in the Foundations of Physics

Quantum mechanics is difficult enough without physicists mucking it all up.  Setting aside the problem that they speak in a convoluted language often disconnected from what’s actually happening in the observable physical world, they are sometimes fundamentally wrong about their own physics.

In 2007, a researcher named Afshar published a paper on a fascinating experiment: thin wires placed where the destructive interference of a double-slit experiment would be expected failed to significantly reduce the amount of light passing through, allowing him to infer the existence of an interference pattern without directly imaging it.  It was clever and certainly worthy of publication.
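
Here’s a toy numerical illustration of the inference (my sketch, not Afshar’s actual optical setup; the wavelength, slit spacing, and wire width are all assumed): when fringes are present, thin wires centered on the dark fringes intercept almost none of the light, but if the interference pattern were absent, the same wires would block a noticeable fraction.

```python
# A toy illustration of the Afshar-style inference.  All parameters are
# assumed for illustration; the envelope of the two-slit pattern is ignored.
import numpy as np

lam, d, L = 650e-9, 250e-6, 1.0          # wavelength, slit spacing, screen distance (m)
x = np.linspace(-5e-3, 5e-3, 200_001)    # transverse positions on the wire plane (m)

I_coherent = np.cos(np.pi * d * x / (lam * L))**2   # two-slit fringes
I_incoherent = np.full_like(x, 0.5)                 # no interference: uniform at the mean

fringe = lam * L / d                                # fringe spacing
minima = (np.arange(-15, 16) + 0.5) * fringe        # dark-fringe centers
wire_w = fringe / 10                                # thin wires, 1/10 of a fringe wide
blocked = np.zeros_like(x, dtype=bool)
for m in minima:
    blocked |= np.abs(x - m) < wire_w / 2

for name, I in [("with fringes", I_coherent), ("no fringes", I_incoherent)]:
    frac = I[blocked].sum() / I.sum()
    print(f"{name}: wires block {100*frac:.2f}% of the light")
# with fringes: a fraction of a percent; no fringes: roughly ten percent.
```

Barely-diminished transmission is therefore strong evidence that the dark fringes – and hence the interference pattern – are really there.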

But he took it a step too far by asserting that the experiment violated wave-particle complementarity – in other words, that the photons showed both wave-like behavior and particle-like behavior at the same time.  The first half is correct: the existence of interference in the far field of the double slit indicates wave behavior.  But the second half (the simultaneous particle-like behavior) is not, because it depends on his claim that which-way information – which inherently does not and cannot exist for a photon in a superposition over two slits – can be established retroactively by a later measurement.

I feel like Afshar can be excused for this mistake, for two reasons.  First, the mistake has its origins in a very reputable earlier reference by famed physicist John Wheeler.  Second, his experiment was new, useful, and elucidating for the physics community.  Having said that, the mistake represents such a fundamental misunderstanding of the very basics of quantum mechanics that it should have been immediately and unambiguously refuted – and then brought up no more.  But that’s not what happened.  What happened is this:

·         The paper is cited by over a hundred papers, very few of which refute it.
·         Among those that refute it, several refute it incorrectly.
·         Those that refute it correctly take over a hundred pages and several dozen complicated quantum mechanics equations to do so.  Their inability to address and solve the problem clearly and succinctly only obfuscates what is already an apparently muddled issue.

Here is my two-page refutation of Afshar.

How exactly are physics students ever going to understand quantum mechanics when the literature on the foundations of physics is so confused and internally inconsistent?

Tuesday, June 2, 2020

Consciousness, Quantum Mechanics, and Pseudoscience

The study of consciousness is not currently “fashionable” in the physics community, and the notion that there might be any relationship between consciousness and quantum mechanics and/or relativity truly infuriates some physicists.  For instance, the hypothesis that consciousness causes collapse (“CCC”) of the quantum mechanical wave function is now considered fringy by many; a physicist who seriously considers it (or even mentions it without a deprecatory scowl) risks professional expulsion and even branding as a quack.

In 2011, two researchers took an unprovoked stab at the CCC hypothesis in this paper.  There is a fascinating experiment called the “delayed choice quantum eraser,” in which information appears to be erased from the universe after a quantum interference experiment has been performed.  The details don’t matter.  The point is that the researchers interpret the quantum eraser experiment as providing an empirical falsification of the CCC hypothesis.  They don’t hide their disdain for the suggestion that QM and consciousness may have a relationship.

The problem is: their paper is pseudoscientific shit.  They first make a massive logical mistake that, despite the authors’ contempt for philosophy, would have been avoided had they taken a philosophy class in logic.  They follow up that mistake with an even bigger blunder in their understanding of the foundations of quantum mechanics.  Essentially, they assert that the failure of a wave function to collapse always results in a visible interference pattern, which is just patently false.  They clearly fail to falsify the CCC hypothesis.  (For the record, I think the CCC hypothesis is likely false, but I am reasonably certain that it has not yet been falsified.)
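
That blunder is easy to exhibit with a few lines of linear algebra.  Here’s a minimal sketch (the state labels are mine): a path superposition entangled with any which-way marker loses its interference terms when the marker is traced out, even though nothing has collapsed.

```python
# A minimal sketch of why "no collapse" does not imply visible fringes: if the
# two paths are entangled with a which-way marker, tracing the marker out
# leaves a mixed state with no interference terms, even though nothing has
# "collapsed".  Basis: |L>, |R> for the path; |0>, |1> for the marker.
import numpy as np

L = np.array([1, 0]); R = np.array([0, 1])
m0 = np.array([1, 0]); m1 = np.array([0, 1])

def fringe_term(rho_path):
    # Interference visibility is carried by the off-diagonal |L><R| element.
    return abs(rho_path[0, 1])

# No marker: pure superposition (|L> + |R>)/sqrt(2) -> off-diagonals present.
psi = (L + R) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
print("unmarked:", fringe_term(rho))        # 0.5 -> fringes

# Marked: (|L>|0> + |R>|1>)/sqrt(2), still a pure (uncollapsed!) state.
psi2 = (np.kron(L, m0) + np.kron(R, m1)) / np.sqrt(2)
rho2 = np.outer(psi2, psi2.conj())
rho_path = rho2.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out marker
print("marked:  ", fringe_term(rho_path))   # 0.0 -> no fringes, no collapse
```

The entangled state is still pure and uncollapsed, yet the fringes vanish anyway.  Decoherence without collapse is precisely the case their argument ignores.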

Sure, there’s lots of pseudoscience out there, so why am I picking on this particular paper?  Because it was published in Annalen der Physik, the same journal in which Einstein published his groundbreaking papers on special relativity and the photoelectric effect (among others), and because it’s been cited by more than two dozen publications so far (often to attack the CCC hypothesis), only one of which actually refutes it.

What’s even more irritating is that the paper’s glaring errors could easily have been caught by a competent journal referee who had read the paper skeptically.  If the paper’s conclusion had supported the CCC hypothesis, you can bet that it would have been meticulously and critically analyzed before publication, assuming it was considered for publication at all.  But when referees already agree with a paper’s conclusion, they may be less interested in the logical steps taken to arrive at it.  A paper that comes to the correct conclusion via incorrect reasoning is still incorrect.  A scientist who rejects correct reasoning because it leads to an unfashionable or unpopular conclusion is not a scientist.

Here is a preprint of my rebuttal to their paper.  Since it is intended to be a scholarly article, I am much nicer there than I’ve been here.

Monday, May 25, 2020

Speaking the Wrong Language

In my last post, I pointed out a fundamental problem in a particular paper – a problem that appears in lots of papers: there is no way to test whether an object is in a quantum superposition.  I feel like this is a point that many physicists and philosophers of physics overlook, so to be sure, I went ahead and posted the question on a few online physics forums, such as this one.  Here’s basically the response I got:
Every state that is an eigenstate of a first observable is obviously in a superposition of eigenstates of some second observable that does not commute with the first.  Therefore: of course you can test whether an object is in a quantum superposition.  Also, you are an idiot.
OK, so they didn’t actually say that last part, but it was clearly implied.  If you don’t speak the language of quantum mechanics, let me rephrase.  Quantum mechanics tells us that there are certain features (“observables”) of a system that cannot be measured/known/observed at the same time, thus the order of measurement matters.  For example, position and momentum are two such observables, so measuring the position and then the momentum will inevitably give different results from measuring the momentum and then the position – that is, the position and momentum operators do not commute.  And because they don’t commute, an object in a particular position (that is, “in an eigenstate of the position operator”) does not have a particular momentum, which is to say that it is in a superposition of all possible momenta.  In other words, the above response basically boils down to this: quantum mechanically, every state is a superposition.
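
And to be fair, that much is true, and easy to verify.  Here’s a minimal sketch using the spin-1/2 Pauli matrices (my own toy check, nothing from the forum thread):

```python
# A small check of the forum's point, in matrix form: the sigma-z "up" state
# is an eigenstate of one observable but an equal-weight superposition of the
# eigenstates of a non-commuting observable (sigma-x).
import numpy as np

sz = np.array([[1, 0], [0, -1]])
sx = np.array([[0, 1], [1, 0]])
print("commute?", np.allclose(sz @ sx, sx @ sz))   # False: they don't commute

up = np.array([1.0, 0.0])                 # sigma-z eigenstate, eigenvalue +1
evals, evecs = np.linalg.eigh(sx)         # sigma-x eigenbasis (columns)
amps = evecs.conj().T @ up                # expansion coefficients
print("|amplitudes| in sigma-x basis:", np.abs(amps))  # both 1/sqrt(2)
```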

Fine.  The problem is that this response has nothing to do with the question I was asking.  I ended up having to edit my question to ask whether any single test could distinguish between a “pure” quantum superposition versus a mixed state (which is a probabilistic mixture), and even then the responses weren't all that useful.
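
To be explicit about what I was actually asking, here’s a sketch of the distinction (the states are the textbook examples, not anything from the forum or from Mari et al.): a pure superposition and a 50/50 mixture give identical statistics in one basis, and no single outcome in any basis can confirm which one you hold.

```python
# Pure superposition |+> = (|0>+|1>)/sqrt(2) versus the mixed state I/2:
# identical z-basis statistics; only repeated x-basis trials separate them.
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)
rho_pure = np.outer(plus, plus)          # coherent superposition
rho_mixed = np.eye(2) / 2                # 50/50 probabilistic mixture

Pz = [np.diag([1, 0]), np.diag([0, 1])]              # z-basis projectors
xv = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Px = [np.outer(v, v) for v in xv]                    # x-basis projectors

for name, rho in [("pure ", rho_pure), ("mixed", rho_mixed)]:
    pz = [np.trace(P @ rho).real for P in Pz]
    px = [np.trace(P @ rho).real for P in Px]
    print(name, "z-probs:", pz, "x-probs:", px)
# pure : z = [0.5, 0.5], x = [1.0, 0.0]
# mixed: z = [0.5, 0.5], x = [0.5, 0.5]
```

Note that even in the x basis, a single “+” outcome is consistent with both states; only long-run statistics separate them, which is exactly the point of the question.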

This is why I think the big fundamental problems in physics will probably not be solved by insiders.  They speak a very limited language that, by its nature, limits a speaker’s ability to discover and understand the flaws in the system it describes.  My original question, I thought, was relatively clear: is it actually possible, as Mari et al. suggest, to receive information by measuring (in a single test) whether an object is in a macroscopic quantum superposition?  But when the knee-jerk response of several intelligent quantum physicists is to discuss the noncommutativity of quantum observables and arrive at the irrelevant (and, frankly, condescending) point that all states are superpositions – and therefore of course we can test whether an object is in superposition – it makes me wonder whether they actually understand, at a fundamental level, what a quantum superposition is.  I feel like there’s a huge disconnect between the language and mathematics of physics and the actual observable world that physics tries to describe.

Tuesday, May 19, 2020

It is Impossible to Measure a Quantum Superposition

In a previous post, I discussed how and to what extent gravity might prevent the existence of macroscopic quantum superpositions.  There has been surprisingly little discussion of this possibility and there is still debate on whether gravity is quantized and whether gravitational fields are, themselves, capable of existing in quantum superpositions.

Today I came across a paper, "Experiments testing macroscopic quantum superpositions must be slow," by Mari et al., which proposes and analyzes a thought experiment: a mass mA is placed in a position superposition in Alice’s lab, producing a gravitational field that potentially affects a test mass mB in Bob’s lab (separated from Alice’s lab by a distance R), depending on whether or not Bob turns on a detector.  The article concludes that special relativity puts lower limits on the amount of time necessary to determine whether an object is in a superposition of two macroscopically distinct locations.

The paper seems to have several important problems, none of which have been pointed out in the papers that cite it, notably this paper.  For example, its calculation of the entanglement time TB assumes that the location of test mass mB correlates to the gravitational field of mass mA when the change in position δx of mB exceeds its quantum uncertainty Δx.  That seems like a reasonable argument – except that they failed to include the growth of the quantum uncertainty due to dispersion.  (This is particularly problematic where they let Δx be the Planck length!)  Another problem is their proposed experiment in Section IV: Alice applies a spin-dependent force to the mass mA, which results in different quantum states depending on whether or not Bob turned on his detector, but both of those quantum states correlate to mass mA located at L (instead of R).  The problem is that by the time Alice has applied the force, Bob’s test mass mB has presumably already correlated to the gravitational field produced by Alice’s mass mA located at L or R – but how could that happen before Alice applied the force that caused the mass mA to be located at L?
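
To see how badly the omission matters, here’s a quick sketch of the standard free-particle dispersion formula, Δx(t) = Δx₀√(1 + (ħt/(2mΔx₀²))²), with Δx₀ set to the Planck length.  The one-gram mass is my own illustrative choice:

```python
# A sketch of the dispersion term the paper seems to omit: a free Gaussian
# packet of initial width dx0 spreads as
#   dx(t) = dx0 * sqrt(1 + (hbar*t / (2*m*dx0**2))**2).
# With dx0 at the Planck length, the spreading term dominates almost
# instantly, even for a macroscopic mass (the 1 g value is illustrative).
import math

HBAR = 1.055e-34        # reduced Planck constant, J s
L_PLANCK = 1.616e-35    # Planck length, m

def packet_width(t, m, dx0):
    return dx0 * math.sqrt(1 + (HBAR * t / (2 * m * dx0**2))**2)

m = 1.0e-3              # 1 gram, in kg
for t in (1e-38, 1e-9, 1.0):
    print(f"t = {t:.0e} s -> width = {packet_width(t, m, L_PLANCK):.2e} m")
# Even at 1 g, a Planck-width packet spreads to micron scale in a nanosecond
# and to kilometer scale in a second.
```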

But the biggest problem with the paper is not in their determination of the time necessary to determine whether an object is in a superposition of two macroscopically distinct locations.  No – the bigger problem is that, as far as I understand, there is no way to determine whether an object is in a superposition at all! 

Wait, what?  Obviously quantum superpositions exist.

Yes, but a superposition is determined by doing an interference experiment on a bunch of “identically prepared” objects (or particles or masses or whatever).  The idea is that if we see an interference pattern emerge (e.g., the existence of light and dark fringes), then we can infer that the individual objects were in coherent superpositions.  However, detection of a single object never produces a pattern, so we can’t infer whether or not it was in a superposition.  Further, the outcome of every interference experiment on a superposition state, if analyzed one detection at a time, will be consistent with that object not having been in superposition.  A single trial can confirm that an object was not in a superposition (such as if we detect a blip in a dark fringe area), but no single trial can confirm that the object was in a superposition.  Moreover, even if a pattern does slowly emerge after many trials, every pattern produced by a finite number of trials – and remember that infinity does not exist in the physical world – is always a possible random outcome of measuring objects that are not in a superposition.  We can never confirm the existence of a superposition, but lots and lots of trials can certainly increase our confidence.
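
Here’s a Monte Carlo sketch of this point (the fringe shape and sample sizes are arbitrary choices of mine): individual detections drawn from a fringe pattern carry almost no evidence on their own, and only the accumulated likelihood over many trials favors “superposition” over “no superposition.”

```python
# Individual detections sampled from a fringe pattern versus a flat
# (no-interference) distribution: only accumulated statistics shift our
# confidence.  The cos^2 fringe shape and sample sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sample_fringe(n):
    # Rejection-sample detections from I(x) ~ cos^2(pi*x), x in [0, 4).
    out = []
    while len(out) < n:
        x, u = rng.uniform(0, 4), rng.uniform(0, 1)
        if u < np.cos(np.pi * x)**2:
            out.append(x)
    return np.array(out)

def log_likelihood_ratio(xs):
    # log P(data | fringes) - log P(data | flat), for the two candidate pdfs.
    p_fringe = np.cos(np.pi * xs)**2 / 2   # normalized cos^2 pdf on [0, 4)
    p_flat = np.full_like(xs, 1 / 4)       # uniform pdf on [0, 4)
    return np.sum(np.log(p_fringe + 1e-300) - np.log(p_flat))

for n in (1, 10, 1000):
    print(n, "detections -> log-likelihood ratio:",
          round(log_likelihood_ratio(sample_fringe(n)), 1))
# One blip tells us almost nothing; confidence only grows with many trials.
```

A single blip moves the log-likelihood ratio by a fraction of a unit at best (and can even move it the wrong way if the blip lands near a minimum); a thousand blips move it decisively.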

In other words, if I’m right, then every measurement that Alice makes (in the Mari et al. paper) will be consistent with Bob’s having turned the detector on (and decohered the field) – thus, no information is sent!  No violation of special relativity!  No problem!

Look, I could be wrong.  I’ve been studying the foundations of quantum mechanics independently for a couple of years now, and very, very few references point out that there’s no way to determine if any particular object is in a quantum superposition, which is also why it’s taken me so long to figure it out.  So either I’m wrong about this, or there’s some major industry-wide crazy-making going on in the physics community that leads to all kinds of wacky conclusions and paradoxes... no wonder quantum mechanics is so confusing!

Is there a way to test whether a particular object is in a coherent superposition?  If so, how?  If not, then why do so few discussions of quantum superpositions mention this?

Update to this post here