Saturday, June 13, 2020

Killing Schrodinger’s Cat – Once and For All

Background of Schrodinger’s Cat

Quantum mechanics stops where classical probabilities start.  In the classical world, we work with probabilities directly, while in quantum mechanics, we work with probability amplitudes, which are complex numbers (involving that weird number i), before applying the Born rule, which requires squaring the norm of the amplitude to arrive at probability.  If you don’t know anything about quantum mechanics, this may sound like gibberish, so here’s an example showing how quantum mechanics defies the rules of classical probability.

I shine light through a tiny hole toward a light detector and take a reading of the light intensity.  Then I repeat the experiment, the only difference being that I’ve punched a second tiny hole next to the first one.  Classical probability (and common sense!) tells us that the detector should measure at least the same light intensity as before, but probably more.  After all, by adding another hole, surely we are allowing more light to reach the detector... right?!  Nope.  We could actually measure less light, because through the process of interference among eigenstates in a superposition, quantum mechanics screws up classical probability.  In some sense, the violation of classical probability, which tends to happen only in the microscopic world, is really what QM is all about.  And when I say “microscopic,” what I really mean is that the largest object on which an interference experiment has been performed (thus demonstrating QM effects) is a molecule of a few hundred amu, which is much, much, much, much smaller than can be seen with the naked eye or even a light microscope.  So we have no direct empirical evidence that the rules of QM even apply to macroscopic objects.
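(For the numerically inclined, here is the two-hole surprise as a quick toy calculation.  The equal amplitudes and the half-wavelength phase difference are my own illustrative choices, not a model of any real apparatus; the point is simply that amplitudes add before the Born rule is applied.)

```python
import numpy as np

# Toy sketch: two holes, equal amplitudes, arriving half a wavelength out of phase.
a1 = (1 / np.sqrt(2)) * np.exp(1j * 0.0)     # amplitude for "came through hole 1"
a2 = (1 / np.sqrt(2)) * np.exp(1j * np.pi)   # amplitude for "came through hole 2"

p_classical = abs(a1)**2 + abs(a2)**2   # classical rule: add probabilities -> 1.0
p_quantum   = abs(a1 + a2)**2           # quantum rule: add amplitudes, then Born rule -> 0.0

print(f"classical expectation: {p_classical:.2f}")
print(f"quantum result:        {p_quantum:.2f}   <- less light with the second hole open")
```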

Having said that, many physicists and philosophers insist that there’s no limit “in principle” to the size of an object in quantum superposition.  The question I’ve been wrestling with for a very long time is this: is there an actual dividing line between the “micro” and “macro” worlds at which QM is no longer applicable?  The “rules” of quantum mechanics essentially state that when one quantum object interacts with another, they just entangle to create a bigger quantum object – that is, until the quantum object becomes big enough that normal probability rules apply, and/or when the quantum object becomes entangled with a “measuring device” (whatever the hell that is).  The so-called measurement problem, and the ongoing debates regarding demarcation between “micro” and “macro,” have infested physics and the philosophy of quantum mechanics for the past century.

And no thought experiment better characterizes this infestation than the obnoxiously annoying animal called Schrodinger’s Cat.  The idea is simple: a cat is placed in a box in which the outcome of a tiny measurement gets amplified so that one outcome results in a dead cat while the other outcome keeps the cat alive.  (For example, a Geiger counter measures a radioisotope so that if it “clicks” in a given time period, a vial of poison is opened.)  By the time we open the box at time t0, enough time has passed for the poison (if released) to have killed the cat, so we should expect to see either a live cat or a dead one.  Here’s the kicker: the “tiny measurement” is on an object that is in quantum superposition, to which the rules of classical probability don’t apply.

So does the quantum superposition grow and eventually entangle with the cat, in which case, just prior to time t0, the cat is itself in a superposition of “dead” and “alive” states (and to which the rules of classical probability do not apply)?  Or does the superposition, before entangling with the cat, reduce to a probabilistic mixture, such as through decoherence or collapse of the wave function?  And what the hell is the difference?  If the cat is in a superposition just prior to time t0, then there just is no objective fact about whether the cat is dead or alive, and our opening of the box at t0 is what decoheres (or collapses or whatever) the entangled wave state, allowing the universe to then randomly choose a dead or live cat.  However, if the cat is in a mixed state just prior to t0, then there is an objective fact about whether it is dead or alive – but we just don’t know the fact until we open the box.  So the question really comes down to this: do we apply classical probability or quantum mechanics to Schrodinger’s Cat?  Or, to use physics terminology, the question is whether, just prior to opening the box, Schrodinger’s Cat is in a coherent superposition or a probabilistic mixed state. 
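(If it helps to see that distinction in symbols rather than words, here is a minimal sketch, just my own toy two-level model, of the two candidate states written in the {|dead>, |live>} basis.  The coherent superposition carries off-diagonal terms in its density matrix; the mixture carries nothing but classical ignorance on the diagonal.)

```python
import numpy as np

dead = np.array([1, 0], dtype=complex)
live = np.array([0, 1], dtype=complex)

# Coherent superposition (|dead> + |live>)/sqrt(2): no fact of the matter yet.
psi = (dead + live) / np.sqrt(2)
rho_superposition = np.outer(psi, psi.conj())

# 50/50 probabilistic mixture: the cat IS dead or alive; we just don't know which.
rho_mixture = 0.5 * np.outer(dead, dead.conj()) + 0.5 * np.outer(live, live.conj())

print(rho_superposition.real)   # [[0.5, 0.5], [0.5, 0.5]] -- off-diagonal coherence
print(rho_mixture.real)         # [[0.5, 0.0], [0.0, 0.5]] -- classical ignorance only
```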

Why is this such a hard question?

It’s a hard question for a couple reasons.  First, remember that QM is about statistics.  We never see superpositions.  The outcome of every individual trial of every experiment ever performed in the history of science has been consistent with the absence of quantum superpositions.  Rather, superpositions are inferred when the outcomes of many, many trials of an experiment on “identically prepared” objects don’t match what we would have expected from normal probability calculations.  So if the only way to empirically distinguish between a quantum cat and a classical cat requires doing lots of trials on physically identical cats... ummm... how exactly do we create physically identical cats?  Second, the experiment itself must be an “interference” experiment that allows the eigenstates in the wave state to interfere, thus changing normal statistics into quantum statistics.  This is no easy task in the case of Schrodinger’s Cat, and you can’t just do it by opening the box and looking, because the probabilities of finding the cat dead or alive will be the same whether or not the cat was in a superposition just prior to opening the box.  So doing lots of trials is not enough – they must be trials of the right kind of experiment – i.e., an interference experiment.  And in all my reading on SC, I have never – not once – encountered anything more than a simplistic, hypothetical mathematical treatment of the problem.  “All you have to do is measure the cat in the basis {(|dead> + |live>)/√2, (|dead> - |live>)/√2}!  Easy as pie!”  But the details of actually setting up such an experiment are so incredibly, overwhelmingly complicated that it’s unlikely that it is physically possible, even in principle.
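(To put a toy calculation behind that last point, here is a sketch using the same two-level stand-in for the cat as above, entirely my own simplification: merely opening the box cannot distinguish the two cases, while a measurement in the interference basis could, if anyone could ever build it.)

```python
import numpy as np

dead = np.array([1, 0], dtype=complex)
live = np.array([0, 1], dtype=complex)
plus  = (dead + live) / np.sqrt(2)   # (|dead> + |live>)/sqrt(2)
minus = (dead - live) / np.sqrt(2)   # (|dead> - |live>)/sqrt(2)

psi = (dead + live) / np.sqrt(2)     # coherent superposition
rho_super = np.outer(psi, psi.conj())
rho_mixed = 0.5 * np.outer(dead, dead.conj()) + 0.5 * np.outer(live, live.conj())

def prob(rho, state):
    """Born-rule probability of finding `state` given density matrix `rho`."""
    return float(np.real(state.conj() @ rho @ state))

# "Open the box and look" (the {|dead>, |live>} basis): identical 50/50 statistics either way.
print(prob(rho_super, dead), prob(rho_mixed, dead))     # 0.5 0.5
# The interference basis: only THIS kind of measurement tells the two cases apart.
print(prob(rho_super, plus),  prob(rho_mixed, plus))    # 1.0 0.5
print(prob(rho_super, minus), prob(rho_mixed, minus))   # 0.0 0.5
```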

There’s a further complication.  If SC is indeed in a quantum superposition prior to t0, then there is no fact about whether the cat is dead or alive.  But don’t you think the cat would disagree?  OK, so if you believe cats don’t think, an identical thought experiment involving a human is called Wigner’s Friend: physicist Eugene Wigner has a friend who performs a measurement on a quantum object in a closed, isolated lab.  Just before Wigner opens the door to ask his friend about the outcome of the measurement, is his friend in a superposition or a mixed state?  If Wigner’s Friend is in a superposition, then that means there is no fact about which outcome he measured, but surely he would disagree!  Amazingly, those philosophers who argue that WF is in a superposition actually agree that when he eventually talks to Wigner, he will insist that he measured a particular outcome, and that he remembers doing the measurement, and so forth, so they have to invent all kinds of fanciful ideas about memory alteration and erasure, retroactive collapse, etc., etc.  All this to continue to justify the “in-principle” possibility of an absolutely ridiculous thought experiment that has done little more than confuse countless physics students.

I’m so tired of this.  I’m so tired of hearing about Schrodinger’s Cat and Wigner’s Friend.  I’m so tired of hearing the phrase “possible in principle.”  I’m so sick of long articles full of quantum mechanics equations that “prove” the possibility of SC without any apparent understanding of the limits to those equations, the validity of their assumptions, or the extent to which their physics analysis has any foundation in the actual observable physical world.  David Deutsch’s classic paper is a prime example, in which he uses lots of “math” to “prove” not only that the WF experiment can be done, but that WF can actually send a message to Wigner, prior to t0, that is uncorrelated to the measurement outcome.  Then, in a couple of sentences in Section 8.1, he casually mentions that his analysis assumes that: a) computers can be conscious; and b) Wigner’s Friend’s lab can be sufficiently isolated from the rest of the universe.  Assumption a) is totally unfounded, which I discuss in this paper and this paper and in this post and this post, and I’ll refute assumption b) now.

Why the Schrodinger Cat experiment is not possible, even in principle

Let me start by reiterating the meaning of superposition: a quantum superposition represents a lack of objective fact.  I’m sick of hearing people say things like “Schrodinger’s Cat is partially alive and partially dead.”  No.  That’s wrong.  Imagine an object in a superposition state |A> + |B>.  As soon as an event occurs that correlates one state (and not the other) to the rest of the universe (or the “environment”), then the superposition no longer exists.  That event could consist of a single photon that interacts with the object in a way that distinguishes the eigenstates |A> and |B>, even if that photon has been traveling millions of years through space prior to interaction, and continues to travel millions of years more through space after interaction.  The mere fact that evidence that distinguishes |A> from |B> exists is enough to decohere the superposition into one of those eigenstates.

In the real world there could never be a SC superposition because a dead cat interacts with the universe in very different (and distinguishable) ways from a live cat... imagine the trillions of impacts per second with photons and surrounding atoms that would differ depending on the state of the cat.  Now imagine that all we need is ONE such impact and that would immediately destroy any potential superposition.  (I pointed out in this post how a group of researchers showed that relativistic time dilation on the Earth’s surface was enough to prevent macroscopic superpositions!)  And that’s why philosophers who discuss the possibility of SC often mention the requirement of “thermally isolating” it.  What they mean is that we have to set up the experiment so that not even a single photon can be emitted, absorbed, or scattered by the box/lab in a way that is correlated to the cat’s state.  This is impossible in practice; however, they claim it is possible in principle.  In other words, they agree that decoherence would kill the SC experiment by turning SC into a normal probabilistic mixture, but claim that decoherence can be prevented by the “in-principle possible” act of thermally isolating it.

Wrong.

In the following analysis, all of the superpositions will be location superpositions.  There are lots of different types of superpositions, such as spin, momentum, etc., but every actual measurement in the real world is arguably a position measurement (e.g., spin measurements are done by measuring where a particle lands after its spin interacts with a magnetic field).  So here’s how I’ll set up my SC thought experiment.  At time t0, the cat, measurement apparatus, box, etc., are thermally isolated so that (somehow) no photons, correlated to the rest of the universe, can correlate to the events inside the box and thus prematurely decohere a quantum superposition.  I’ll even go a step further and place the box in deep intergalactic space where the spacetime has essentially zero curvature to prevent the possibility that gravitons could correlate to the events inside the box and thus gravitationally decohere a superposition.  I’ll also set it up so that, when the experiment begins at t0, a tiny object is in a location superposition |A> + |B>, where eigenstates |A> and |B> correspond to locations A and B separated by distance D.  (I’ve left out coefficients, but assume they are equal.)  The experiment is designed so that the object remains in superposition until time t1, when the location of the object is measured by amplifying the quantum object with a measuring device so that measurement of the object at location A would result in some macroscopic mass (such as an indicator pointer of the measuring device) being located at position MA in state |MA>, while a measurement at location B would result in the macroscopic mass being located at position MB in state |MB>.  Finally, the experiment is designed so that location of the macroscopic mass at position MA would result, at later time t2, in a live cat in state |live>, while location at position MB would result in a dead cat in state |dead>.  Here’s the question: at time t2, is the resulting system described by the superposition |A>|MA>|live> + |B>|MB>|dead>, or by the mixed state of 50%  |A>|MA>|live> and 50% |B>|MB>|dead>?

First of all, I’m not sure why decoherence doesn’t immediately solve this problem.  At time t0, the measuring device, the cat, and the box are already well correlated with each other; the only thing that is not well correlated is the tiny object.  In fact, that’s not even true... the tiny object is well correlated to everything in the box in the sense that it will NOT be detected in locations X, Y, Z, etc.; instead, the only lack of correlation (and lack of fact) is whether it is located at A or B.  But as soon as anything in the box correlates to the tiny object’s location at A or B, then a superposition no longer exists and a mixed (i.e., non-quantum) state emerges.  So it seems to me that the superposition has already decohered at time t1 when the measuring device, which is already correlated to the cat and box, entangles with the tiny object.  In other words, it seems logically necessary that at t1, the combination of object with measuring device has already reduced to the mixed state 50%  |A>|MA> and 50% |B>|MB>, so clearly by later time t2 the cat is, indeed, either dead or alive and not in a quantum superposition.
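(Here is that argument in toy form, my own two-level caricature of the object and the measuring device: once the device entangles with the object, tracing over the device leaves the object, and by extension everything downstream of it, in exactly the 50/50 mixed state, with no off-diagonal coherence left to exploit.)

```python
import numpy as np

# Two-level stand-ins: object location (A/B) and pointer position (MA/MB).
A,  B  = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
MA, MB = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)

# Entangled object+device state after the measurement interaction at t1:
# (|A>|MA> + |B>|MB>)/sqrt(2)
psi = (np.kron(A, MA) + np.kron(B, MB)) / np.sqrt(2)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # indices: obj, dev, obj', dev'

# Reduced state of the object alone: trace out the measuring device.
rho_obj = np.einsum('ikjk->ij', rho)
print(rho_obj.real)   # [[0.5, 0.0], [0.0, 0.5]] -- a plain probabilistic mixture
```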

Interestingly, even before t1, the gravitational attraction by the cat might actually decohere the superposition!  If the tiny object is a distance R>>D from the cat having mass Mcat, then the differential acceleration on the object due to its two possible locations relative to the cat is approximately GMcatD/(2R³).  How long will it take for the object to then move a measurable distance δx?  For a 1kg cat located R=1m from the tiny object, t ≈ 170000 √(δx/D), where t is in seconds.  If we require the tiny object to traverse the entire distance D before we call it “measurable” (which is ridiculous but provides a limiting assumption), then t ≈ 170000 s.  However, if we allow motion over a Planck length to be “measurable” (which is what Mari et al. assume!), and letting D be something typical for a double slit experiment, such as 1μm, then t ≈ 1ns.  (This makes me wonder how much gravity interferes with maintaining quantum superpositions in the growing field of quantum computing, and whether it will ultimately prevent scalable, and hence useful, quantum computing.)
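(Here is that back-of-the-envelope estimate as a few lines of code.  The 1 kg point-mass “cat,” the constant differential acceleration, and the δx ~ ½at² kinematics are all my own crude simplifications, and factor-of-two conventions shift the prefactor a bit relative to the 170,000 quoted above, but the orders of magnitude come out the same.)

```python
import numpy as np

G        = 6.674e-11   # m^3 kg^-1 s^-2
M_cat    = 1.0         # kg, point-mass stand-in for the cat
R        = 1.0         # m, distance from the cat to the tiny object
D        = 1e-6        # m, separation of the two branches of the superposition
l_planck = 1.6e-35     # m

# Differential acceleration between the two branches (formula from the post).
a = G * M_cat * D / (2 * R**3)

def t_measurable(dx):
    """Time for the differential acceleration to produce a displacement dx,
    assuming dx ~ (1/2) a t^2 (constant acceleration, a crude limiting assumption)."""
    return np.sqrt(2 * dx / a)

print(f"time to drift by D itself:        {t_measurable(D):.3g} s")         # ~10^5 s
print(f"time to drift by a Planck length: {t_measurable(l_planck):.3g} s")  # ~1 ns
```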

Gravitational decoherence or not, it seems logically necessary to me that by time t1, the measuring device has already decohered the tiny object’s superposition.  I’m not entirely sure how a proponent of SC would reply, as very few papers on SC actually mention decoherence, but I assume the reply would be something like: “OK, yes, decoherence has happened relative to the box, but the box is thermally isolated from the universe, so the superposition has not decohered relative to the universe and outside observers.”  Actually, I think this is the only possible objection – but it is wrong.

When I set up the experiment at time t0, the box (including the cat and measuring device inside) were already extremely well correlated to me and the rest of the universe.  Those correlations don’t magically disappear by “isolating.”  In fact, Heisenberg’s Uncertainty Principle (HUP) tells us that correlations are quite robust and long-lasting, and the development of quantum “fuzziness” becomes more and more difficult as the mass of an object increases: Δx(mΔv) ≥ ℏ/2.

Let’s start by considering a tiny dust particle, which is much, much, much larger than any object that has currently demonstrated quantum interference.  We’ll assume it is a 50μm diameter sphere with a density of 1000 kg/m³ and an impact with a green photon (λ ≈ 500nm) has just localized it.  How long will it take for its location fuzziness to exceed distance D of, say, 1μm?  Letting Δv = ℏ/(2mΔx) ≈ 1 × 10⁻¹⁷ m/s, it would take 10¹¹ seconds (around 3200 years) for the location uncertainty to reach a spread of 1μm.  In other words, if we sent a dust particle into deep space, its location relative to other objects in the universe is so well defined due to its correlations to those objects that it would take several millennia for the universe to “forget” where the dust particle is within the resolution of 1μm.  Information would still exist to localize the dust particle to a resolution of around 1μm, but not less.  But this rough calculation depends on a huge assumption: that new correlation information isn’t created in that time!  In reality, the universe is full of particles and photons that constantly bathe (and thus localize) objects.  I haven’t done the calculation to determine just how many localizing impacts a dust particle in deep space could expect over 3200 years, but it’s more than a handful.  So there’s really no chance for a dust particle to become delocalized relative to the universe.

So what about the box containing Schrodinger’s Cat?  I have absolutely no idea how large the box would need to be to “thermally isolate” it so that information from inside does not leak out – probably enormous so that correlated photons bouncing around inside the box have sufficient wall thickness to thermalize before being exposed to the rest of the universe – but for the sake of argument let’s say the whole experiment (cat included) has a mass of a few kg.  It will now take 10¹¹ times longer, or around 300 trillion years – or 20,000 times longer than the current age of the universe – for the box to become delocalized from the rest of the universe by 1μm, assuming it can somehow avoid interacting with even a single stray photon passing by.  Impossible.  (Further, I neglected gravitational decoherence due to interaction with other objects in the universe, but 300 trillion years is a long time.  Gravity may be weak, but it's not that weak!)
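(And here is that pair of estimates as code.  The choices below – localization to roughly the reduced wavelength λ/2π, a delocalization time of simply D/Δv, and a 6.5 kg box – are my own assumptions picked to reproduce the ballpark figures above; changing them shifts the numbers but not the absurdity of the timescales.)

```python
import numpy as np

hbar = 1.055e-34            # J s
lam  = 500e-9               # m, green photon
dx   = lam / (2 * np.pi)    # assumed localization: roughly the reduced wavelength
D    = 1e-6                 # m, target delocalization ("fuzziness")
year = 3.15e7               # s

def delocalization_time(mass_kg):
    """Rough time for the HUP velocity uncertainty to spread the position by D."""
    dv = hbar / (2 * mass_kg * dx)   # from Δx·(mΔv) >= ħ/2
    return D / dv                    # seconds

r_dust = 25e-6                                  # 50 μm diameter grain
m_dust = 1000 * (4/3) * np.pi * r_dust**3       # ~6.5e-11 kg at 1000 kg/m^3
m_box  = 6.5                                    # kg, a "few-kg" SC box (illustrative)

print(f"dust grain: {delocalization_time(m_dust) / year:,.0f} years")    # ~3,000 years
print(f"SC box:     {delocalization_time(m_box) / year:.3g} years")      # ~3e14 years
```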

What does this tell us?  It tells us that the SC box will necessarily be localized relative to the universe (including any external observer) to a precision much, much smaller than the distance D that distinguishes eigenstates |A> and |B> of the tiny object in superposition.  Thus, when the measuring device inside the box decoheres the superposition relative to the box, it also does so relative to the rest of the universe.  If there is a fact about the tiny object’s position (say, in location A) relative to the box, then there is also necessarily a fact about its position relative to the universe – i.e., decoherence within the box necessitates decoherence in general.  An outside observer may not know its position until he opens the box and looks, but the fact exists before that moment.  When a new fact emerges about the tiny object’s location due to interaction and correlation with the measuring device inside the box, then that new fact eliminates the quantum superposition relative to the rest of the universe, too.

And, by the way, the conclusion doesn’t change by arbitrarily reducing the distance D.  A philosopher might reply that if we make D really small, then eventually localization of the tiny object relative to the box might not localize it relative to the universe.  Fine.  But ultimately, to make the SC experiment work, we have to amplify whatever distance distinguishes eigenstates |A> and |B> to some large macroscopic distance.  For instance, the macroscopic mass of the measuring device has eigenstates |MA> and |MB> which are necessarily distinguishable over a large (i.e., macroscopic) distance – say 1cm, which is 10⁴ times larger than D = 1μm.  (At the extreme end, if a given atom in a blood cell would be located in the cat’s head in state |live> but in its tail in state |dead>, then sustaining a superposition of the cat would require quantum fuzziness on the order of 1m.)

What this tells us is that quantum amplification doesn’t create a problem where none existed.  If there is no physical possibility, even in principle, of creating a macroscopic quantum superposition by sending a kilogram-scale object into deep space and waiting for quantum fuzziness to appear – whether or not you try to “thermally isolate” it – then you can’t stuff a kilogram-scale cat in a box and depend on quantum amplification to outsmart nature.  There simply is no way, even in principle, to adequately isolate a macroscopic object (cat included) to allow the existence of a macroscopic quantum superposition.

Thursday, June 11, 2020

Quantum Superpositions Are Relative

At 4AM I had an incredible insight.

Here’s the background.  I’ve been struggling recently with the notion of gravitational decoherence of the quantum wave function, as discussed in this post.  The idea is neither new nor complicated: if the gravitational field of a mass located in Position A would have a measurably different effect on the universe (even on a single particle) than the mass located in Position B, then its state cannot be a superposition over those two locations.

Generally, we think of impacts between objects/particles as causing the decoherence of a superposition.  For instance, in the typical double-slit interference experiment, a particle’s wave state “collapses” either when the particle impacts a detector in the far field or we measure the particle in one of the slits by bouncing a photon off it.  In either case, one or more objects (such as photons), already correlated to the environment, get correlated to the particle, thus decohering its superposition.

But what if the decohering “impact” is due to the interaction of a field on another particle far away?  Given that field propagation does not exceed the speed of light, when does decoherence actually occur?  That’s of course the question of gravitational decoherence.  Let’s say that mass A is in a superposition over L and R locations (separated by a macroscopic distance), which therefore creates a superposition of gravitational fields fL and fR that acts on a distant mass B (where masses A and B are separated by distance d).  For the sake of argument, mass B is also the closest object to mass A.  Let’s say that mass B interacts with the field at time t1 and it correlates to fL.  We can obviously conclude that the state of mass A has decohered and it is now located at L... but when did that happen?  It is typically assumed in quantum mechanics that “collapse” events are instantaneous, but of course this creates a clear conflict between QM and special relativity.  (The Mari et al. paper in fact derives its minimum testing time based on the assumption of instantaneous decoherence.)

This assumption makes no sense to me.  If mass B correlates to field fL created by mass A, but the gravitational field produced by mass A travels at light speed (c), then mass A must have already been located at L before mass B correlated to field fL – specifically, mass A must have been located at L on or before time (t1 - d/c).  Thus the interaction of mass B with the gravitational field of mass A could not have caused the collapse of the wave function of mass A (unless we are OK with backward causation).

So for a while I tossed around the idea that whenever a potential location superposition of mass A reaches the point at which different locations would be potentially detectable (such as by attracting another mass), then it would produce something (gravitons?) that would decohere the superposition.  In fact, that’s more or less the approach that Penrose takes by suggesting that decoherence happens when the difference in the gravitational self-energy between spacetime geometries in a quantum superposition exceeds what he calls the “one graviton” level.

The problem with this approach is that decoherence doesn’t happen when differences could be detected... it happens when the differences are detected and correlated to the rest of the universe.  So, in the above example, what actual interaction might cause the state of mass A to decohere if we are ruling out the production (or even scattering) of gravitons and neglecting the effect of any other object except mass B?  Then it hit me: the interaction with the gravitational field of mass B, of course!  Just as mass A is in a location superposition relative to mass B, which experiences the gravitational field produced by A, mass B is in a location superposition relative to mass A, which experiences the gravitational field produced by B.  Further, just as from the perspective of mass B at time t1, the wave state of mass A seems to have collapsed at time (t1 - d/c)... also from the perspective of mass A at time t1, the wave state of mass B seems to have collapsed at time (t1 - d/c).

In other words, the “superposition” of mass A only existed relative to mass B (and perhaps the rest of the universe, if mass B was so correlated), but from the perspective of mass A, mass B was in a superposition.  What made them appear to be in location superpositions relative to each other was that they were not adequately correlated, but eventually their gravitational fields correlated them.  When mass B claims that the wave state of mass A has “collapsed,” mass A could have made the same claim about mass B.  Nothing actually changed about mass A; instead, the interaction between mass A and mass B correlated them and produced new correlation information in the universe.

Having said all this, I have not yet taken quantum field theory, and it’s completely possible that I’ve simply jumped the gun on stuff I’ll learn at NYU anyway.  Also, as it turns out, my revelation is strongly related, and possibly identical, to Carlo Rovelli’s Relational interpretation of QM.  This wouldn’t upset me at all.  Rovelli is brilliant, and if I’ve learned and reflected enough on QM to independently derive something produced by his genius, then I’d be ecstatic.  Further, my goal in this whole process is to learn the truth about the universe, whether or not someone else learned it first.  That said, I think one thing missing from Rovelli’s interpretation is the notion of universal entanglement that gives rise to a preferred observer status.  If the entire universe is well correlated with the exception of a few pesky microscopic superpositions, can’t we just accept that there really is just one universe and corresponding set of facts?  Another problem is the interpretation’s dismissal of gravitational decoherence.  In fact, it was my consideration of distant gravitational effects on quantum decoherence, as well as implications of special relativity, that led me to this insight, so it seems odd that Rovelli seems to dismiss such effects.  Another problem is the interpretation’s acceptance of Schrodinger’s Cat (and Wigner’s Friend) states.  I think it extraordinarily likely -- and am on a quest to discover and prove -- that macroscopic superpositions large enough to encompass a conscious observer, even a cat, are physically impossible.  Nevertheless, I still don’t know much about his interpretation so it’s time to do some more reading!

Sunday, June 7, 2020

How Science Brought Me To God

This post was inspired by my sister, who has been struggling recently with questions about God, purpose, meaning, and many other big philosophical questions.

Let me start by saying that I’m not a Christian (or a Buddhist or a Muslim or a Jew or a Rastafarian blah blah blah), and never will be.  Christianity is a set of very specific stories and beliefs, of which the belief in a Creator is a tiny subset.  Belief in God does not imply belief in Christianity or any other religion.  It is truly astonishing how many scientists (and physicists in particular) don’t seem to understand that last sentence.  It’s incredible how often physicists will say something like: “When I was in Sunday School, I learned about Jesus walking on water.  But as a scientist, I learned that walking on water violates the laws of physics.  Therefore god does not exist.”  The conclusion simply doesn’t follow from the premises.

In my own progress in physics, I am finding much of the academic literature infested with bad logic and unsound arguments.  One of my more recent posts points to a heavily cited article that claimed to empirically refute the consciousness-causes-collapse hypothesis (“CCCH”).  The authors started by characterizing CCCH as an if-then statement in the form of A→B (read “A implies B” or “if A, then B”), which was essentially correct.  (The actual statements are irrelevant to the point I’m making in this post, but my actual paper can be found here.)  Then, without explanation, they re-characterized CCCH as A→C, but this would only be true if B→C.  Setting aside the fact that B→C blatantly contradicts quantum mechanics, the authors didn’t even seem to notice the unfounded logical jump they had made.  Simply having taken graduate-level philosophical logic has already provided me a surprising leg-up in the study and analysis of physics.

Why do I take such pains to explain that my belief in God does not imply belief in any particular religion or set of stories?  Because my search for a physical explanation of consciousness, and my pursuit of some of the hard foundational questions in physics, already put me on potentially thin ice in the physics academy, and mentioning God (with a capital G) may very well put me over the edge into the realm of “crackpot.”  Luckily, I’m in the position of not needing to seek anyone’s approval; having said that, I would ultimately like to collaborate with and influence other like-minded physicists and don’t want to immediately turn them off with any suggestion that I’m a Christian.  I also don’t intend to turn off any Christian readers... my wife and one of my best friends are Christians.  My point is that Christianity includes a very specific set of concepts and stories that far exceed mere theism and may be understandably off-putting to physicists.

With all the caveats in place, here’s the meat of this blog post: Science has in fact brought me to God, in large part via the Goldilocks Enigma, better known as the “fine-tuning” problem in physics.

Paul Davies, a cosmologist at Arizona State, wrote a fascinating book called The Goldilocks Enigma.  Essentially, there are more than a dozen independent parameters, based in part on the Standard Model of particle physics, that had to be “fine-tuned” to within 1% or so in order to create a universe that could create life.  (The phrase “fine-tuned” itself suggests a Creator, but that’s not how Davies means it.)  One example might be the ratio of the gravitational force to the electromagnetic force.  A star produces energy via the fusion of positively charged nuclei, primarily hydrogen nuclei.  Electrostatic repulsion makes it difficult to bring two fusible nuclei sufficiently close, but gravity solves this problem if the object is really massive, like a star.  The core of a star then experiences the quasi-equilibrium condition of gravity squeezing lots of hydrogen nuclei together, counterbalanced by the outward pressure of an extremely high-temperature gas, thus producing fusion energy at a more-or-less constant rate.  This balance in our Sun gives it a lifetime of something like 10 billion years before its fuel will be mostly spent.

Here’s the problem: if the gravitational force had been 1% higher than it is, then the Sun would have burned up far too quickly for life to evolve, while if the force had been 1% less than it is, the Sun would have produced far too little radiation for life to evolve.  (It is generally thought that liquid water, which exists in the narrow range of 273–373 K, is a requirement for life, although this is not necessary for the current argument.)  In other words, the ratio of gravity to electric repulsion had to be in the “Goldilocks” zone: not too big, not too small... just right.

The likelihood of that ratio being “just right” is very small.  And you might think this is just a coincidence.  That’s certainly what a lot of physicists will say.  But remember that there are at least 26 such free parameters in nature that happen to be “just right” in the same way, and (small probability)^26 = (really freaking unbelievably tiny probability).  The probability is so tiny as to be effectively zero.
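(Toy arithmetic only, since nobody knows the “true” window for each parameter: take a 1% window per parameter and 26 independent parameters and see what the product does.)

```python
# Toy illustration of how independent "fine-tuned" parameters compound.
# The 1% window per parameter and the count of 26 are rough stand-ins, not measured values.
p_single = 0.01
n_params = 26
p_all = p_single ** n_params
print(f"{p_all:.0e}")   # 1e-52 -- effectively zero for a single roll of the cosmic dice
```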

If you have already dismissed any possibility of a Creator, then one way – perhaps the only way – to explain away such a fantastically tiny probability is to posit the existence of infinitely many universes and then invoke the so-called “Anthropic Principle” to conclude that such an unlikely event must be possible because, if it weren’t, we wouldn’t exist to notice!  After all, if everything that is possible actually exists somewhere, then extremely unlikely events, even events whose probability is actually zero, will occur.  In other words, (infinitesimal) * (infinity) = 1.  Said another way:  0 * ∞ = 1.

For the record, I made the same argument in a book I wrote at age 13, called Knight’s Null Algebra, which claimed to “disprove” algebra.  Just as anything logically follows from a contradiction (“If up is down, then my name is Bob” is a true statement), anything follows from infinity.  Infinity makes the impossible possible.  But this is philosophical nonsense.  Infinity doesn’t exist in nature.  Nevertheless, many physicists and cosmologists with (as far as I know) functioning cerebrums actually believe in the existence of infinitely many universes, although they give it a fancy name: the Multiverse.

Here are my problems with the Multiverse:
·  There is not a shred of empirical evidence that there is such a thing.
·  Because the Multiverse includes universes that are beyond our cosmological horizon and are forever inaccessible to us, no empirical evidence can ever exist to test the concept.
·  Any concept or hypothesis that cannot be tested is not in the realm of science.
·  Any scientist who endorses the Multiverse concept is not speaking scientifically or as a scientist (even though s/he may pretend to).

Setting aside all these problems with the Multiverse concept, it should be pointed out that anyone who dismisses any possibility of a Creator, and thus desperately embraces infinity to dismiss the Goldilocks enigma, is not being scientific anyway.  One can make arguments for or against the existence of God; one can lean toward theism or atheism; but anyone who states with certainty that God does or does not exist is not speaking scientifically.  And that’s OK.  There’s nothing wrong with a scientist having opinions one way or another or with making arguments one way or another, just as I’ve done in this post.  But it is a problem when scientists speak from the academic pulpit, intimidating people with their scientific degrees and credentials, to bully people into accepting their philosophical opinions as if they were scientific facts.  (Richard Dawkins should have lost his membership in the scientific academy long ago, now that he spews untestable pseudoscientific gibberish, but he has in fact been celebrated instead of ostracized by the academy.)

My point is this: I believe that the Goldilocks Enigma is a very strong reason to believe in a Creator, while the Multiverse counterargument is an untestable and nonscientific theory usually uttered by people (scientists or otherwise) who are not speaking scientifically.

I am truly and utterly amazed and overwhelmed by the vastness, beauty, and unlikeliness of the Universe.  And the more I learn about physics, the more awed I become.  For instance, if the information in the universe is related to universal entanglement, then every object is entangled with essentially every other object in the universe in ways that correlate their positions and momenta to within quantum uncertainty.  That is absolutely, utterly, incomprehensibly amazing.  The more I learn about physics, the closer I come to God.

Saturday, June 6, 2020

Unending Confusion in the Foundations of Physics

Quantum mechanics is difficult enough without physicists mucking it all up.  Setting aside the problem that they speak in a convoluted language that is often independent of what’s actually happening in the observable physical world, they are sometimes fundamentally wrong about their own physics.

In 2007, a researcher named Afshar published a paper on a fascinating experiment in which he was able to infer the existence of a double-slit interference pattern: thin wires, placed where destructive interference would be expected, failed to significantly reduce the amount of light passing through.  It was clever and certainly worthy of publication.

But he took it a step too far and stated that the experiment showed a violation of wave-particle complementarity – in other words, he asserted that the photons showed both wave-like behavior and particle-like behavior at the same time.  The first is correct: the existence of interference in the far field of the double-slit indicated the wave behavior.  But the second (the simultaneous particle-like behavior) is not correct, as it depended on his claim that which-way information, which inherently does not and cannot exist in a superposition over two slits, exists retroactively through a later measurement.

I feel like Afshar can be excused for this mistake, for two reasons.  First, the mistake has its origins in a very reputable earlier reference by famed physicist John Wheeler.  Second, his experiment was new, useful, and elucidating for the physics community.  Having said that, the mistake represents such a fundamental misunderstanding of the very basics of quantum mechanics that it should have been immediately and unambiguously refuted – and then brought up no more.  But that’s not what happened.  What happened is this:

·  The paper is cited by over a hundred papers, very few of which refute it.
·  Among those that refute it, several refute it incorrectly.
·  Those that refute it correctly use over a hundred pages and several dozen complicated quantum mechanics equations.  Their inability to address and solve the problem clearly and succinctly only obfuscates what is already an apparently muddled issue.

Here is my two-page refutation of Afshar.

How exactly are physics students ever going to understand quantum mechanics when the literature on the foundations of physics is so confused and internally inconsistent?

Tuesday, June 2, 2020

Consciousness, Quantum Mechanics, and Pseudoscience

The study of consciousness is not currently “fashionable” in the physics community, and the notion that there might be any relationship between consciousness and quantum mechanics and/or relativity truly infuriates some physicists.  For instance, the hypothesis that consciousness causes collapse (“CCC”) of the quantum mechanical wave function is now considered fringy by many; a physicist who seriously considers it (or even mentions it without a deprecatory scowl) risks professional expulsion and even branding as a quack.

In 2011, two researchers took an unprovoked stab at the CCC hypothesis in this paper.  There is a fascinating experiment called the “delayed choice quantum eraser,” in which information appears to be erased from the universe after a quantum interference experiment has been performed.  The details don’t matter.  The point is that the researchers interpret the quantum eraser experiment as providing an empirical falsification of the CCC hypothesis.  They don’t hide their disdain for the suggestion that QM and consciousness may have a relationship.

The problem is: their paper is pseudoscientific shit.  They first make a massive logical mistake that, despite the authors’ contempt for philosophy, would have been avoided had they taken a philosophy class in logic.  They follow up that mistake with an even bigger blunder in their understanding of the foundations of quantum mechanics.  Essentially, they assert that the failure of a wave function to collapse always results in a visible interference pattern, which is just patently false.  They clearly fail to falsify the CCC hypothesis.  (For the record, I think the CCC hypothesis is likely false, but I am reasonably certain that it has not yet been falsified.)

Sure, there’s lots of pseudoscience out there, so why am I picking on this particular paper?  Because it was published in Annalen der Physik, the same journal in which Einstein published his groundbreaking papers on special relativity and the photoelectric effect (among others), and because it’s been cited by more than two dozen publications so far (often to attack the CCC hypothesis), only one of which actually refutes it.

What’s even more irritating is that the paper’s glaring errors could easily have been caught by a competent journal referee who had read the paper skeptically.  If the paper’s conclusion had been in support of the CCC hypothesis, you can bet that it would have been meticulously and critically analyzed before publication, assuming it was considered for publication at all.  But when referees already agree with a paper’s conclusion, they may be less interested in the logical steps taken to arrive at that conclusion.  A paper that comes to the correct conclusion via incorrect reasoning is still incorrect.  A scientist who rejects correct reasoning because it results in an unfashionable or unpopular conclusion is not a scientist.

Here is a preprint of my rebuttal to their paper.  Since it is intended to be a scholarly article, I am much nicer there than I’ve been here.