Friday, August 7, 2020

Finally Published!

The physics academy is a tough nut to crack.  I offered several reasons in this post why my entrance into the field has been and will continue to be an uphill battle, but it’s truly astounding how much resistance I’ve experienced in getting published.  Rejection after rejection, my confidence continued to drop, until eventually I realized that those who had reviewed my papers weren’t really understanding my points.  Perhaps I wasn’t wrong after all.

In this post, I addressed a fundamental and important error in an article that had been cited far too many times.  Like all my other papers, it was rejected.  But this time I decided to fight back.  I knew that I was right.  I appealed the rejection and made a very clear case to the editor, who eventually took my appeal to the Editor-in-Chief: a badass in the field named Carlo Rovelli, whom I referenced in this post.  Two days ago the editor let me know that Rovelli had overruled the referees and decided to publish my article.  Finally, some good news.  Finally, some validation.  Finally, some confirmation that I actually understand something about the foundations of physics.

Onward and upward.  In this post, I explained why macroscopic quantum superpositions are simply not possible.  Today I finished and posted a formal article elaborating on this and, I hope, ultimately proving that Schrodinger’s Cat and Wigner’s Friend are absolutely, completely, totally impossible, even in principle.  I’ll soon submit it to a journal. 

But this time, I’m not going to take rejection sitting down.

Saturday, August 1, 2020

COVID Madness: How Onerous Requirements Incentivize Lying

NYU is requiring essentially all non-New York students to: quarantine for 14 days; take a COVID-19 test before travel; take a test upon arrival; and take a THIRD test a week after arrival. (This is on top of requirements to wear face masks, socially distance, and fill out a form every day on campus attesting that you don't have any symptoms, haven't been in contact with anyone "suspected" of having the virus, etc., etc.) This defies common sense. The problem with catering to the most fearful is that eventually the people who engage in the riskiest behaviors will just lie. "Yeah, I've been tested." "Yeah, I quarantined." Here is my letter to NYU's VP for Student Affairs:


I am an incoming physics graduate student. My wife, a physician, will be joining the faculty at Columbia's Allen hospital. She and I both moved up here from North Carolina in mid-July. I have serious concerns about this email and the requirements of NYU.


NYU is requiring "out-of-tristate" students to quarantine for 14 days and get TWO tests and "strongly recommends" another test prior to travel. These mandates exceed legal requirements and defy both scientific recommendations and common sense, particularly given that everyone in NYU buildings will be required to: wear face masks; socially distance; and complete a daily COVID-19 "screener." These onerous requirements are obviously the result of fear, CYA culture, and litigation prevention, instead of rational thinking.


The main problem with overuse of caution is not the inconvenience and cost (in time and money) to everyone involved. That certainly is a problem, whether or not it is socially acceptable or politically correct to say so. The main problem is that there are many people who strongly disagree with the extremes to which authorities are willing to curtail personal freedoms to address COVID-19, and at some point these people may feel a line has been crossed and are no longer willing to cooperate.


The biggest red flag in your email is the statement that those who do not quarantine on campus "will be required to attest to having quarantined." Those who care about not spreading the virus to others are already acting responsibly. However, those who are more reckless in their interactions with others, faced with requirements to quarantine for 14 days (in a very expensive city) and get tested multiple times, will simply be more incentivized to lie and falsely "attest to having quarantined" or been tested multiple times. At some point, your requirements, and those of the city, state, and federal government, may become so onerous that people will simply "check the box" and say whatever they need to say to get through their day (which may include going to class, going to and getting paid for their employment, etc.).


Caring about the NYU community does not mean catering to the most fearful and litigious among them. At some point, the demands become so ridiculous that they become ineffective, ultimately resulting (ironically) in increased risk to the NYU community. Please don't let it get to that point.

Saturday, June 13, 2020

Killing Schrodinger’s Cat – Once and For All

Background of Schrodinger’s Cat

Quantum mechanics stops where classical probabilities start.  In the classical world, we work with probabilities directly, while in quantum mechanics, we work with probability amplitudes, which are complex numbers (involving that weird number i), before applying the Born rule, which requires squaring the norm of the amplitude to arrive at probability.  If you don’t know anything about quantum mechanics, this may sound like gibberish, so here’s an example showing how quantum mechanics defies the rules of classical probability.

I shine light through a tiny hole toward a light detector and take a reading of the light intensity.  Then I repeat the experiment, the only difference being that I’ve punched a second tiny hole next to the first one.  Classical probability (and common sense!) tells us that the detector should measure at least the same light intensity as before, but probably more.  After all, by adding another hole, surely we are allowing more light to reach the detector... right?!  Nope.  We could actually measure less light, because through the process of interference among eigenstates in a superposition, quantum mechanics screws up classical probability.  In some sense, the violation of classical probability, which tends to happen only in the microscopic world, is really what QM is all about.  And when I say “microscopic,” what I really mean is that the largest object on which an interference experiment has been performed (thus demonstrating QM effects) is a molecule of around 25,000 amu, which is much, much, much, much smaller than can be seen with the naked eye or even a light microscope.  So we have no direct empirical evidence that the rules of QM even apply to macroscopic objects.
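The two-hole surprise is easy to see numerically.  Here’s a toy calculation (the amplitudes and phase are made up purely for illustration, not taken from any real experiment) showing that opening a second path can reduce the detected probability:

```python
# Toy model: detection probability with one hole open vs. two holes open.
# Classically, P(two holes) >= P(one hole). With amplitudes, not necessarily.
import cmath

# Illustrative complex amplitudes for reaching the detector via each hole.
a1 = 0.5 + 0.0j                      # amplitude via hole 1
a2 = 0.5 * cmath.exp(1j * cmath.pi)  # amplitude via hole 2, phase-shifted by the path difference

p1_only = abs(a1) ** 2               # Born rule: probability = |amplitude|^2
p_both = abs(a1 + a2) ** 2           # amplitudes add BEFORE squaring

print(p1_only)  # 0.25
print(p_both)   # ~0.0 -- destructive interference: LESS light with two holes open
```

With classical probabilities you would add 0.25 + 0.25; with amplitudes you add 0.5 and −0.5 first, then square, and the detector can go dark.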

Having said that, many physicists and philosophers insist that there’s no limit “in principle” to the size of an object in quantum superposition.  The question I’ve been wrestling with for a very long time is this: is there an actual dividing line between the “micro” and “macro” worlds at which QM is no longer applicable?  The “rules” of quantum mechanics essentially state that when one quantum object interacts with another, they just entangle to create a bigger quantum object – that is, until the quantum object becomes big enough that normal probability rules apply, and/or when the quantum object becomes entangled with a “measuring device” (whatever the hell that is).  The so-called measurement problem, and the ongoing debates regarding demarcation between “micro” and “macro,” have infested physics and the philosophy of quantum mechanics for the past century.

And no thought experiment better characterizes this infestation than the obnoxiously annoying animal called Schrodinger’s Cat.  The idea is simple: a cat is placed in a box in which the outcome of a tiny measurement gets amplified so that one outcome results in a dead cat while the other outcome keeps the cat alive.  (For example, a Geiger counter measures a radioisotope so that if it “clicks” in a given time period, a vial of poison is opened.)  Just before we open the box at time t0, there’s been enough time for the poison to kill the cat, so we should expect to see either a live or dead cat.  Here’s the kicker: the “tiny measurement” is on an object that is in quantum superposition, to which the rules of classical probability don’t apply. 

So does the quantum superposition grow and eventually entangle with the cat, in which case, just prior to time t0, the cat is itself in a superposition of “dead” and “alive” states (and to which the rules of classical probability do not apply)?  Or does the superposition, before entangling with the cat, reduce to a probabilistic mixture, such as through decoherence or collapse of the wave function?  And what the hell is the difference?  If the cat is in a superposition just prior to time t0, then there just is no objective fact about whether the cat is dead or alive, and our opening of the box at t0 is what decoheres (or collapses or whatever) the entangled wave state, allowing the universe to then randomly choose a dead or live cat.  However, if the cat is in a mixed state just prior to t0, then there is an objective fact about whether it is dead or alive – but we just don’t know the fact until we open the box.  So the question really comes down to this: do we apply classical probability or quantum mechanics to Schrodinger’s Cat?  Or, to use physics terminology, the question is whether, just prior to opening the box, Schrodinger’s Cat is in a coherent superposition or a probabilistic mixed state. 

Why is this such a hard question?

It’s a hard question for a couple reasons.  First, remember that QM is about statistics.  We never see superpositions.  The outcome of every individual trial of every experiment ever performed in the history of science has been consistent with the absence of quantum superpositions.  Rather, superpositions are inferred when the outcomes of many, many trials of an experiment on “identically prepared” objects don’t match what we would have expected from normal probability calculations.  So if the only way to empirically distinguish between a quantum cat and a classical cat requires doing lots of trials on physically identical cats... ummm... how exactly do we create physically identical cats?  Second, the experiment itself must be an “interference” experiment that allows the eigenstates in the wave state to interfere, thus changing normal statistics into quantum statistics.  This is no easy task in the case of Schrodinger’s Cat, and you can’t just do it by opening the box and looking, because the probabilities of finding the cat dead or alive will be the same whether or not the cat was in a superposition just prior to opening the box.  So doing lots of trials is not enough – they must be trials of the right kind of experiment – i.e., an interference experiment.  And in all my reading on SC, I have never – not once – encountered anything more than a simplistic, hypothetical mathematical treatment of the problem.  “All you have to do is measure the cat in the basis {(|dead> + |live>)/√2, (|dead> - |live>)/√2}!  Easy as pie!”  But the details of actually setting up such an experiment are so incredibly, overwhelmingly complicated that it’s unlikely that it is physically possible, even in principle.
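To make the coherent-vs-mixed distinction concrete, here’s a toy two-level “cat” in plain Python (my own illustrative sketch, not an experimental protocol): opening the box gives identical 50/50 statistics either way, and only a measurement in the interference basis tells the two apart.

```python
import math

# Two-level toy "cat": states are 2-vectors, density matrices are 2x2 lists.
dead = [1.0, 0.0]
live = [0.0, 1.0]

def outer(u, v):
    """Outer product |u><v| as a 2x2 matrix."""
    return [[ui * vj for vj in v] for ui in u]

def add(m1, m2, w1, w2):
    """Weighted sum of two 2x2 matrices."""
    return [[w1 * m1[i][j] + w2 * m2[i][j] for j in range(2)] for i in range(2)]

def prob(rho, state):
    """Born-rule probability <state| rho |state>."""
    return sum(state[i] * rho[i][j] * state[j] for i in range(2) for j in range(2))

s = 1.0 / math.sqrt(2.0)
psi = [s, s]                    # coherent superposition (|dead> + |live>)/sqrt(2)
rho_sup = outer(psi, psi)
rho_mix = add(outer(dead, dead), outer(live, live), 0.5, 0.5)  # 50/50 mixture

# Opening the box (dead-vs-live measurement): identical 50/50 statistics.
print(prob(rho_sup, dead), prob(rho_mix, dead))   # 0.5 0.5 (up to float error)

# Interference-basis measurement {(|dead>+|live>)/sqrt2, (|dead>-|live>)/sqrt2}:
plus = [s, s]
print(prob(rho_sup, plus), prob(rho_mix, plus))   # ~1.0 0.5 -- only this distinguishes them
```

The off-diagonal terms of rho_sup carry the coherence; rho_mix has none, which is exactly why no number of "open the box and look" trials can separate the two cases.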

There’s a further complication.  If SC is indeed in a quantum superposition prior to t0, then there is no fact about whether the cat is dead or alive.  But don’t you think the cat would disagree?  OK, so if you believe cats don’t think, an identical thought experiment involving a human is called Wigner’s Friend: physicist Eugene Wigner has a friend who performs a measurement on a quantum object in a closed, isolated lab.  Just before Wigner opens the door to ask his friend about the outcome of the measurement, is his friend in a superposition or a mixed state?  If Wigner’s Friend is in a superposition, then that means there is no fact about which outcome he measured, but surely he would disagree!  Amazingly, those philosophers who argue that WF is in a superposition actually agree that when he eventually talks to Wigner, he will insist that he measured a particular outcome, and that he remembers doing the measurement, and so forth, so they have to invent all kinds of fanciful ideas about memory alteration and erasure, retroactive collapse, etc., etc.  All this to continue to justify the “in-principle” possibility of an absolutely ridiculous thought experiment that has done little more than confuse countless physics students.

I’m so tired of this.  I’m so tired of hearing about Schrodinger’s Cat and Wigner’s Friend.  I’m so tired of hearing the phrase “possible in principle.”  I’m so sick of long articles full of quantum mechanics equations that “prove” the possibility of SC without any apparent understanding of the limits to those equations, the validity of their assumptions, or the extent to which their physics analysis has any foundation in the actual observable physical world.  David Deutsch’s classic paper is a prime example, in which he uses lots of “math” to “prove” not only that the WF experiment can be done, but that WF can actually send a message to Wigner, prior to t0, that is uncorrelated to the measurement outcome.  Then, in a couple of sentences in Section 8.1, he casually mentions that his analysis assumes that: a) computers can be conscious; and b) Wigner’s Friend’s lab can be sufficiently isolated from the rest of the universe.  Assumption a) is totally unfounded, which I discuss in this paper and this paper and in this post and this post, and I’ll refute assumption b) now.

Why the Schrodinger Cat experiment is not possible, even in principle

Let me start by reiterating the meaning of superposition: a quantum superposition represents a lack of objective fact.  I’m sick of hearing people say things like “Schrodinger’s Cat is partially alive and partially dead.”  No.  That’s wrong.  Imagine an object in a superposition state |A> + |B>.  As soon as an event occurs that correlates one state (and not the other) to the rest of the universe (or the “environment”), then the superposition no longer exists.  That event could consist of a single photon that interacts with the object in a way that distinguishes the eigenstates |A> and |B>, even if that photon has been traveling millions of years through space prior to interaction, and continues to travel millions of years more through space after interaction.  The mere fact that evidence that distinguishes |A> from |B> exists is enough to decohere the superposition into one of those eigenstates.

In the real world there could never be a SC superposition because a dead cat interacts with the universe in very different (and distinguishable) ways from a live cat... imagine the trillions of impacts per second with photons and surrounding atoms that would differ depending on the state of the cat.  Now imagine that all we need is ONE such impact and that would immediately destroy any potential superposition.  (I pointed out in this post how a group of researchers showed that relativistic time dilation on the Earth’s surface was enough to prevent macroscopic superpositions!)  And that’s why philosophers who discuss the possibility of SC often mention the requirement of “thermally isolating” it.  What they mean is that we have to set up the experiment so that not even a single photon can be emitted, absorbed, or scattered by the box/lab in a way that is correlated to the cat’s state.  This is impossible in practice; however, they claim it is possible in principle.  In other words, they agree that decoherence would kill the SC experiment by turning SC into a normal probabilistic mixture, but claim that decoherence can be prevented by the “in-principle possible” act of thermally isolating it.

Wrong.

In the following analysis, all of the superpositions will be location superpositions.  There are lots of different types of superpositions, such as spin, momentum, etc., but every actual measurement in the real world is arguably a position measurement (e.g., spin measurements are done by measuring where a particle lands after its spin interacts with a magnetic field).  So here’s how I’ll set up my SC thought experiment.  At time t0, the cat, measurement apparatus, box, etc., are thermally isolated so that (somehow) no photons, correlated to the rest of the universe, can correlate to the events inside the box and thus prematurely decohere a quantum superposition.  I’ll even go a step further and place the box in deep intergalactic space where the spacetime has essentially zero curvature to prevent the possibility that gravitons could correlate to the events inside the box and thus gravitationally decohere a superposition.  I’ll also set it up so that, when the experiment begins at t0, a tiny object is in a location superposition |A> + |B>, where eigenstates |A> and |B> correspond to locations A and B separated by distance D.  (I’ve left out coefficients, but assume they are equal.)  The experiment is designed so that the object remains in superposition until time t1, when the location of the object is measured by amplifying the quantum object with a measuring device so that measurement of the object at location A would result in some macroscopic mass (such as an indicator pointer of the measuring device) being located at position MA in state |MA>, while a measurement at location B would result in the macroscopic mass being located at position MB in state |MB>.  Finally, the experiment is designed so that location of the macroscopic mass at position MA would result, at later time t2, in a live cat in state |live>, while location at position MB would result in a dead cat in state |dead>.  
Here’s the question: at time t2, is the resulting system described by the superposition |A>|MA>|live> + |B>|MB>|dead>, or by the mixed state of 50%  |A>|MA>|live> and 50% |B>|MB>|dead>?

First of all, I’m not sure why decoherence doesn’t immediately solve this problem.  At time t0, the measuring device, the cat, and the box are already well correlated with each other; the only thing that is not well correlated is the tiny object.  In fact, that’s not even true... the tiny object is well correlated to everything in the box in the sense that it will NOT be detected in locations X, Y, Z, etc.; instead, the only lack of correlation (and lack of fact) is whether it is located at A or B.  But as soon as anything in the box correlates to the tiny object’s location at A or B, then a superposition no longer exists and a mixed (i.e., non-quantum) state emerges.  So it seems to me that the superposition has already decohered at time t1 when the measuring device, which is already correlated to the cat and box, entangles with the tiny object.  In other words, it seems logically necessary that at t1, the combination of object with measuring device has already reduced to the mixed state 50%  |A>|MA> and 50% |B>|MB>, so clearly by later time t2 the cat is, indeed, either dead or alive and not in a quantum superposition.

Interestingly, even before t1, the gravitational attraction by the cat might actually decohere the superposition!  If the tiny object is a distance R>>D from the cat having mass Mcat, then the differential acceleration on the object due to its two possible locations relative to the cat is approximately GMcatD/2R³.  How long will it take for the object to then move a measurable distance δx?  For a 1kg cat located R=1m from the tiny object, t ≈ 170000 √(δx/D), where t is in seconds.  If we require the tiny object to traverse the entire distance D before we call it “measurable” (which is ridiculous but provides a limiting assumption), then t ≈ 170000 s.  However, if we allow motion over a Planck length to be “measurable” (which is what Mari et al. assume!), and letting D be something typical for a double slit experiment, such as 1μm, then t ≈ 1ns.  (This makes me wonder how much gravity interferes with maintaining quantum superpositions in the growing field of quantum computing, and whether it will ultimately prevent scalable, and hence useful, quantum computing.)
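For anyone who wants to check these numbers, here’s a back-of-envelope script.  It’s order-of-magnitude only: I take Δa ~ GMcatD/R³ and drop factors of 2, which is the level of precision at which the figures above hold.

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_cat = 1.0             # cat mass, kg
R = 1.0                 # distance from cat to tiny object, m
D = 1e-6                # separation of superposed locations A and B, m

# Order-of-magnitude differential acceleration between the two branches
# (prefactors of 2 are irrelevant at this precision).
da = G * M_cat * D / R**3

def t_measurable(dx):
    """Time for constant acceleration `da` to produce displacement dx (dx = da*t^2/2)."""
    return math.sqrt(2.0 * dx / da)

print(t_measurable(D))        # ~1.7e5 s: requiring displacement over the full distance D
print(t_measurable(1.6e-35))  # ~7e-10 s: requiring only a Planck-length displacement
```

So under the Mari et al. "Planck-length displacement counts as measurable" assumption, the cat's own gravity plausibly decoheres the superposition in about a nanosecond.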

Gravitational decoherence or not, it seems logically necessary to me that by time t1, the measuring device has already decohered the tiny object’s superposition.  I’m not entirely sure how a proponent of SC would reply, as very few papers on SC actually mention decoherence, but I assume the reply would be something like: “OK, yes, decoherence has happened relative to the box, but the box is thermally isolated from the universe, so the superposition has not decohered relative to the universe and outside observers.”  Actually, I think this is the only possible objection – but it is wrong.

When I set up the experiment at time t0, the box (including the cat and measuring device inside) were already extremely well correlated to me and the rest of the universe.  Those correlations don’t magically disappear by “isolating.”  In fact, Heisenberg’s Uncertainty Principle (HUP) tells us that correlations are quite robust and long-lasting, and the development of quantum “fuzziness” becomes more and more difficult as the mass of an object increases: Δx(mΔv) ≥ ℏ/2.

Let’s start by considering a tiny dust particle, which is much, much, much larger than any object that has currently demonstrated quantum interference.  We’ll assume it is a 50μm diameter sphere with a density of 1000 kg/m³ and an impact with a green photon (λ ≈ 500nm) has just localized it.  How long will it take for its location fuzziness to exceed distance D of, say, 1μm?  Letting Δv = ℏ/(2mΔx) ≈ 1 × 10⁻¹⁷ m/s, it would take 10¹¹ seconds (around 3200 years) for the location uncertainty to reach a spread of 1μm.  In other words, if we sent a dust particle into deep space, its location relative to other objects in the universe is so well defined due to its correlations to those objects that it would take several millennia for the universe to “forget” where the dust particle is within the resolution of 1μm.  Information would still exist to localize the dust particle to a resolution of around 1μm, but not less.  But this rough calculation depends on a huge assumption: that new correlation information isn’t created in that time!  In reality, the universe is full of particles and photons that constantly bathe (and thus localize) objects.  I haven’t done the calculation to determine just how many localizing impacts a dust particle in deep space could expect over 3200 years, but it’s more than a handful.  So there’s really no chance for a dust particle to become delocalized relative to the universe.

So what about the box containing Schrodinger’s Cat?  I have absolutely no idea how large the box would need to be to “thermally isolate” it so that information from inside does not leak out – probably enormous so that correlated photons bouncing around inside the box have sufficient wall thickness to thermalize before being exposed to the rest of the universe – but for the sake of argument let’s say the whole experiment (cat included) has a mass of a few kg.  It will now take 10¹¹ times longer, or around 300 trillion years – or 20,000 times longer than the current age of the universe – for the box to become delocalized from the rest of the universe by 1μm, assuming it can somehow avoid interacting with even a single stray photon passing by.  Impossible.  (Further, I neglected gravitational decoherence due to interaction with other objects in the universe, but 300 trillion years is a long time.  Gravity may be weak, but it's not that weak!)
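Both estimates follow from Δx(mΔv) ≥ ℏ/2 and are easy to reproduce.  In the sketch below, the λ/2π localization scale for the photon impact is my own assumption, chosen because it reproduces the Δv quoted above; everything else is straightforward arithmetic.

```python
import math

hbar = 1.055e-34                       # reduced Planck constant, J*s

def delocalization_time(mass, dx0, spread):
    """Rough time for the HUP momentum uncertainty to spread position by `spread`
    after localization to within dx0:  dv = hbar/(2*m*dx0), t = spread/dv."""
    dv = hbar / (2.0 * mass * dx0)
    return spread / dv

lam = 500e-9                           # green photon wavelength, m
dx0 = lam / (2.0 * math.pi)            # ASSUMPTION: localization scale ~ lambda/2pi

# 50-micron-diameter dust grain, density 1000 kg/m^3.
r = 25e-6
m_dust = 1000.0 * (4.0 / 3.0) * math.pi * r**3    # ~6.5e-11 kg
t_dust = delocalization_time(m_dust, dx0, 1e-6)   # seconds
print(t_dust / 3.15e7)                 # ~3000 years to delocalize by 1 micron

# A few-kg Schrodinger's-Cat box: ~1e11 times the mass, so ~1e11 times longer.
t_box = delocalization_time(3.0, dx0, 1e-6)
print(t_box / (3.15e7 * 13.8e9))       # ~1e4 times the age of the universe
```

The exact prefactor doesn't matter; what matters is that the timescale grows linearly with mass, which is what takes a dust grain's millennia to a cat-box's hundreds of trillions of years.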

What does this tell us?  It tells us that the SC box will necessarily be localized relative to the universe (including any external observer) to a precision much, much smaller than the distance D that distinguishes eigenstates |A> and |B> of the tiny object in superposition.  Thus, when the measuring device inside the box decoheres the superposition relative to the box, it also does so relative to the rest of the universe.  If there is a fact about the tiny object’s position (say, in location A) relative to the box, then there is also necessarily a fact about its position relative to the universe – i.e., decoherence within the box necessitates decoherence in general.  An outside observer may not know its position until he opens the box and looks, but the fact exists before that moment.  When a new fact emerges about the tiny object’s location due to interaction and correlation with the measuring device inside the box, then that new fact eliminates the quantum superposition relative to the rest of the universe, too.

And, by the way, the conclusion doesn’t change by arbitrarily reducing the distance D.  A philosopher might reply that if we make D really small, then eventually localization of the tiny object relative to the box might not localize it relative to the universe.  Fine.  But ultimately, to make the SC experiment work, we have to amplify whatever distance distinguishes eigenstates |A> and |B> to some large macroscopic distance.  For instance, the macroscopic mass of the measuring device has eigenstates |MA> and |MB> which are necessarily distinguishable over a large (i.e., macroscopic) distance – say 1cm, which is 10⁴ times larger than D=1μm.  (At the extreme end, to sustain a superposition of the cat, an atom in a blood cell that at a particular time would be in the cat’s head in state |live> might be in its tail in state |dead>, requiring quantum fuzziness on the order of 1m.)

What this tells us is that quantum amplification doesn’t create a problem where none existed.  If there is no physical possibility, even in principle, of creating a macroscopic quantum superposition by sending a kilogram-scale object into deep space and waiting for quantum fuzziness to appear – whether or not you try to “thermally isolate” it – then you can’t stuff a kilogram-scale cat in a box and depend on quantum amplification to outsmart nature.  There simply is no way, even in principle, to adequately isolate a macroscopic object (cat included) to allow the existence of a macroscopic quantum superposition.

Thursday, June 11, 2020

Quantum Superpositions Are Relative

At 4AM I had an incredible insight.

Here’s the background.  I’ve been struggling recently with the notion of gravitational decoherence of the quantum wave function, as discussed in this post.  The idea is neither new nor complicated: if the gravitational field of a mass located in Position A would have a measurably different effect on the universe (even on a single particle) than the mass located in Position B, then its state cannot be a superposition over those two locations.

Generally, we think of impacts between objects/particles as causing the decoherence of a superposition.  For instance, in the typical double-slit interference experiment, a particle’s wave state “collapses” either when the particle impacts a detector in the far field or we measure the particle in one of the slits by bouncing a photon off it.  In either case, one or more objects (such as photons), already correlated to the environment, get correlated to the particle, thus decohering its superposition.

But what if the decohering “impact” is due to the interaction of a field on another particle far away?  Given that field propagation does not exceed the speed of light, when does decoherence actually occur?  That’s of course the question of gravitational decoherence.  Let’s say that mass A is in a superposition over L and R locations (separated by a macroscopic distance), which therefore creates a superposition of gravitational fields fL and fR that acts on a distant mass B (where masses A and B are separated by distance d).  For the sake of argument, mass B is also the closest object to mass A.  Let’s say that mass B interacts with the field at time t1 and it correlates to fL.  We can obviously conclude that the state of mass A has decohered and it is now located at L... but when did that happen?  It is typically assumed in quantum mechanics that “collapse” events are instantaneous, but of course this creates a clear conflict between QM and special relativity.  (The Mari et al. paper in fact derives its minimum testing time based on the assumption of instantaneous decoherence.)

This assumption makes no sense to me.  If mass B correlates to field fL created by mass A, but the gravitational field produced by mass A travels at light speed (c), then mass A must have already been located at L before mass B correlated to field fL – specifically, mass A must have been located at L on or before time (t1 - d/c).  Thus the interaction of mass B with the gravitational field of mass A could not have caused the collapse of the wave function of mass A (unless we are OK with backward causation).

So for a while I tossed around the idea that whenever a potential location superposition of mass A reaches the point at which the different locations would be potentially detectable (such as by attracting another mass), it would produce something (gravitons?) that would decohere the superposition.  In fact, that’s more or less the approach that Penrose takes by suggesting that decoherence happens when the difference in the gravitational self-energy between spacetime geometries in a quantum superposition exceeds what he calls the “one graviton” level.

The problem with this approach is that decoherence doesn’t happen when differences could be detected... it happens when the differences are detected and correlated to the rest of the universe.  So, in the above example, what actual interaction might cause the state of mass A to decohere if we are ruling out the production (or even scattering) of gravitons and neglecting the effect of any other object except mass B?  Then it hit me: the interaction with the gravitational field of mass B, of course!  Just as mass A is in a location superposition relative to mass B, which experiences the gravitational field produced by A, mass B is in a location superposition relative to mass A, which experiences the gravitational field produced by B.  Further, just as from the perspective of mass B at time t1, the wave state of mass A seems to have collapsed at time (t1 - d/c)... also from the perspective of mass A at time t1, the wave state of mass B seems to have collapsed at time (t1 - d/c).

In other words, the “superposition” of mass A only existed relative to mass B (and perhaps the rest of the universe, if mass B was so correlated), but from the perspective of mass A, mass B was in a superposition.  What made them appear to be in location superpositions relative to each other was that they were not adequately correlated, but eventually their gravitational fields correlated them.  When mass B claims that the wave state of mass A has “collapsed,” mass A could have made the same claim about mass B.  Nothing actually changed about mass A; instead, the interaction between mass A and mass B correlated them and produced new correlation information in the universe.

Having said all this, I have not yet taken quantum field theory, and it’s completely possible that I’ve simply jumped the gun on stuff I’ll learn at NYU anyway.  Also, as it turns out, my revelation is strongly related, and possibly identical, to Carlo Rovelli’s Relational interpretation of QM.  This wouldn’t upset me at all.  Rovelli is brilliant, and if I’ve learned and reflected enough on QM to independently derive something produced by his genius, then I’d be ecstatic.  Further, my goal in this whole process is to learn the truth about the universe, whether or not someone else learned it first.  That said, I think one thing missing from Rovelli’s interpretation is the notion of universal entanglement that gives rise to a preferred observer status.  If the entire universe is well correlated with the exception of a few pesky microscopic superpositions, can’t we just accept that there really is just one universe and corresponding set of facts?  Another problem is the interpretation’s dismissal of gravitational decoherence.  In fact, it was my consideration of distant gravitational effects on quantum decoherence, as well as implications of special relativity, that led me to this insight, so it seems odd that Rovelli seems to dismiss such effects.  A further problem is the interpretation’s acceptance of Schrodinger’s Cat (and Wigner’s Friend) states.  I think it extraordinarily likely – and am on a quest to discover and prove – that macroscopic superpositions large enough to encompass a conscious observer, even a cat, are physically impossible.  Nevertheless, I still don’t know much about his interpretation, so it’s time to do some more reading!

Sunday, June 7, 2020

How Science Brought Me To God

This post was inspired by my sister, who has been struggling recently with questions about God, purpose, meaning, and many other big philosophical questions.

Let me start by saying that I’m not a Christian (or a Buddhist or a Muslim or a Jew or a Rastafarian blah blah blah), and never will be.  Christianity is a set of very specific stories and beliefs, of which the belief in a Creator is a tiny subset.  Belief in God does not imply belief in Christianity or any other religion.  It is truly astonishing how many scientists (and physicists in particular) don’t seem to understand that last sentence.  It’s incredible how often physicists will say something like: “When I was in Sunday School, I learned about Jesus walking on water.  But as a scientist, I learned that walking on water violates the laws of physics.  Therefore god does not exist.”  The conclusion simply doesn’t follow from the premises.

In my own progress in physics, I am finding much of the academic literature infested with bad logic and unsound arguments.  One of my more recent posts points to a heavily cited article that claimed to empirically refute the consciousness-causes-collapse hypothesis (“CCC”).  The authors started by characterizing CCC as an if-then statement of the form A→B (read “A implies B” or “if A, then B”), which was essentially correct.  (The actual statements are irrelevant to the point I’m making in this post, but my actual paper can be found here.)  Then, without explanation, they re-characterized CCC as A→C, but this re-characterization would only be valid if B→C.  Setting aside the fact that B→C blatantly contradicts quantum mechanics, the authors didn’t even seem to notice the unfounded logical jump they had made.  Simply having taken graduate-level philosophical logic has already provided me a surprising leg-up in the study and analysis of physics.

Why do I take such pains to explain that my belief in God does not imply belief in any particular religion or set of stories?  Because my search for a physical explanation of consciousness, and my pursuit of some of the hard foundational questions in physics, already puts me on potentially thin ice in the physics academy, and mentioning God (with a capital G) may very well put me over the edge into the realm of “crackpot.”  Luckily, I’m in the position of not needing to seek anyone’s approval; having said that, I would ultimately like to collaborate with and influence other like-minded physicists and don’t want to immediately turn them off with any suggestion that I’m a Christian.  I also don’t intend to turn off any Christian readers... my wife and one of my best friends are Christians.  My point is that Christianity includes a very specific set of concepts and stories that far exceed mere theism and may be understandably off-putting to physicists.

With all the caveats in place, here’s the meat of this blog post: Science has in fact brought me to God, in large part via the Goldilocks Enigma, better known as the “fine-tuning” problem in physics.

Paul Davies, a cosmologist at Arizona State, wrote a fascinating book called The Goldilocks Enigma.  Essentially, there are more than a dozen independent parameters, based in part on the Standard Model of particle physics, that had to be “fine-tuned” to within 1% or so in order to create a universe that could create life.  (The phrase “fine-tuned” itself suggests a Creator, but that’s not how Davies means it.)  One example might be the ratio of the gravitational force to the electromagnetic force.  A star produces energy via the fusion of positively charged nuclei, primarily hydrogen nuclei.  Electrostatic repulsion makes it difficult to bring two fusible nuclei sufficiently close, but gravity solves this problem if the object is really massive, like a star.  The core of a star then experiences the quasi-equilibrium condition of gravity squeezing lots of hydrogen nuclei together counterbalanced by the outward pressure of an extremely high-temperature gas, thus producing fusion energy at a more-or-less constant rate.  This balance in our Sun gives it a lifetime of something like 10 billion years before its fuel will be mostly spent.

Here’s the problem: if the gravitational force had been 1% higher than it is, then the Sun would have burned up far too quickly for life to evolve, while if the force had been 1% less than it is, the Sun would have produced far too little radiation for life to evolve.  (It is generally thought that liquid water, which exists in the narrow range of 273–373 K, is a requirement for life, although this is not necessary for the current argument.)  In other words, the ratio of gravity to electric repulsion had to be in the “Goldilocks” zone: not too big, not too small... just right.

The likelihood of that ratio being “just right” is very small.  And you might think this is just a coincidence.  That’s certainly what a lot of physicists will say.  But remember that there are at least 26 such free parameters in nature that happen to be “just right” in the same way, and (small probability)^26 = (really freaking unbelievably tiny probability).  The probability is so tiny as to be effectively zero.
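To see how fast independent small probabilities compound, here is a toy calculation (the 1% figure and the count of 26 come from the argument above; treating each parameter as an independent 1-in-100 chance is my simplifying assumption for illustration, not a physical claim):

```python
# Toy compounding of fine-tuning odds.  ASSUMPTION: each of the 26
# parameters independently has a 1-in-100 chance of landing in its
# "Goldilocks" range -- illustrative numbers, not measured priors.
p_one = 0.01           # chance a single parameter is "just right"
n_params = 26          # free parameters cited above
p_all = p_one ** n_params
print(f"{p_all:.1e}")  # 1.0e-52 -- effectively zero
```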

If you have already dismissed any possibility of a Creator, then one way – perhaps the only way – to explain away such a fantastically tiny probability is to posit the existence of infinitely many universes and then invoke the so-called “Anthropic Principle” to conclude that such an unlikely event must be possible because, if it weren’t, we wouldn’t exist to notice!  After all, if everything that is possible actually exists somewhere, then extremely unlikely events, even events whose probability is actually zero, will occur.  In other words, (infinitesimal) * (infinity) = 1.  Said another way:  0 * ∞ = 1.

For the record, I made the same argument in a book I wrote at age 13, called Knight’s Null Algebra, which claimed to “disprove” algebra.  Just as anything logically follows from a contradiction (“If up is down, then my name is Bob” is a true statement), anything follows from infinity.  Infinity makes the impossible possible.  But this is philosophical nonsense.  Infinity doesn’t exist in nature.  Nevertheless, many physicists and cosmologists with (as far as I know) functioning cerebrums actually believe in the existence of infinitely many universes, although they give it a fancy name: the Multiverse.

Here are my problems with the Multiverse:
·         There is not a shred of empirical evidence that there is such a thing.
·         Because the Multiverse includes universes that are beyond our cosmological horizon and are forever inaccessible to us, no empirical evidence ever can exist to test the concept.
·         Any concept or hypothesis that cannot be tested is not in the realm of science.
·         Any scientist who endorses the Multiverse concept is not speaking scientifically or as a scientist (even though s/he may pretend to).

Setting aside all these problems with the Multiverse concept, it should be pointed out that anyone who dismisses any possibility of a Creator, and thus desperately embraces infinity to dismiss the Goldilocks enigma, is not being scientific anyway.  One can make arguments for or against the existence of God; one can lean toward theism or atheism; but anyone who states with certainty that God does or does not exist is not speaking scientifically.  And that’s OK.  There’s nothing wrong with a scientist having opinions one way or another or with making arguments one way or another, just as I’ve done in this post.  But it is a problem when scientists speak from the academic pulpit, intimidating people with their scientific degrees and credentials, to bully people into accepting their philosophical opinions as if they were scientific facts.  (Richard Dawkins should have lost his membership in the scientific academy long ago, now that he spews untestable pseudoscientific gibberish, but he has in fact been celebrated instead of ostracized by the academy.)

My point is this: I believe that the Goldilocks Enigma is a very strong reason to believe in a Creator, while the Multiverse counterargument is an untestable and nonscientific theory usually uttered by people (scientists or otherwise) who are not speaking scientifically.

I am truly and utterly amazed and overwhelmed by the vastness, beauty, and unlikeliness of the Universe.  And the more I learn about physics, the more awed I become.  For instance, if the information in the universe is related to universal entanglement, then every object is entangled with essentially every other object in the universe in ways that correlate their positions and momenta to within quantum uncertainty.  That is absolutely, utterly, incomprehensibly amazing.  The more I learn about physics, the closer I come to God.

Saturday, June 6, 2020

Unending Confusion in the Foundations of Physics

Quantum mechanics is difficult enough without physicists mucking it all up.  Setting aside the problem that they speak in a convoluted language that is often independent of what’s actually happening in the observable physical world, they are sometimes fundamentally wrong about their own physics.

In 2007, a researcher named Afshar published a paper on a fascinating experiment: he inferred the existence of a double-slit interference pattern from the fact that thin wires, placed where destructive interference would be expected, failed to significantly reduce the amount of light passing through.  It was clever and certainly worthy of publication.

But he took it a step too far and claimed that the experiment showed a violation of wave-particle complementarity – in other words, he asserted that the photons showed both wave-like and particle-like behavior at the same time.  The first claim is correct: the existence of interference in the far field of the double slit indicated wave behavior.  But the second claim (the simultaneous particle-like behavior) is not, because it depended on his assertion that which-way information, which inherently does not and cannot exist for a photon in a superposition over two slits, could be established retroactively by a later measurement.
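For readers unfamiliar with complementarity, here is a minimal two-slit sketch (a standard textbook model of my own, not Afshar's actual apparatus) showing why which-way information and fringes exclude each other:

```python
import numpy as np

# Screen positions expressed as the relative phase phi between the two paths.
phi = np.linspace(0.0, 2.0 * np.pi, 5)
a_left = 1.0                 # amplitude via the left slit (reference phase)
a_right = np.exp(1j * phi)   # amplitude via the right slit

# No which-way information: amplitudes add, giving fringes 1 + cos(phi).
fringes = np.abs(a_left + a_right) ** 2 / 2.0

# Which-way info recorded in an orthogonal marker state: the cross term
# vanishes, so probabilities add and the pattern is flat.
flat = (np.abs(a_left) ** 2 + np.abs(a_right) ** 2) / 2.0

print(np.round(fringes, 3))  # oscillates between 2 and 0
print(np.round(flat, 3))     # all ones: no fringes
```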

I feel like Afshar can be excused for this mistake, for two reasons.  First, the mistake has its origins in a very reputable earlier reference by famed physicist John Wheeler.  Second, his experiment was new, useful, and elucidating for the physics community.  Having said that, the mistake represents such a fundamental misunderstanding of the very basics of quantum mechanics that it should have been immediately and unambiguously refuted – and then brought up no more.  But that’s not what happened.  What happened is this:

·         The paper is cited by over a hundred papers, very few of which refute it.
·         Among those that refute it, several refute it incorrectly.
·         Those that refute it correctly use over a hundred pages and several dozen complicated quantum mechanics equations.  Their inability to address and solve the problem clearly and succinctly only obfuscates what is already an apparently muddled issue.

Here is my two-page refutation of Afshar.

How exactly are physics students ever going to understand quantum mechanics when the literature on the foundations of physics is so confused and internally inconsistent?

Tuesday, June 2, 2020

Consciousness, Quantum Mechanics, and Pseudoscience

The study of consciousness is not currently “fashionable” in the physics community, and the notion that there might be any relationship between consciousness and quantum mechanics and/or relativity truly infuriates some physicists.  For instance, the hypothesis that consciousness causes collapse (“CCC”) of the quantum mechanical wave function is now considered fringy by many; a physicist who seriously considers it (or even mentions it without a deprecatory scowl) risks professional expulsion and even branding as a quack.

In 2011, two researchers took an unprovoked stab at the CCC hypothesis in this paper.  There is a fascinating experiment called the “delayed choice quantum eraser,” in which information appears to be erased from the universe after a quantum interference experiment has been performed.  The details don’t matter.  The point is that the researchers interpret the quantum eraser experiment as providing an empirical falsification of the CCC hypothesis.  They don’t hide their disdain for the suggestion that QM and consciousness may have a relationship.

The problem is: their paper is pseudoscientific shit.  They first make a massive logical mistake that, despite the authors’ contempt for philosophy, would have been avoided had they taken a philosophy class in logic.  They follow up that mistake with an even bigger blunder in their understanding of the foundations of quantum mechanics.  Essentially, they assert that the failure of a wave function to collapse always results in a visible interference pattern, which is just patently false.  They clearly fail to falsify the CCC hypothesis.  (For the record, I think the CCC hypothesis is likely false, but I am reasonably certain that it has not yet been falsified.)

Sure, there’s lots of pseudoscience out there, so why am I picking on this particular paper?  Because it was published in Annalen der Physik, the same journal in which Einstein published his groundbreaking papers on special relativity and the photoelectric effect (among others), and because it’s been cited by more than two dozen publications so far (often to attack the CCC hypothesis), only one of which actually refutes it.

What’s even more irritating is that the paper’s glaring errors could easily have been caught by a competent journal referee who had read the paper skeptically.  If the paper’s conclusion had been in support of the CCC hypothesis, you can bet that it would have been meticulously and critically analyzed before publication, assuming it was considered for publication at all.  But when referees already agree with a paper’s conclusion, they may be less interested in the logical steps taken to arrive at that conclusion.  A paper that comes to the correct conclusion via incorrect reasoning is still incorrect.  A scientist who rejects correct reasoning because it results in an unfashionable or unpopular conclusion is not a scientist.

Here is a preprint of my rebuttal to their paper.  Since it is intended to be a scholarly article, I am much nicer there than I’ve been here.

Monday, May 25, 2020

Speaking the Wrong Language

In my last post, I pointed out a fundamental problem in a particular paper – although the same problem appears in lots of papers: specifically, that there is no way to test whether an object is in a quantum superposition.  I feel like this is a point that many physicists and philosophers of physics overlook, so to be sure, I went ahead and posted the question on a few online physics forums, such as this one.  Here’s basically the response I got:
Every state that is an eigenstate of a first observable is obviously in a superposition of eigenstates of some second observable that does not commute with the first.  Therefore: of course you can test whether an object is in a quantum superposition.  Also, you are an idiot.
OK, so they didn’t actually say that last part, but it was clearly implied.  If you don’t speak the language of quantum mechanics, let me rephrase.  Quantum mechanics tells us that there are certain features (“observables”) of a system that cannot be measured/known/observed at the same time, thus the order of measurement matters.  For example, position and momentum are two such observables, so measuring the position and then the momentum will inevitably give different results from measuring the momentum and then the position – that is, the position and momentum operators do not commute.  And because they don’t commute, an object in a particular position (that is, “in an eigenstate of the position operator”) does not have a particular momentum, which is to say that it is in a superposition of all possible momenta.  In other words, the above response basically boils down to this: quantum mechanically, every state is a superposition.
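To make the forum answer concrete, here is the standard qubit version (my own minimal example, not taken from the forum thread): a state with a definite value of one observable is an equal superposition in the eigenbasis of a noncommuting observable.

```python
import numpy as np

up_z = np.array([1.0, 0.0])                  # definite sigma_z eigenstate
plus_x = np.array([1.0, 1.0]) / np.sqrt(2)   # the two sigma_x eigenstates
minus_x = np.array([1.0, -1.0]) / np.sqrt(2)

# Expand |up_z> in the x basis: both coefficients are 1/sqrt(2),
# i.e. a "definite" z state is an equal superposition of x states.
c_plus = plus_x @ up_z
c_minus = minus_x @ up_z
print(round(c_plus, 4), round(c_minus, 4))  # 0.7071 0.7071
```

This is all the forum responses were really saying: superposition is relative to a choice of basis.  It says nothing about whether a single measurement can certify a superposition, which was the actual question.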

Fine.  The problem is that this response has nothing to do with the question I was asking.  I ended up having to edit my question to ask whether any single test could distinguish between a “pure” quantum superposition versus a mixed state (which is a probabilistic mixture), and even then the responses weren't all that useful.

This is why I think the big fundamental problems in physics will probably not be solved by insiders.  They speak a very limited language that, by its nature, limits a speaker’s ability to discover and understand the flaws in the system it describes.  My original question, I thought, was relatively clear: is it actually possible, as Mari et al. suggest, to receive information by measuring (in a single test) whether an object is in a macroscopic quantum superposition?  But when the knee-jerk response of several intelligent quantum physicists is to discuss the noncommutability of quantum observables and come to the irrelevant (and, frankly, condescending) point that all states are superpositions and therefore of course we can test whether an object is in superposition – well, it makes me wonder whether they actually understand, at a fundamental level, what a quantum superposition is.  I feel like there’s a huge disconnect between the language and mathematics of physics, and the actual observable world that physics tries to describe. 

Tuesday, May 19, 2020

It is Impossible to Measure a Quantum Superposition

In a previous post, I discussed how and to what extent gravity might prevent the existence of macroscopic quantum superpositions.  There has been surprisingly little discussion of this possibility and there is still debate on whether gravity is quantized and whether gravitational fields are, themselves, capable of existing in quantum superpositions.

Today I came across a paper, "Experiments testing macroscopic quantum superpositions must be slow," by Mari et al., which proposes and analyzes a thought experiment: a mass mA is placed in a position superposition in Alice’s lab, and its gravitational field potentially affects a test mass mB in Bob’s lab (a distance R away), depending on whether or not Bob turns on a detector.  The article concludes that special relativity puts lower limits on the amount of time necessary to determine whether an object is in a superposition of two macroscopically distinct locations.

The paper seems to have several important problems, none of which have been pointed out in papers that cite it, notably this paper.  For example, its calculation of the entanglement time TB assumes that the location of test mass mB becomes correlated with the gravitational field of mass mA when the change in position δx of mB exceeds its quantum uncertainty Δx, which seems like a reasonable argument – except that they failed to include the increase in quantum uncertainty due to dispersion.  (This is particularly problematic where they let Δx be the Planck length!)  Another problem is their proposed experiment in Section IV: Alice is supposed to apply a spin-dependent force to mass mA, producing different quantum states depending on whether or not Bob turned on the detector, but both of those quantum states correlate to mass mA being located at L (instead of R).  The problem is that by the time Alice has applied the force, Bob’s test mass mB has presumably already correlated to the gravitational field produced by mass mA at L or R; but how could that happen before Alice applied the force that caused mass mA to be located at L?
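For reference, the spreading that seems to be omitted is just the textbook dispersion of a free Gaussian wavepacket of mass m and initial width Δx₀:

```latex
\Delta x(t) \;=\; \Delta x_0 \sqrt{1 + \left( \frac{\hbar t}{2 m \, \Delta x_0^2} \right)^2 }
```

The smaller the initial width, the faster the packet spreads, which is why taking Δx₀ to be the Planck length makes neglecting dispersion especially hard to justify.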

But the biggest problem with the paper is not in their determination of the time necessary to determine whether an object is in a superposition of two macroscopically distinct locations.  No – the bigger problem is that, as far as I understand, there is no way to determine whether an object is in a superposition at all! 

Wait, what?  Obviously quantum superpositions exist.

Yes, but a superposition is determined by doing an interference experiment on a bunch of “identically prepared” objects (or particles or masses or whatever).  The idea is that if we see an interference pattern emerge (e.g., the existence of light and dark fringes), then we can infer that the individual objects were in coherent superpositions.  However, detection of a single object never produces a pattern, so we can’t infer whether or not it was in a superposition.  Further, the outcome of every interference experiment on a superposition state, if analyzed one detection at a time, will be consistent with that object not having been in superposition.  A single trial can confirm that an object was not in a superposition (such as if we detect a blip in a dark fringe area), but no single trial can confirm that the object was in a superposition.  Moreover, even if a pattern does slowly emerge after many trials, every pattern produced by a finite number of trials – and remember that infinity does not exist in the physical world – is always a possible random outcome of measuring objects that are not in a superposition.  We can never confirm the existence of a superposition, but lots and lots of trials can certainly increase our confidence.
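The asymmetry between refuting and confirming a superposition can be made precise with density matrices (a minimal qubit sketch of my own, not the massive objects discussed above): a coherent superposition and a classical mixture give identical statistics in one basis, and even in the interference basis a single outcome can only ever rule the superposition out.

```python
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(plus, plus)     # coherent superposition |+><+|
rho_mixed = 0.5 * np.eye(2)         # classical 50/50 mixture of |0>, |1>

z0 = np.array([1.0, 0.0])                     # a computational-basis outcome
x_minus = np.array([1.0, -1.0]) / np.sqrt(2)  # a "dark fringe" outcome

# Measured in the z basis, the two preparations are indistinguishable:
p_pure = float(z0 @ rho_pure @ z0)
p_mixed = float(z0 @ rho_mixed @ z0)
print(round(p_pure, 6), round(p_mixed, 6))  # 0.5 0.5

# In the interference (x) basis, the pure state never yields |->; the
# mixture does half the time.  A single |-> click therefore refutes the
# superposition, but a |+> click is consistent with both preparations,
# so no single trial can confirm it.
q_pure = float(x_minus @ rho_pure @ x_minus)
q_mixed = float(x_minus @ rho_mixed @ x_minus)
print(round(q_pure, 6), round(q_mixed, 6))  # 0.0 0.5
```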

In other words, if I’m right, then every measurement that Alice makes (in the Mari paper) will be consistent with Bob's having turned the detector on (and decohered the field) -- thus, no information is sent!  No violation of special relativity!  No problem!

Look, I could be wrong.  I’ve been studying the foundations of quantum mechanics independently for a couple of years now, and very, very few references point out that there’s no way to determine if any particular object is in a quantum superposition, which is also why it’s taken me so long to figure it out.  So either I’m wrong about this, or there’s some major industry-wide crazy-making going on in the physics community that leads to all kinds of wacky conclusions and paradoxes... no wonder quantum mechanics is so confusing!

Is there a way to test whether a particular object is in a coherent superposition?  If so, how?  If not, then why do so few discussions of quantum superpositions mention this?

Update to this post here

Why Special Relativity Prevents Copying Conscious States

I was honored to be asked by Kenneth Augustyn to present to the 3rd Workshop on Biological Mentality on Jan. 6, 2020.  The talk was entitled, “Why Mind Uploading, Brain Copying, and Conscious Computers Are Impossible.”  While the talk addressed work in my earlier papers, it offers a clearer argument explaining why Special Relativity prevents the existence of physical copies of conscious states -- specifically, why two physical instances of a conscious state located at different points in spacetime, whether spacelike or timelike separated, would require either superluminal or backward causation.  I also show that because conscious states cannot be copied or repeated, consciousness cannot be algorithmic and cannot be created by a digital computer.  I mention some possible explanatory hypotheses, several of which are related to quantum mechanics, such as Quantum No-Cloning.  Finally, I touch on my related work on whether conscious states are history dependent.

This 36-minute video is probably my clearest video explanation so far as to why mind uploading and conscious computers are inconsistent with Special Relativity.

Friday, May 8, 2020

Into the Lion's Den

Two years ago, I sold my businesses and “retired” so that I could focus full-time on learning about, addressing, and attempting to solve some of the fundamental questions in physics and philosophy of mind... things like the physical nature of consciousness, whether we have free will, the measurement problem in quantum mechanics, etc.  What gave me the audacity to think I might be able to tackle these problems where so many have failed before?  Well, first, tackling a problem only requires desire.  I find these big-picture questions fascinating and looked forward to learning, analyzing, and at least trying.  But I did think I had a reasonable shot at actually solving some of these mysteries.  Why?

While I don’t (yet) have a degree in physics or philosophy, I do have an undergraduate and master’s degree in nuclear engineering as well as a law degree (which is certainly applicable to philosophical reasoning), and have taken lots of physics and philosophy classes along the way.  As an example, I’ve taken graduate-level quantum mechanics, or a course closely related or heavily dependent on QM, at UF, MIT, Princeton, and ECU, and even a fascinating course called Philosophy of Quantum Mechanics.  In other words, I’m no expert – and I plan to continue graduate studies in physics – but I certainly have more than a superficial understanding of physics.

It takes more than education to solve problems; it also takes creativity and a willingness to say or try things that others won’t.  As the sole inventor of 17 U.S. patents on a wide variety of inventions, from rocket engines to software to pumps to consumer products, I’ve always felt confident in my ability to solve problems creatively.  As for independence – let’s just say I’ve always been a maverick.  As an example, while in law school I realized that a loophole in American patent law allowed for the patenting of fictional storylines, so I published an article to that effect.  Over the next couple years, at least six law review articles were published specifically to argue that I was full of shit: great evidence that I was actually on to something!  (Since then the courts closed the loophole.)  I’m not trying to list my CV – just to explain my state of mind when I started this process.  I had plenty of free time, an independent spirit, a history of creativity in solving problems, and a strong and relevant educational foundation.  This gave me confidence that I was in a better position than most to actually solve an important riddle.  I also figured, perhaps naively, that the field of physics was one place where novel approaches, critical thinking, and objective analysis would be rewarded.

I jumped right in.  After extraordinary amounts of research and independent thought, I soon realized that special relativity would cause problems for copying or repeating conscious states.  I wrote my first paper on the topic; the most recent iteration is here.  Not long after that, I realized that QM would also, independently of relativity, cause problems for copying or repeating conscious states, and wrote my second paper; the most recent iteration is here.  In July, 2018, I sent my first paper to the British Journal for the Philosophy of Science; it was summarily rejected without comments or review.  Fuck them.  Over the next year and a half, I submitted it to four more journals, and despite getting close to publication with one, the paper was ultimately rejected by all.  Over the same period, I submitted my second paper to three journals and, again, despite getting close to publication with one, the paper was ultimately rejected.  What had gone wrong?  Was I in over my head? 

Regarding the first paper, the same objection kept coming up over and over: that copying the physical state of a person does not necessarily copy that person’s identity.  Without getting too technical, my argument was that whether or not a person’s identity depends on their underlying physical state, special relativity implied the same conclusion.  But no matter how I replied, the conversation always felt like this:
Them: “How do you know that copying a person’s physical state would copy their identity?” 
Me: “I don’t.  But if it does, then copying that state violates special relativity.  If it doesn’t, then there is nothing to copy.  Either way, we can’t copy a person’s identity.”
Them: “But wait.  First you need to show that copying a person’s physical state would copy their identity.”
Me: “No, I don’t.  Consider statements A and B.  If A→B, and also ¬A→B, then B is true, and we don’t need to figure out if A is true.”
Them: “Hold on.  How can you be so sure that statement A is true?...”
Me: [Banging head against wall]
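The move in my reply is the classical constructive-dilemma schema, and a brute-force truth table (a quick sketch of my own, nothing from the papers themselves) verifies it:

```python
from itertools import product

def implies(p, q):
    # Material conditional: p -> q is false only when p is true and q is false.
    return (not p) or q

# If (A -> B) and (not-A -> B) both hold, B holds -- whatever A's truth value.
for A, B in product([True, False], repeat=2):
    if implies(A, B) and implies(not A, B):
        assert B  # B follows without ever settling the truth of A
print("valid: (A -> B) and (~A -> B) entail B")
```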

It’s literally crazymaking.  No one seemed to have a problem with the physics or the implications of special relativity.  Instead, their problem almost always boiled down to the concept of identity and its relationship to physical reality.  I suspect that what’s happening is that people find a conclusion they’re uncomfortable with – such as “mind uploading is impossible” or “consciousness is not algorithmic” – and then work backward to find something they can argue with... and that something always happens to be some variation on “How do you know that statement A is true?”  I don’t know if it’s a case of intentional gaslighting or unintentional cognitive dissonance, but either way it took me a long time to finally rebuild my confidence, realize I’m not crazy, completely rewrite the paper to address the identity issue head-on, and submit it to a new journal.

Regarding the second paper, the referee of the third journal brought up what I believed, at the time, was a correct and fatal objection.  But by then, I had experienced 18 continuous months of essentially nothing but rejection, criticism, or being ignored (which is sometimes worse).  Prior to that, I’d spent so much of my life feeling confident about my ability to think clearly and rationally, to solve problems creatively, to analyze arguments skeptically, and to eventually arrive at correct conclusions.  So by the time I received that final rejection, I threw the paper aside and basically forgot about it – until about two weeks ago.  Somehow the human spirit can reawaken.  I took a look at the paper with fresh eyes, fully expecting to confirm the fatal error, but found exactly the opposite.  I (and the journal referee) had been wrong about my being wrong.  In other words, the error that had been pointed out, as it turns out, was not an error.  That isn’t to say that my reasoning and conclusions in the paper are ultimately correct – there could still be other errors – but the referee had been wrong.  What I argued in my second paper is original and it just may be right.  If so, its implications are important and potentially groundbreaking.  The paper needs to be rewritten, the physics tightened, and the arguments cleaned up: a project for another day.

As for now, here’s the problem I face.  On one hand, answers to some of the deepest and most important questions plaguing humanity for millennia are finally starting to become accessible via science, particularly physics.  On the other hand, it has become, for whatever reason, out of vogue in the physics community to research or even discuss these issues, which is odd for many reasons.  First, many of the giants of physics, even in modern history, routinely debated them, including Einstein, Bohr, Wigner, and Feynman.  Second, physics has itself produced several of these hard questions (like the QM measurement problem and the inconsistency between QM and general relativity).  But because physicists rarely talk about these big-picture and foundational questions, and because there’s essentially no funding to research them, the conversations are typically left to: a) self-made or retired mavericks who don’t need funding (e.g., Roger Penrose); b) writers who profit on popular viewpoints (e.g., Sean Carroll and Deepak Chopra); c) academic philosophers who may or may not (but typically don’t) have any formal training in physics; and d) crackpots, nutjobs, and wackadoodles.  And there are a LOT of wackadoodles; category d) might dwarf the others by a factor of 100, and occasionally even includes members of the other categories.  The Internet is teeming with “amateur physicists” with their own solutions to quantum gravity, theories about “quantum consciousness” (whatever the hell that is), yada yada.

I am in category a), but I understand, if only on statistical grounds, why I’d be assumed to be in category d).  The thing is, maybe I am a little crazy.  But the solutions to the big problems in physics, cosmology, and philosophy of mind are not going to come from tweaking the same old shit we’ve been tweaking for the past century.  They are going to require truly revolutionary ideas, and those ideas, when first proposed, WILL seem crazy.  I want to be open-minded, diligent, and creative enough to explore the crazy, revolutionary ideas that ultimately lead to the correct solutions.  Still, the hardest challenge of all will be maintaining my confidence throughout the process.  Not only will I be continually discouraged by incorrect solutions, but I suspect that my journey will be somewhat lonely.

Blogger and theoretical physicist Sabine Hossenfelder points out that stagnation in physics is in large part due to a feedback mechanism in which those who pull the strings – journal editors, those who award grant funding, members of academic tenure committees, etc. – tend to reward what is most familiar to them and popular with their peers.  This has the effect of stifling innovation.  Her solution: “Stop rewarding scientists for working on what is popular with their colleagues.”  Lee Smolin made a similar point in his article, “Why No ‘New Einstein’?”  He says that the current system of academic promotion and publication has “the unintended side effect of putting people of unusual creativity and independence at a disadvantage.”  Despite the current publish-or-perish system that incentivizes scientists to do “superficial work that ignores hard problems,” the field of physics is actually “most often advanced by those who ignore established research programs to invent their own ideas and forge their own directions.”

In other words, even though I didn’t know it when I began this process two years ago, it was a foregone conclusion that my intention to independently and creatively attack some of the hard foundational problems in science would be met with contempt, condescension, and unresponsiveness.

I am planning to begin a master’s program in physics at NYU in the fall.  NYU has some of the world’s best (or at least most academically well regarded) faculty in the fields of cosmology, the foundations of physics, the philosophy of physics, and the philosophy of mind.  But I will be entering with eyes wide open: into the lion’s den.  I certainly hope some of the faculty will be legitimately interested in answering some of the big questions – and will be responsive to and encouraging of original approaches – but I won’t expect it.  Instead, I will enter with low expectations, understanding clearly that any progress I make in answering the big questions may be despite, not because of, the physics academy.  I will hope to remain guided by a burning curiosity, a passion to learn and understand, and a confidence in my abilities to think, analyze, and create.  Please wish me luck.