World's First Proof that Consciousness is Nonlocal

Welcome to my blog! I am the author of the world's FIRST paper (explained here on my YouTube channel) to appear in the academic lite...

Wednesday, November 3, 2021

Afterlife, Reversibility, and the House of Pleasure

Eleven years ago, I posted a philosophical problem, which I called “The House of Pleasure,” on various online forums, such as this.  (The complete problem is copied at the end of this post.)  I posted this long before my foray into the philosophy of physics and consciousness, beginning in 2018, and I just realized how incredibly insightful it was, particularly regarding my recent innovations and realizations about the impossibility of physical reversibility (also here and here).

Physically reversible systems can only be made so large – and that threshold is significantly smaller than a cat, Wigner’s Friend, or any reasonably useful quantum computer.  (That de facto threshold is what renders impossible the scalability of quantum computing.)

Essentially, the House of Pleasure (“HOP”) problem asks what you would consciously experience if, after a four-hour intensely pleasurable event, your brain and body are returned to their exact physical state just prior to the event.  I realized, correctly, that you would not consciously experience the event at all; you would consciously experience “skipping over” the event as if it hadn’t happened.  Therefore, if you did consciously experience the event, you could be certain that your brain/body would not later be returned to their physical state prior to the event.

As it turns out, this insight parallels the actual reasoning for why macroscopic physical systems are irreversible.  For instance:

·       In a system (that has evolved from state Ψ(t1) to Ψ(t2)) that is time reversed back to state Ψ(t1), there remains no physical evidence of the existence of the system in state Ψ(t2); thus from a scientific standpoint, the system never evolved to state Ψ(t2) in the first place.

·       Time does not pass/progress in a system that ostensibly evolves Ψ(t1) → Ψ(t2) → Ψ(t1).  Any and every internal clock of the system (including, but not limited to, radioactive decay, entropy increases, quantum collapse events, the ticking of an actual clock, etc.), when the system is in state Ψ(t1), states the time as t1, even if external observers would disagree.

·       A conscious measurement by Wigner’s Friend is impossible as a logical contradiction.  (I’ve argued that in lots of papers and posts, but this Physical Review Letters paper makes an incredibly similar point.)

In other words, by the time an event has been consciously experienced, it is already too late to turn back time and return your physical state to an earlier state.  I’ve argued that irreversibility happens long before conscious awareness – and therefore that consciousness does not cause collapse of the wave function – but one’s conscious awareness of an event is sufficient evidence that the possibility of reversibility has been foreclosed. 

Having said that, I’ll analyze the original HOP problem and point out an error.  First, the intent of the thought experiment was to give a logical argument for the existence of an afterlife (specifically, eternal consciousness). 

Recall the doorman’s caveat from the original problem: “When you leave after four hours, your brain will be scanned again.  It will be returned to the exact physical state it started in when you first entered.  In other words, your memory of the experience will be completely erased.”

It’s true that returning your brain/body to their exact physical states prior to entering HOP implies a complete and permanent erase of memories; however, the converse (that a complete and permanent erase of memories implies returning your brain/body to their exact physical states prior to entering HOP) is not necessarily true. 

I correctly concluded that my conscious experience of HOP precludes the possibility of my brain/body being returned to their exact physical states prior to HOP.  (My “problematic” intuition that my “perception of the experience depends on what happens afterward” is not actually problematic; it simply indicates the impossibility of physical reversibility after my conscious observation of HOP.)  However, the argument (as presented) did not properly conclude that my conscious experience of HOP precludes the possibility of complete and permanent memory erasure.  If it did, then the following argument and conclusion would have been correct:

If my memory of a time period will be permanently erased immediately after that time period, then my stream of consciousness skips over that time period…

…implies that if I am consciously aware right now (I am), then my stream of consciousness is not skipping over this time period, and my memory of this time period will not be immediately permanently erased…

…seems to imply eternal consciousness.

There is a correspondence between the history dependence inherent in physical state evolutions (that prevents physical reversibility) and the history dependence of conscious state evolutions.  In this post and this post (among others), I discuss the history dependence of conscious states, which implies that a person cannot re-experience an earlier conscious state.  (I came to a related conclusion – that special relativity requires that conscious states cannot be physically copied or created de novo – in this paper.)  Therefore, not only does my experience of HOP preclude the possibility of returning my body/brain to an earlier physical state, it also precludes the possibility of my returning to an earlier conscious state.  

A couple of questions then arise:

·       Is there a way to permanently and completely erase one’s memories of an event without returning the person’s body/brain to their exact physical state prior to the event (which is impossible)?  Without returning the person to their exact conscious state prior to the event (which is likewise impossible)?

·       Why the fixation on memories?  I used the HOP example because it’s so hard to imagine having an otherwise very memorable and intense 4-hour orgasm and then to immediately and permanently forget it.  But maybe the memory created by a conscious experience need not be the kind of explicit visualization we often associate with a memory (like envisioning the faces of the people who yelled “Surprise!” on your birthday), but rather something that affects future conscious experiences.  This notion is much more consistent with my insight that conscious states are history dependent (and embed their own history).

·       Imagine that my first conscious state was C1.  Whatever existed before that… let’s call it C0, which is certainly a state of no consciousness.  If it’s impossible to return to an earlier conscious state, then it’s impossible for me to return to state C1.  But what about C0?  And wouldn’t any state of no consciousness be identical to C0?  In some ways, I think this is just another way of saying that it’s impossible for me to (consciously) experience a state of unconsciousness, which seems both obvious and circular.  On the other hand, this may underscore the deeper insight that a conscious perception cannot subjectively end because there is no time at which that end is subjectively experienced.

·       That raises a deeper conundrum about the nature of “now”: what is now, why is it now, and by whose observation? 

_________________________________________________

“The House of Pleasure”

It’s a Saturday night and a guy is walking to a party.  On the way, he notices something he hasn’t seen before: a neon sign obnoxiously blinking “The House of Pleasure.”  Intrigued, he approaches the doorman. 

“That’ll be $100, sir.”

“What?  That’s crazy!  What is this place?”

“Oh,” the doorman says with a glimmer in his eye, “you’ve never been to The House of Pleasure?  Let me explain.  After you pay me and walk in, your brain will be scanned to identify everything that you subjectively enjoy: physically, sexually, emotionally, and intellectually.  You’ll then spend the next four hours experiencing pure, untainted pleasure based on your personal desires.  Whatever you enjoy most about life, you will experience intensely and without interruption for four hours.  Think of it as a four-hour spiritual orgasm.”

“Incredible!  This sounds great…”

“However,” the doorman warned, “there’s a catch.  When you leave after four hours, your brain will be scanned again.  It will be returned to the exact physical state it started in when you first entered.  In other words, your memory of the experience will be completely erased.  Also, your body will be returned to its original state, so any feelings of physical euphoria will likewise be eliminated.”

Should the man enter The House of Pleasure?  Assuming he could have spent the evening at a party where he would have formed lasting memories, there is both a time and a memory cost to the HOP.  Further, does the entrance fee affect whether or not the man should enter? 

My take on it is this.  If he enters HOP, his stream of consciousness experiences walking through the entrance and then immediately walking out the exit, four hours later.  In essence, his consciousness perceives nothing; it’s as if no time has passed.  He walks in and then out feeling exactly the same way, as if it never happened, except that he is out $100 and four hours’ time.

But my intuition, if correct, is problematic, because his perception of the experience depends on what happens afterward.  That his stream of consciousness seems to skip over the time at HOP depends on an event (the erasure of his memories) that occurs after leaving HOP.

My intuition further seems to imply the following oddity: If my memory of a time period will be permanently erased immediately after that time period, then my stream of consciousness skips over that time period.  Equivalently (contrapositive), if my stream of consciousness does not skip over a time period, then my memory of that time period will not be permanently erased immediately after that time period.

The above statement is strange in part because it implies that if I am consciously aware right now (I am), then my stream of consciousness is not skipping over this time period, and my memory of this time period will not be immediately permanently erased.  But, if true, I can never reach the moment just before my conscious death, because that conscious moment just before my conscious death requires that that final glimpse of consciousness not be immediately permanently erased.  In other words, my intuition regarding the House of Pleasure seems to imply eternal consciousness.

Sunday, May 30, 2021

Physics, Immortality, and the Afterlife

What does physics tell us about the possibility of immortality or an afterlife?

First, let’s address the elephant in the room.  To the physics community, “afterlife” often implies “religion,” which in turn implies “stupidity.”  Bullies like Richard Dawkins have made it very clear that anyone who even suggests the existence of God or an afterlife is intellectually inferior.  Oddly, this assertion directly conflicts with several arguments based on currently-understood physics (and often made, ironically, by atheists) that immortality is possible.  I’ll discuss some of these arguments below.  Importantly, any physicist who tells you with certainty that there is no afterlife is not only mistaken, but is ignorant of the direct logical implications of his/her own beliefs about physics.

Note: “Afterlife” and “immortality” are not technically the same.  Immortality might be interpreted as “never dying,” while an afterlife might be interpreted as “consciousness after one has died.”  However, from a physics standpoint, this is often a distinction without a difference.  For example, if mind uploading is possible, then it can be done before or after a person’s brain is dead.

Postponing Death

My close friend (and former MIT debate champion) once made the following (valid) argument:

·       Technology (e.g., medicine) is allowing humans to live longer and longer.

·       There is some tiny but nonzero probability 0<p<<1 that we will develop the technology to indefinitely postpone death.  For example, imagine that in the next 50 years we figure out how to make humans live to age 150, and in the following 50 years we figure out how to make humans live to age 200, and so on.  Then someone born today could indefinitely postpone death.

·       The universe will continue expanding forever.  (Most cosmologists believe the universe is spatially flat and dominated by dark energy, so its expansion will never stop.)

·       p * ∞ = ∞.

·       Therefore, the life expectancy of a person born today is infinite!
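To spell out the expected-value step, writing L_finite (a label of my own, just for this sketch) for the finite life expectancy in the case where the death-postponing technology never arrives:

E[lifespan] = (1 − p) * L_finite + p * ∞ = ∞, for any p > 0.

However small p is, the infinite branch dominates the average; that is all the p * ∞ = ∞ step is saying.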

The argument applies equally to an afterlife as to immortality if we simply replace the second statement with “There is some tiny but nonzero probability 0<p<<1 that we will develop the technology to reanimate a dead person’s brain.”  (After all, that’s why the quacks at the Brain Preservation Foundation recommend cryogenic freezing of one’s brain, which I’m certain is not cheap.)  If we can agree that it is at least physically possible to indefinitely delay death (or to reanimate a dead brain), then physical immortality/afterlife cannot be ruled out.

Mind uploading

If you can upload your conscious awareness onto a computer, then you can live forever, because a computer can be indefinitely operated and repaired.  Okay, okay, you still need energy to flow, which will stop when the universe experiences its predicted heat death in at least a googol years.  Even so, Michio Kaku, in his fascinating book Parallel Worlds, points out that conscious awareness would slow down commensurately with the decreased energy transfer, so that one’s subjective conscious experience wouldn’t notice.

Again, there’s no difference here between “immortality” and “afterlife” since the fundamental assumption of mind uploading (and algorithmic consciousness) is that a conscious state is just software running on a computer, and that can be done long after one’s death.  (It can also be done before one’s death, which leads to all kinds of ridiculous paradoxes that I discuss in this paper.) 

The problem is that I showed in this paper (and this) that consciousness is not algorithmic, which means it cannot be uploaded to or executed by a computer (whether digital or quantum).  Mind uploading is not physically possible.

Quantum Suicide

Max Tegmark, proponent of the wacky and unscientific Many Worlds Interpretation (“MWI”) of quantum mechanics, proposed the notion of quantum suicide (although really it’s just “quantum death” because it applies independently of intention) as an empirical test of MWI.  The idea is this:

·       Stand in front of a “quantum gun” that is designed so that when the trigger is pulled, whether a bullet actually fires from the gun (and kills you) depends on the outcome of a quantum mechanics (“QM”) event.

·       Universal linearity of QM – i.e., the assumption of U which I dispute here – implies that the quantum event entangles with the bullet, which entangles with you, to produce a Schrodinger’s-Cat-like state involving you in a superposition of states |dead> and |alive>. 

·       If MWI is correct, then both states actually occur/exist (albeit in different “worlds” that are exceedingly unlikely to quantum mechanically interfere).

·       Since you cannot consciously survive death – a huge assumption! – then the only state you can consciously observe is the one involving |alive>, which means that you are guaranteed to “survive” the pull of the quantum gun trigger.

·       You can repeat this as many times as you want, and every time you will be guaranteed to observe the outcome in which you are alive.
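The selection effect doing the work in that argument is easy to simulate.  Here is a toy Python sketch (the numbers and the 50/50 trigger odds are mine, purely for illustration), treating each simulated "world" as one branch of ten trigger pulls:

import random

random.seed(0)
PULLS, WORLDS = 10, 100_000   # toy numbers

# True means the experimenter survived every pull in that branch.
alive = [all(random.random() < 0.5 for _ in range(PULLS)) for _ in range(WORLDS)]

# Unconditional survival rate: ~ (1/2)**10, i.e., almost every branch is dead.
print(sum(alive) / WORLDS)

# Tegmark's claim is a pure selection effect: among branches that still contain
# a conscious observer, the "observed" survival rate is 1 by construction.
survivors = sum(alive)
print(survivors / survivors if survivors else 0.0)

Conditioning on there being an observer left to do the observing does all the work – which is why the “huge assumption” flagged above matters so much.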

The argument is wrong for several reasons that I point out in the Appendix of this paper.  One problem is that every chance event is fundamentally the amplification of a quantum event, which means that essentially every death-causing event is akin to quantum suicide/death.  But that means that, if Tegmark’s argument is correct, then nobody can actually die.

But the main problem is that the argument depends on the unjustified and irrational assumption of U.  If Schrodinger’s Cat and Wigner’s Friend can’t exist, then the quantum suicide experiment can never get off the ground. 

Boltzmann Brain

The Boltzmann Brain concept is the notion that, given enough time, every physical state will repeat itself.  Or: due to random quantum fluctuations, given enough time, every possible physical configuration that can fluctuate into existence will fluctuate into existence.  So, eventually, even trillions of years after humanity has gone extinct, the conscious state you are experiencing at this moment will be recreated (and presumably re-experienced) again.  And again and again. 

Whether such experiences count as immortality or afterlife makes no difference, because the Boltzmann Brain concept is impossible.  I showed in this paper that physical instantiations of the same conscious state cannot exist at different points in spacetime. 

Tipler’s Physics of Immortality

Frank Tipler is a Christian who was largely ostracized from the academic world for showing how what is currently understood about physics could support the notions of a Christian God and afterlife.  His book, The Physics of Immortality, is interesting but dense.  At the risk of oversimplifying his analysis, I think his fundamental argument is really the Boltzmann Brain in disguise.  He relies heavily on the Bekenstein Bound (which I discussed here) and the notion of Eternal Recurrence to show that consciousness cannot end.

His analysis is wrong for several reasons.  First, the Bekenstein Bound assumes the constancy of the informational content in a given volume (i.e., that Planck’s constant is constant, which may be false).  Second, and more importantly, it assumes that a conscious state is just a list of numbers (even if it’s a huge list) that must be contained within the volume of a brain, and therefore that consciousness is algorithmic, which I’ve shown is false.  If consciousness cannot be reduced to a set of bits (and/or their algorithmic manipulation), then the number of bits that can fit in a given volume is irrelevant.

Reversibility

If physical systems are truly reversible, then – at least according to some – it should be possible in principle to physically reverse a person’s death (or even some or all of a person’s life).  It’s certainly not obvious that constantly “undoing” someone’s death results in a meaningful kind of immortality.  Still, maybe the goal of reversing someone’s death is to copy their consciousness into another physical system or upload it to a computer.  But I’ve already pointed out several times that this is impossible.  Either way, the entire argument is moot because physical reversibility of large systems is a logical contradiction.

What do I actually believe?

First, let me point out the crazy irony of this blog post so far.  I, the crackpot who believes in God, am trying to explain why several physicists’ arguments for immortality or afterlife are wrong!  In fact, the only one of the above arguments that I can’t completely rule out is Postponing Death, even though I regard it as extremely implausible to postpone death indefinitely.

Having said that, we do not understand consciousness.  No physicist, neurobiologist, physician, psychologist, computer scientist, philosopher, etc., understands consciousness, and anyone who claims to understand it is likely introducing unstated assumptions.  For example, most scientists who academically discuss consciousness assume that consciousness is created entirely by the brain, which is why the hackneyed “brain-in-a-vat” thought experiment – the namesake of this blog – is so pervasive in the literature. 

After all, if consciousness is entirely a product of the brain (or, more generally, of a local region of spacetime that encloses the brain), then the above arguments are a lot more tenable.  That is, if a conscious state supervenes on a physical state that is entirely (locally) contained in some volume, then Tipler’s argument based on the Bekenstein Bound seems to apply; the total information specifying that conscious state is finite and, as nothing more than a string of numbers, can be copied and uploaded to a computer; and so forth.

In fact, given so many physical arguments for immortality, one might even wonder what physical arguments there are against immortality.  There is only one: assume that consciousness is entirely a product of the (living) brain; then death of the brain ends consciousness.  And if that assumption is wrong, then there is literally no existing scientific evidence against immortality or an afterlife.

But that assumption is wrong.  Conscious states are not local and they cannot be copied to different places in spacetime.  (Stoica makes a related and fascinating argument here that mental states are nonlocal.)  If they’re nonlocal, then they must physically depend on nonlocal (quantum) entanglements among objects and particles throughout the universe.  That is, what physically specifies my conscious state logically must extend beyond my brain.  There is certainly no doubt that my brain affects my consciousness, but it cannot be entirely locally responsible for it.  The fact that events and physical relationships that extend far beyond my brain are at least partially responsible for my consciousness leads me to surmise that these conscious-identity-producing physical relationships will persist long after the atoms in my brain are no longer arranged in their current configuration.  This is the beginning of an as-yet-undeveloped physical argument for immortality/afterlife.

So, what do I really believe about an afterlife?  I won’t mince words.  I am certain that my consciousness is eternal; I am certain that my conscious awareness will not permanently end if/when my brain dies.  In future posts, I will give logical and physical arguments to support these assertions, but I wanted first to devote a blog post to what currently-understood physics implies. 

Thursday, May 20, 2021

Quantum Computing is 99% Bullshit

 

In this post, just before beginning a class on quantum computing at NYU, I predicted that scalable quantum computing ("SQC") is in fact impossible in the physical world.

I was right.

And I can finally articulate why.  The full explanation (“Scalable Quantum Computing is Impossible”) is posted here and in the following YouTube video.

Here is the general idea.  Let me make a few assumptions:

·       A system is not “scalable” in T (where T might represent, for example, total time, number of computation steps, number of qubits, number of gates, etc.) if the probability of success decays exponentially with T.  In fact, the whole point of the Threshold Theorem (and fault-tolerant quantum error correction (“FTQEC”) in general) is to show that the probability of success of a quantum circuit can be made arbitrarily close to 100% with “only” a polynomial increase in resources.  (A rough numerical illustration of that scaling claim follows this list.)

·       Quantum computing is useless without at least a million or a billion controllably entangled physical qubits, which is among the more optimistic estimates for useful fault-tolerant quantum circuits.  (Even "useful" QC isn’t all that useful, limited to a tiny set of very specific problems.  Shor’s Algorithm, perhaps the most famous of the algorithms believed to be exponentially faster on a quantum computer than on any classical computer, won’t even be useful if and when it can be implemented, because information encryption technology will simply stop making use of prime factorization!)

o   There are lots of counterarguments, but they’re all desperate attempts to save QC.  “Quantum annealing” is certainly useful, but it’s not QC.  Noisy Intermediate-Scale Quantum (“NISQ”) is merely the hope that we can do something useful with the 50-100 shitty, noisy qubits that we already have.  For example, Google’s “quantum supremacy” demonstration did absolutely nothing useful, whether or not it would take a classical computer exponential time to do a similarly useless computation.  (See the “Teapot Problem.”)
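To make the Threshold Theorem’s scaling claim in the first assumption concrete, here is a rough back-of-the-envelope sketch in Python.  Every number below is a stand-in I chose for illustration (the post doesn’t supply any), and the formula is the standard textbook concatenated-code estimate:

import math

p = 1e-3       # assumed physical error rate per gate (stand-in)
p_th = 1e-2    # assumed threshold error rate (stand-in)
N = 1e9        # logical gates in the target circuit (stand-in)
delta = 0.01   # acceptable overall failure probability (stand-in)
G = 100        # physical gates per logical gate added by each concatenation level (stand-in)

# Textbook concatenated-code estimate: after k levels of concatenation the
# logical error rate per gate is roughly p_th * (p / p_th) ** (2 ** k).
# Find the smallest k that pushes the per-gate error below delta / N.
target = delta / N
k = math.ceil(math.log2(math.log(target / p_th) / math.log(p / p_th)))
overhead = G ** k   # physical gates per logical gate -- polylogarithmic in N / delta

print(k, overhead)  # with these numbers: 4 levels, ~10^8 physical gates per logical gate

Whether the assumptions feeding that estimate – reversible noise above all – actually hold is exactly what the rest of this post disputes.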

Given these assumptions, what do I actually think about the possibility of SQC?

First of all, what reasons do we have to believe that SQC is possible at all?  Certainly the thousands of peer-reviewed publications, spanning the fields of theoretical physics, experimental physics, computer science, and mathematics, that endorse SQC, right?  Wrong.  As I pointed out in my last post, there is an unholy marriage between SQC and the Cult of U, and the heavily one-sided financial interest propping them up is an inherent intellectual conflict of interest.  Neither SQC nor FTQEC has ever been experimentally confirmed, and even some of their most vocal advocates are scaling back their enthusiasm.  The academic literature is literally full of falsehoods, my favorite one being that Shor’s Algorithm has been implemented on a quantum computer to factor the numbers 15 and 21.  (See, e.g., p. 175 of Bernhardt’s book.)  It hasn’t. 

Second, SQC depends heavily on whether U (the assumption that quantum wave states always evolve linearly or unitarily… i.e., that wave states do not “collapse”) is true.  It is not true, a point that I have made many, many times (here here here here here here etc.).  Technically, useful QC might still be possible even if U is false, as long as we can controllably and reversibly entangle, say, a billion qubits before irreversible collapse happens.  But here’s the problem.  The largest double-slit interference experiment (“DSIE”) ever done was on an 810-atom molecule.  I’ll discuss this more in a moment, but this provides very good reason to think that collapse would happen long before we reached a billion controllably entangled qubits.

Third, the Threshold Theorem and theories of QEC, FTQEC, etc., all depend on a set of assumptions, many of which have been heavily criticized (e.g., Dyakonov).  But not only are some of these assumptions problematic, they may actually be logically inconsistent… i.e., they can’t all be true.  Alicki shows that noise models assumed by the Threshold Theorem assume infinitely fast quantum gates, which of course are physically impossible.  And Hagar shows that three of the assumptions inherent in TT/FTQEC result in a logical contradiction.  Given that FTQEC has never been empirically demonstrated, and that its success depends on theoretical assumptions whose logical consistency is assumed by people who are generally bad at logic (which I’ve discussed in various papers (e.g., here and here) and in various blog entries (e.g., here and here)), I’d say their conclusions are likely false.

But here’s the main problem – and why I think that SQC is in fact impossible in the real world:

Noise sometimes measures, but QC theory assumes it doesn't.

In QC/QEC theory, noise is modeled as reversible, which means that it is assumed to not make permanent measurements.  (Fundamentally, a QC needs to be a reversible system.  The whole point of QEC is to “move” the entropy of the noise to a heat bath so that the evolution of the original superposition can be reversed.  I pointed out here and here that scientifically demonstrating the reversibility of large systems is impossible as a logical contradiction.)  This assumption is problematic for two huge reasons.

First, measurements are intentionally treated with a double standard in QC/QEC theory.  The theory assumes (and needs) measurement at the end of computation but ignores it during the computation.  The theory's noise models literally assume that interactions with the environment that occur during the computation are reversible (i.e., not measurements), while interactions with the environment that occur at the end of the computation are irreversible measurements, with no logical, mathematical, or scientific justification for the distinction.  This is not an oversight: QEC cannot correct irreversible measurements, so proponents of QEC are forced to assume that unintended interactions are reversible but intended interactions are irreversible.  Can Mother Nature really distinguish our intentions?  

Second, and more importantly, the history and theory of DSIEs indicate that noise sometimes measures!  All DSIEs have in fact depended on dispersion of an object’s wave packet both to produce a superposition (e.g., “cat” state) and to demonstrate interference effects.  However, the larger the object, the more time it takes to produce that superposition and the larger the cross section for a decohering interaction with particles and fields permeating the universe.  As a result, the probability of success of a DSIE decays exponentially with the square of the object’s mass (p ~ e^(-m²)), which helps to explain why, despite exponential technological progress, we can't yet do a DSIE on an object having 1000 atoms, let alone a million or a billion.  What this means is that DSIEs are not scalable, and the fundamental reason for this unscalability – a reason which seems equally applicable to SQC – is that noise at least sometimes causes irreversible projective measurements.

This is fatal to the prospect of scalable quantum computing.  If a single irreversible measurement (even if such an event is rare) irreparably kills a quantum calculation, then the probability of success decays exponentially with T, which by itself implies that quantum computing is not scalable.  But DSIEs demonstrate that not only does noise sometimes cause irreversible measurement, those irreversible measurements happen frequently enough that, despite the very best technology developed over the past century, it is practically impossible to create controllably highly entangled reversible systems larger than a few thousand particles.  
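To put rough numbers on both of those scaling claims – the per-step decay and the e^(-m²) mass dependence – here is a short Python sketch.  The per-step measurement probability q and the 810-atom reference scale are stand-ins of my own; the post asserts only the functional forms:

import math

# (1) If each of T steps carries an uncorrectable probability q of an
#     irreversible measurement, success requires that *no* step measures.
q = 1e-6   # stand-in per-step probability of an irreversible measurement
for T in (10**3, 10**6, 10**9):
    print(T, (1 - q) ** T)   # ~0.999, ~0.37, then ~e^-1000 (prints as 0.0): exponential decay in T

# (2) The claimed DSIE scaling p ~ e^(-m^2), anchored (purely for illustration)
#     to the 810-atom record as the reference mass scale.
m0 = 810.0
for n_atoms in (810, 1_000, 10_000, 1_000_000):
    print(n_atoms, math.exp(-(n_atoms / m0) ** 2))   # e^-1, ~0.2, ~e^-152, then 0 to double precision

On either reading, the success probability falls off exponentially, which is precisely the definition of “not scalable” given at the top of this post.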

Quantum computing is neither useful nor scalable.  

Monday, May 17, 2021

The (Quantum Computing) House of Cards

In the physics community, there is a house of cards built upon a nearly fanatical belief in the universality of quantum mechanics – i.e., that quantum wave states always evolve in a linear or unitary fashion.  Let’s call this fanaticism the Cult of U.

When I began this process a couple years ago, I didn’t realize that questioning U was such a sin, that I could literally be ostracized from an “intellectual” community by merely doubting U.  Having said that, there are a few doubters, not all of whom have been ostracized.  For instance, Roger Penrose, one of the people I most admire in this world, recently won the Nobel Prize in Physics, despite his blatant rejection of U.  However, he rejected U in the only way deemed acceptable by the physics community: he described in mathematical detail the exact means by which unitarity may be broken, and conditioned the rejection of U on the empirical confirmation of his theory.  As I describe in this post, Penrose proposes gravitational collapse of the wave function, a potentially empirically testable hypothesis that is being explored at the Penrose Institute.  In other words, he implicitly accepts that: a) U should be assumed true; and b) it is his burden to falsify U with a physical experiment.

I disagree.  In the past year, I’ve attempted (and, I believe, succeeded) to logically falsify U – i.e., by showing that it is logically inconsistent and therefore cannot be true – in this paper and this paper.  I also showed in this paper why U is an invalid inference and should never have been assumed true.  Setting aside that they have been ignored or quickly (and condescendingly) dismissed by nearly every physicist who glanced at them, all three were rejected by arXiv.org.  This is both weird and significant.

The arXiv is a preprint server, specializing in physics (although it has expanded to other sciences), supported by Cornell University, that is not peer reviewed.  The idea is simply to allow researchers to quickly and publicly post their work as they begin the process of formal publication, which can often take years.  Although not peer-reviewed, arXiv does have a team of moderators who reject “unrefereeable” work: papers that are so obviously incorrect (or just generally shitty) that no reputable publisher would even consider them or send them to referees for peer review.  Think perpetual motion machines and proofs that we can travel faster than light.

What’s even weirder is that I submitted the above papers under the “history and philosophy of physics” category.  Even if a moderator thought the papers did not contain enough equations for classification in, say, quantum physics, on what basis could anyone say that they weren’t worthy of being refereed by a reputable journal that specializes in the philosophy of physics?  For the record, a minor variation of the second paper was in fact refereed by Foundations of Physics, and the third paper was not only refereed, but was well regarded and nearly offered publication by Philosophy of Science.  Both papers are now under review by other journals.  No, they haven’t been accepted for publication anywhere yet, but arXiv’s standard is supposed to be whether the paper is at least refereeable, not whether a moderator agrees with the paper’s arguments or conclusions! 

It was arXiv’s rejection of my third paper (“The Invalid Inference of Universality in Quantum Mechanics”) that made it obvious to me that the papers were flagged because of their rejection of U.  This paper offers an argument about the nature of logical inferences in science and whether the assumption of U is a valid inference, an argument that was praised by two reviewers at a highly rated journal that specializes in the philosophy of physics.  No reasonable moderator could have concluded that the paper was unrefereeable. As a practical matter, it makes no difference, as there are other preprint servers where I can and do host my papers.  (I also have several papers on the arXiv, such as this – not surprisingly, none of them questions U.)

But the question is: if my papers (and potentially others’ papers) were flagged for their rejection of U… why?!

You might think this is a purely academic question.  Who cares whether or not quantum wave states always evolve linearly?  For example, the possibilities of Schrodinger’s Cat and Wigner’s Friend follow from the assumption of U.  But no one actually thinks that we’ll ever produce a real Schrodinger’s Cat in a superposition state |dead> + |alive>, right?  This is just a thought experiment that college freshmen like to talk about while getting high in their dorms, right? 

Is it possible that there is a vested interest… perhaps a financial interest… in U?

Think about some of the problems and implications that follow from the assumption of U.  Schrodinger’s Cat and Wigner’s Friend, of course, but there’s also the Measurement Problem, the Many Worlds Interpretation of quantum mechanics, the black hole information paradox, physical reversibility, and – oh yeah – scalable quantum computing. 

Since 1994, with the publication of Shor’s famous algorithm, untold billions of dollars have flowed into the field of quantum computing.  Google, Microsoft, IBM, and dozens of other companies, as well as the governments of many countries, have poured ridiculous quantities of money into the promise of quantum computing. 

And what is that promise?  Well, I have an answer, which I’ll detail in a future post.  But here’s the summary: if there is any promise at all, it depends entirely on the truth of U.  If U is in fact false, then a logical or empirical demonstration that convincingly falsifies U (or brings it seriously into question) would almost certainly be catastrophic to the entire QC industry. 

I’m not suggesting a conspiracy theory.  I’m simply pointing out that if there are two sides to a seemingly esoteric academic debate, but one side has thousands of researchers whose salaries and grants and reputations and stock options depend on their being right (or, at least, not being proven wrong), then it wouldn’t be surprising to find their view dominating the literature and the media.  The prophets of scalable quantum computing have a hell of a lot more to lose than the skeptics.

That would help to explain why the very few publications that openly question U usually do so in a non-threatening way: accepting that U is true until empirically falsified.  For example, it will be many, many years before anyone will be able to experimentally test Penrose’s proposal for gravitational collapse.  Thus it would be all the more surprising to find articles in well-ranked, peer-reviewed journals that question U on logical or a priori grounds, as I have attempted to do.

Quoting from this post:

As more evidence that my independent crackpot musings are both correct and at the cutting edge of foundational physics, Foundations of Physics published this article at the end of October that argues that “both unitary and state-purity ontologies are not falsifiable.”  The author correctly concludes then that the so-called “black hole information paradox” and SC disappear as logical paradoxes and that the interpretations of QM that assume U (including MWI) cannot be falsified and “should not be taken too seriously.”  I’ll be blunt: I’m absolutely amazed that this article was published, and I’m also delighted. 

Today, I’m even more amazed and delighted.  In the past couple of posts, I have referenced an article (“Physics and Metaphysics of Wigner’s Friends: Even Performed Premeasurements Have No Results”), which was published in perhaps the most prestigious and widely read physics journal, Physical Review Letters, but only in the past few days have I really understood its significance.  (The authors also give a good explanation in this video.)

What the authors concluded about a WF experiment is that either there is “an absolutely irreversible quantum measurement [caused by an objective decoherence process] or … a reversible premeasurement to which one cannot ascribe any notion of outcome in logically consistent way.”

What this implies is that if WF is indeed reversible, then he does not make a measurement, which is very, very close to the logical contradiction I pointed out here and in Section F of this post.  While the authors don’t explicitly state it, their article implies that U is not scientific because it cannot (as a purely logical matter) be empirically tested at the size/complexity scale of WF.  This is among the first articles published in the last couple decades in prestigious physics journals that make a logical argument against U.

What’s even more amazing about the article is that it explicitly suggests that decoherence might result in objective collapse, which is essentially what I realized in my original explanation of why SC/WF are impossible in principle, even though lots of physicists have told me I’m wrong.  Further, the article openly suggests a relationship between (conscious) awareness, the Heisenberg cut between the microscopic and macroscopic worlds, and the objectivity of wave function collapse below that cut.  All in an article published in Physical Review Letters!

Now, back to QC.  After over two decades of hype that the Threshold Theorem would allow for scalable quantum computing (by providing for fault-tolerant quantum error correction (“FTQEC”)), John Preskill, one of the most vocal proponents of QC and original architects of FTQEC, finally admitted in this 2018 paper that “the era of fault-tolerant quantum computing may still be rather distant.”  As a consolation prize, he offered up NISQ, an acronym for Noisy Intermediate-Scale Quantum, which I would describe as: “We’ll just have to try our best to make something useful out of the 50-100 shitty, noisy, non-error-corrected qubits that we’ve got.”

Despite what should have been perceived as a huge red flag, more and more money keeps flowing into the QC industry, leading Scott Aaronson to openly muse just two months ago about the ethics of unjustified hype: “It’s genuinely gotten harder to draw the line between defensible optimism and exaggerations verging on fraud.”

Fraud??!!

The quantum computing community and the academic members of the Cult of U are joined at the hip, standing at the top of an unstable house of cards.  When one falls, they all do.  Here are some signs that their foundation is eroding:

·       Publication in reputable journals of articles that question or reject U on logical bases (without providing any mathematical description of collapse or means for empirically confirming it).

·       Hints and warnings among leaders in the QC industry that promises of scalable quantum computing (which inherently depends on U) are highly exaggerated.

I am looking forward to the day when the house of cards collapses and the Cult of U is finally called out for what it is.

Friday, May 14, 2021

Another Comment on “Physical Reversibility is a Contradiction”

Scott Aaronson, whose argument on reversibility of quantum systems I mentioned in this post, responded to it (and vehemently disagreed with it).  Here is his reply:

Your argument is set out with sufficient clarity that I can unequivocally say that I disagree.

Reversibility is just a formal property of unitary evolution.  As such, it has the same status as countless other symmetries of the equations of physics that seem to be broken by phenomena (charge, parity, even just Galilean invariance).  I.e., once you know that the equations have some symmetry, you then reframe your whole problem as how it comes about that observed phenomena break the symmetry anyway.

And in the case of reversibility, I find the usual answer -- that it all comes down to the Second Law, or equivalently, the "specialness" of the universe's past state -- to be really compelling.  I don't see anything wrong with that answer.  I don't think there's something obvious here that the physics community has overlooked.

And yes, you can confirm by experiments that dynamics are reversible. To do so, you (for example) apply a unitary transformation U to an initial state |Ψ>.  You then CHOOSE whether to
(1) apply U^-1, the inverse transformation, and check that the state returned to |Ψ>, or

(2) measure immediately (in various bases that you can choose on the fly), in order to check if the system is in the state U|Ψ>.

Provided we agree that Nature had no way to know in advance whether you were going to apply (1) or (2), the only way to explain all the results -- assuming they're the usual ones predicted by QM -- is that |Ψ> really did get mapped to U|Ψ>, and that that map was indeed reversible.  In your post, you briefly entertain this obvious answer (when you talk about lots of identically prepared systems), but reject it on the grounds that making identical systems is physically impossible.

And yet, things equivalent to what I said above -- by my lights, a "direct demonstration of reversibility" -- are now ROUTINELY done, with quantum states of thousands of atoms or even billions of electrons (as with superconducting qubits).  Of course, maybe something changes between the scale of a superconducting qubit and the scale of a cat (besides the massive increase in technological difficulty), but I'd say the burden is firmly on the person proposing that to explain where the change happens, how, and why.


I sincerely appreciated his response... and of course disagree with it!  I’m going to break this down to several points:

You then CHOOSE whether to
(1) apply U^-1, the inverse transformation, and check that the state returned to |Ψ>,

First, I think he is treating U^-1 as a sort of deus ex machina.  If you don’t know whether a system is reversible, or how it can be reversed, just reduce it all down to a mathematical symbol corresponding to an operator (such as H, for Hamiltonian) and its inverse, despite the fact that this single operator might correspond to complicated and correlated interactions between trillions of trillions of degrees of freedom.  Relying on oversimplified symbol manipulation makes it harder to pinpoint potentially erroneous assumptions about the physical world.

Second, and more importantly, if you apply U^-1, you cannot check that the state returned to |Ψ>.  Maybe (MAYBE!) you can check to see that the state is |Ψ>, but you cannot check to see that it “returned” to that state.  And while you may think I’m splitting hairs here, this point is fundamental to my argument, and his choice of this language indicates that he really doesn’t understand the argument, despite his compliment that I had set it out “with sufficient clarity.”

The reason you cannot check to see if the state “returned” to |Ψ> is because that requires knowing that the state was in U|Ψ> at some point.  But you can’t know that, nor can any evidence exist anywhere in the universe that such an evolution occurred, because then the state would no longer be reversible.  (You also can’t say that the state was in U|Ψ> by asserting that, “If I had measured it, prior to applying U^-1, then I would have found it in state U|Ψ>,” because measurements that are not performed have no results.  This is the “counterfactuals” problem in QM that confuses a lot of physicists, as I pointed out in this paper on the Afshar experiment.)  So if you actually apply U and then U^-1 to an isolated system, this is scientifically indistinguishable from having done nothing at all to the system. 
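To be clear about what is and isn’t in dispute here: mathematically, applying U and then U^-1 returns exactly |Ψ>, so the final state itself carries no record that U was ever applied.  A minimal numerical check in Python/NumPy (random state and unitary, purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
n = 8

# A random normalized state |psi> and a (roughly Haar-)random unitary U.
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
q, r = np.linalg.qr(z)
U = q * (np.diag(r) / np.abs(np.diag(r)))   # fix the phases of the QR factor

reversed_state = U.conj().T @ (U @ psi)     # apply U, then U^-1
print(np.allclose(reversed_state, psi))     # True: indistinguishable from never applying U

The dispute is not over that identity; it’s that the identity is precisely why a system that has actually been reversed carries no scientific evidence that the intermediate state U|Ψ> ever existed.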

or
(2) measure immediately (in various bases that you can choose on the fly), in order to check if the system is in the state U|Ψ>.  …In your post, you briefly entertain this obvious answer (when you talk about lots of identically prepared systems), but reject it on the grounds that making identical systems is physically impossible.  And yet, things equivalent to what I said above -- by my lights, a "direct demonstration of reversibility" -- are now ROUTINELY done, with quantum states of thousands of atoms or even billions of electrons (as with superconducting qubits). 

In this blog post, I pointed out that identity is about distinguishability.  I didn’t say that it’s impossible to make physically identical systems.  It’s easy to make two electrons indistinguishable.  By cooling them to near absolute zero, you can even make lots of electrons indistinguishable.  But the only way to create Schrodinger’s Cat is to create two cats that even the universe can’t distinguish – i.e., not a single bit of information in the entire universe can distinguish them.  In other words, for Aaronson's argument (about superpositions of billions of electrons in superconducting qubits) to have any relevance to the question of SC, we would have to be able to create a cat out of fermions that even the universe can’t distinguish. 

Tell me how!  Don't just tell me that this is a technological problem that the engineers need to figure out.  And do it without resorting to mathematical symbol manipulation.  I'll make it "easy."  Let's just start with a single hair on the cat's tail.  Simply explain to me how the wave function of that single hair could spread sufficiently (say, 1mm) to distinguish a dead cat from a live cat.  Or, equivalently, explain to me how the wave functions of two otherwise identical hairs, separated by 1mm, could overlap.  Tell me how to do this in the actual universe in which even the most remote part of space is still constantly bombarded with CMB, neutrinos, etc.  So far, no one has ever explained how to do anything like this.

Of course, maybe something changes between the scale of a superconducting qubit and the scale of a cat (besides the massive increase in technological difficulty), but I'd say the burden is firmly on the person proposing that to explain where the change happens, how, and why.

I strongly disagree!  As I point out in “The Invalid Inference of Universality in Quantum Mechanics,” the assumption that QM always evolves in a unitary/reversible manner is an unjustified and irrational belief.  Anyway, my fundamental argument about reversibility, which apparently wasn’t clear, is perhaps better summarized as follows:

1)     You cannot confirm the reversibility of a QM system by actually reversing it, as it will yield no scientifically relevant information.

2)     The only way to learn whether a system has evolved to U|Ψ> is to infer that conclusion by doing a statistically significant number of measurements on physically identical systems.  That’s fine for doing interference experiments on photons and Buckyballs, but not cats. 


Wednesday, April 28, 2021

Comment on "Physical Reversibility is a Contradiction"

Someone famous in the field of philosophy of mind (although I’m not at liberty to say) asked me the following question regarding my most recent blog post on the logical contradiction of quantum mechanical reversibility:

If one can't prove that Schrodinger’s Cat was in a superposition, I presume the same goes for “Schrodinger’s Particle.”  But we seem to get that evidence all the time in interference experiments.  Are particles different in principle from cats, or what else is going on?

 Here’s my reply:

That's kind of a technical question about how superpositions are "seen."  Of course, we never see a superposition... that's the heart of the measurement problem.  

What we do in a typical double-slit interference experiment is start with a bunch of "identically prepared" particles and then measure them on the other side of the slits.  The distribution we get is consistent with the particles having been in a linear superposition at the slits, where the amplitudes are complex numbers.  The fact that the amplitudes are complex numbers allows them to cancel – the "negative" interference terms that are at the heart of (the mathematics of) QM.

The key is that no particular particle is (or can be) observed in superposition... rather, it's from the measurement of lots of identically prepared particles that we infer an earlier superposition state.
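To make the “inference from many identically prepared particles” point concrete, here is a toy far-field two-slit model in Python/NumPy (all numbers arbitrary).  A superposition at the slits predicts the fringe pattern |phi1 + phi2|^2, while an incoherent mixture predicts |phi1|^2 + |phi2|^2 with no fringes – and only by accumulating many detections can you tell which distribution you are sampling from:

import numpy as np

x = np.linspace(-5, 5, 1001)      # detector positions (arbitrary units)
d, lam, L = 1.0, 0.1, 10.0        # slit separation, wavelength, screen distance (arbitrary)
k = 2 * np.pi / lam

# Complex amplitude reaching x from each slit (equal weights, toy model).
phi1 = np.exp(1j * k * np.hypot(L, x - d / 2))
phi2 = np.exp(1j * k * np.hypot(L, x + d / 2))

p_super = np.abs(phi1 + phi2) ** 2              # interference fringes
p_mix = np.abs(phi1) ** 2 + np.abs(phi2) ** 2   # flat: no cross terms

# Simulate many identically prepared particles and histogram their hits;
# the fringes emerge only statistically, never in any single detection.
rng = np.random.default_rng(1)
hits = rng.choice(x, size=50_000, p=p_super / p_super.sum())
counts, _ = np.histogram(hits, bins=50)
print(counts)   # oscillating high/low counts: the statistical signature of the superposition

Swap p_mix in for p_super and the oscillation disappears; that contrast, built up over many runs, is the only sense in which the superposition at the slits is ever "seen."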

The problem is that it's technologically (and I would argue, in principle) impossible to create multiple "identically prepared" cats.  If you could, you would just do lots of trials of an interference experiment until you could statistically infer a SC state.  But since you can't, you have to rely on doing a single experiment on a cat, by controlling all its degrees of freedom, so as to reverse any correlations between the cat and the quantum event.  But once you do so (assuming it is even possible), there remains no evidence that the cat was ever in a SC superposition at all.  So, since science depends on evidence, it's not logically possible to scientifically show that a SC ever existed... and no one seems to have addressed this in the literature.

Amazingly, this paper just came out in Physical Review Letters, so it's something that people in the physics community are just now starting to wrap their heads around.  The paper doesn’t go far enough, but it at least points out that if WF makes a “measurement” but then is manipulated to show that WF was in a superposition, then even that “measurement” has no results.