I don’t understand double-slit interference. Do you?


First, some background.  It was found, empirically, that when we send a certain kind of stuff (“particles,” such as photons or electrons) through a very narrow slit in a plane, and we detect them on a screen that is parallel to and far away from the plane, individual particles are detected, and if we detect enough of them, their distribution forms what is called the Fraunhofer diffraction pattern:



In the above example, we assume that the source of the particles is either narrow enough or far enough away that the transverse momentum of the incoming particles is essentially zero, so that the diffraction pattern is primarily the result of the change in transverse momentum caused by passing through the slit.

It was also found, empirically, that if we send the same kind of particles through two narrow slits (say, a left slit and a right slit) in a plane, separated by a small distance, the particles detected on a far-away screen that is parallel to the plane form what is called an interference pattern:
  


Notice that the interference pattern seems like it could fit inside the diffraction pattern shown earlier; this enclosing shape is called the diffraction envelope.  In the above example, the distance between the slits was about four times the slit width, and the greater this ratio, the closer together the peaks inside the diffraction envelope.
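Quantitatively, for slit width a, slit separation d, wavelength λ, and screen distance D (in the far-field limit), the interference peaks are spaced roughly λD/d apart, while the central diffraction envelope has a half-width of roughly λD/a, so the number of peaks that fit inside the envelope grows with the ratio d/a.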

Let me reiterate something: individual particles are detected at the screen.  And if we slow down the experiment enough, we will see individual “blips” on the screen.  None of these blips is, itself, a distribution pattern; rather, the distribution pattern (diffraction in the case of one slit or interference in the case of two) becomes apparent only after measuring lots and lots of blips.

Immediately a problem arises in the case of interference.  Since individual particles are both emitted by the source and detected at the screen, it certainly seems plausible that individual particles pass through the slits.  However, if a particle passes through, say, the left slit, then it should produce a single-slit diffraction pattern, unless the particle somehow “knows” about the existence of the right slit.  Because an interference pattern is actually created, either:
a) The particle, as it passes through the left slit, must “instantly” know about the existence and size of the right slit, which is located some distance away; or
b) It is not the case that the particle passes through the left slit (or the right slit, by similar reasoning).

The problem with a) is nonlocality.  Special Relativity asserts that information cannot travel faster than the speed of light, which implies that instantaneous transfer of information is impossible.  Historically, Special Relativity was proposed by Einstein in 1905, about two decades before the formal creation of quantum wave mechanics.  So option a) was summarily dismissed on the grounds that the path of a particle (in this case the transverse momentum component of a particle passing through the left slit) could not possibly be affected by nonlocal information about the right slit located some arbitrary distance away.

Consequently, we have been stuck, for nearly a century, with option b).  How can it be that no particle passes through the left slit or the right slit, even though a particle was emitted by the source and detected at the screen?  Herein lie both the mathematical beauty and the philosophical wackiness of quantum mechanics.

Essentially, quantum wave mechanics begins by assuming that the likelihood of finding a particle at a location in space is related to the magnitude of a wave at that point.  A free particle in one dimension is described by a sinusoidal plane wave of the form e^(ikx).  In the case of single-slit diffraction, a wave originating at the slit spreads out radially, so that the wave, measured along the transverse direction, varies sinusoidally while its amplitude falls off with the radial distance r from the slit.  When the distance from the slit is determined almost entirely by the distance between the slit and the screen (i.e., the diffraction angle is small, such as 1°), the wave in the x direction varies like sin(αx)/αx (also known as sinc(αx)).  The likelihood of actually detecting a particle on the screen between two locations is then found by integrating the probability distribution ρ(x) = Ψ*(x) Ψ(x), which looks like the experimentally observed Fraunhofer diffraction pattern shown previously.

Now let’s apply this mathematical formalism to the double-slit problem.  We now assume that at the location of the plane of the slits we can represent the (particle?) system as a wave Ψ consisting of the superposition of a left-slit wave ΨL and a right-slit wave ΨR so that Ψ(x) = ΨL(x) + ΨR(x).  The beauty of this equation is that if we now plot the probability distribution ρ(x) = Ψ*(x) Ψ(x), we get what looks like the experimentally observed interference distribution, shown previously.  In other words, if we assume that the system at the location of the slits is not a particle, but a wave that later determines probabilities of detection, then we successfully predict the empirically observed probability distributions.  

The reason this works, mathematically, is that quantum wave mechanics allows “negative” probabilities.  Look back at the interference distribution and choose some place on it where the probability is zero.  If only one slit had been open, the probability of detecting a particle at this point would have been nonzero.  So how is it that by adding another slit – by adding another possible path through which a particle could reach that point – we decrease its likelihood of reaching that point?  The answer, mathematically, is that by adding waves prior to taking their magnitude, terms that are out of phase can cancel each other, resulting in a sort of negative probability.
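Written out, the cancellation is just the cross term in the squared magnitude:

ρ(x) = |ΨL(x) + ΨR(x)|² = |ΨL(x)|² + |ΨR(x)|² + 2 Re[ΨL*(x) ΨR(x)]

The first two terms are what each slit alone would contribute; the interference comes entirely from the last term, which is negative wherever ΨL and ΨR are sufficiently out of phase.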

However, something doesn’t make sense.  Remember that the wave ΨL(x) is associated with a particle that travels through the left slit and wave ΨR(x) is associated with a particle that travels through the right slit.  But what can this possibly mean if we have already assumed that it is not the case that the particle passes through the left slit or the right slit?

This is the very heart of the so-called “measurement problem.”  By localizing the particle (or whatever the hell it is) within the two slits, we assume that it is in superposition Ψ(x) = ΨL(x) + ΨR(x).  But if we subsequently measure the particle as having come from one of the slits (called a “which-way” measurement), then we were wrong about its earlier state.  And if we were right about its earlier state, then it will forever remain in a superposition, unless we allow for nonlinear, irreversible "collapse," whatever the hell that is.

So there is something very weird, and possibly wrong, with option b).  So maybe option a) is right and we just have to accept nonlocality.  After all, quantum entanglement also seems to require nonlocality, so maybe that’s just a fact about the quantum world.  Physicist Yakir Aharonov has written a lot on the topic of nonlocality in quantum measurement, such as this.

By the way, treating a system with two slits as the superposition of two waves still does not solve the nonlocality problem.  After all, consider a single slit of width Δx.  This produces a single-slit Fraunhofer diffraction distribution and certainly no one would object to the assertion that every particle detected on the screen actually passed through the slit.  (Right??)  Of course, if Δx is zero, then there’s no problem with Special Relativity.  However, no slit has zero width, so let’s divide the slit into a left half and a right half.  Now, we can associate a wave with each half and treat each half as producing its own Fraunhofer diffraction envelope having double the width.  The interference between these two waves then produces an interference pattern that, incredibly enough, is identical to the single-slit Fraunhofer diffraction distribution of the entire slit.  In other words, a single-slit diffraction is double-slit interference for side-by-side slits.  So we are again left with options a) and b), even for a single slit.
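The “incredibly enough” here is really just the identity sin(2y) = 2 sin(y) cos(y).  For a slit of width Δx, writing s = sin(θ) for the diffraction angle, the full slit gives an amplitude proportional to Δx·sinc(kΔx·s/2), while two half-slits of width Δx/2 with centers Δx/2 apart give an amplitude proportional to Δx·sinc(kΔx·s/4)·cos(kΔx·s/4) – and these two expressions are equal.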

It is important to remember that the representation of the system as a complex superposition of mutually exclusive possibilities was (and remains) an assumption.  Of course, it is an assumption whose numerical predictions have been empirically tested and confirmed to staggering precision.  However, if there is an understanding of the quantum world that yields the same or better predictions while avoiding sloppy philosophical paradoxes, then might that be preferred?

I’m proposing a different approach.  I do not think it’s original, but frankly, after lots of research, I can’t find this approach described anywhere.

First, consider an experiment that localizes a particle, and let’s assume that the particle actually is located somewhere.  In the case of a single slit, let’s assume that at some time the particle is, in fact, located somewhere in the slit with uniform probability; in the case of the double slit, it is located in either slit with equal probability; and so forth.

Next, take the Fourier transform of the entire location distribution, then take its squared magnitude.  For some reason, for a square function (corresponding to a single slit), this yields exactly the Fraunhofer diffraction distribution in momentum space.  We can then empirically find the relationship between momentum space and position space by noting that the spread of the distribution in momentum space is inversely proportional to the spread in position space, and their product is on the order of Planck’s constant.
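For a slit of width a, the calculation is short (ignoring overall normalization): the Fourier transform of the corresponding square function is

Ψ(k) ∝ ∫ from −a/2 to a/2 of e^(−ikx) dx = (2/k)·sin(ka/2) = a·sinc(ka/2),   so   |Ψ(k)|² ∝ sinc²(ka/2),

which has exactly the Fraunhofer shape once k is identified with the transverse wavenumber (i.e., with transverse momentum via p = ℏk).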

By the way, I don’t yet understand, physically, why the squared magnitude of the Fourier transform of a square function yields the sinc²(αx) distribution so typical of Fraunhofer diffraction, although I suspect it generalizes by starting from a “perfect” localization down to the Planck length (and the resulting complete lack of knowledge one could have about momentum at that scale).  In any event, not only do I find this fact amazing, but I frankly wasn’t convinced that p = ℏk was the same momentum as p = mv of a massive object until I noticed that the distributions of particles passing through a slit correspond to the squared magnitude of the Fourier transform of that slit!

Finally, assume each particle passing through the localized region can take on transverse momenta according to this distribution, and then integrate this value over the entire localized region.  This, I believe, may yield the actual distribution of detected particles.

To test my results, I used Mathematica to simulate the situations numerically.  In each case I divided the localization region into lots of smaller regions.  In one case (called “Adding Fields”), I added the fields of all the regions before calculating intensities/probabilities; in the other case (called “Adding Probabilities”), I calculated intensities/probabilities first and then added the contributions from each region.  I made a few assumptions:
*  The incoming particles had momentum such that the central diffraction envelope spreads at an angle of 1°.
*  The incoming particles were assumed to come from a point source with effectively no spread in momentum (which, I think, is another way of saying they are assumed to be monochromatic and spatially coherent).

For diffraction, I divided the single slit into n regions.  In the Adding Fields simulation, I calculated the Fourier transform of each region to find its field, added the fields of all the regions, and then plotted the magnitude of this sum for various parameters.  In the Adding Probabilities simulation, I calculated the Fourier transform of the entire slit, assumed that each region produces an intensity based on the fields in this total Fourier transform, and then plotted the sum of these intensities for various parameters.  Here is a typical example, in which the slit is divided into 20 regions and the screen is at a distance of 50 times the slit width:

Adding Fields:

Adding Probabilities:

A distance of 50 times the slit width is very much in the near field, where we would expect the distribution to be relatively flat (corresponding to the width of the slit), with edge effects that reflect the 1° spread.  Only the Adding Probabilities distribution satisfies these expectations.  The situation is worse when the slit is divided into 100 regions:

Adding Fields:

Adding Probabilities:

In the far field, such as where the screen is 10,000 times the slit width, both simulations converge to the expected Fraunhofer diffraction distribution:


To simulate double-slit interference in the Adding Fields simulation, I simply added another slit of equal width, some distance away and also divided into n regions, and continued the analysis by first adding the fields of each region and then finding the magnitude of their sum.  In the Adding Probabilities simulation, I calculated the Fourier transform of both slits together (i.e., the entire localization space), assumed that each region produces an intensity based on the fields in this total Fourier transform, and then plotted the sum of these intensities for various parameters.  Here is a typical example, in which the slits are each divided into 100 regions, the slit separation is 10 times the slit width, and the screen is located at a distance of 10 times the slit width.  The plots are also shown zoomed in to the left peak:

Adding Fields:


Adding Probabilities:


Either of these distributions might fit experimental data; however, the Adding Probabilities distributions are more plausible.  In the far field, starting at around a million times the slit width, both simulations converge to the expected interference pattern:


So what’s the answer?  Is the Adding Probabilities method wrong? 

For the life of me, I CANNOT FIND THE ANSWER.  I have read dozens of papers and scoured the internet, and basically every source says that you add the fields first and then find the probabilities, rather than doing a Fourier transform on the entire localization space and assuming that each localized particle takes on the resulting momentum distribution.  That, or I'm just not understanding what I'm reading.  This method is also pretty simple, so I seriously doubt I’ve discovered something new... which means I must have made a mistake somewhere.  There are certainly references (such as this) that say that you add probabilities when the source particles are incoherent, but my analysis seems to apply to any source, including a laser.
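In case it helps anyone spot where I've gone wrong, here is a minimal Python/NumPy sketch of the two methods as I've described them above.  This is not my actual Mathematica code; the geometry is simplified and every parameter value (wavelength, slit width, region counts, separations) is an illustrative placeholder rather than the exact value used in the plots above.

import numpy as np

# Minimal sketch of the "Adding Fields" vs. "Adding Probabilities" comparison.
wavelength = 1.0
slit_width = 100.0                  # wavelength/slit_width ~ 0.01 -> central envelope on the order of 1 degree
L          = 50 * slit_width        # slit-to-screen distance (near field here)
n          = 20                     # number of sub-regions per slit
k          = 2 * np.pi / wavelength

regions = np.linspace(-slit_width / 2, slit_width / 2, n)      # sub-region centers
screen  = np.linspace(-2 * slit_width, 2 * slit_width, 2001)   # screen positions

# --- Single slit ---
# "Adding Fields": each sub-region emits a wavelet (1/r amplitude falloff, an
# illustrative choice); sum the complex fields, then square the magnitude.
paths    = np.sqrt(L**2 + (screen[None, :] - regions[:, None])**2)
I_fields = np.abs((np.exp(1j * k * paths) / paths).sum(axis=0))**2

# "Adding Probabilities": give every sub-region the transverse-momentum
# distribution of the WHOLE slit, |FT(rect)|^2 = sinc^2, evaluated at the
# angle from that sub-region to each screen point, then add the intensities.
def whole_slit_sinc2(theta):
    arg = 0.5 * k * slit_width * np.sin(theta)
    return np.sinc(arg / np.pi)**2          # np.sinc(x) = sin(pi*x)/(pi*x)

angles  = np.arctan2(screen[None, :] - regions[:, None], L)
I_probs = whole_slit_sinc2(angles).sum(axis=0)

# --- Double slit ---
# Duplicate the sub-regions at +/- d/2.  The Fourier transform of the whole
# two-slit aperture is the single-slit envelope times an interference factor:
# |FT|^2 proportional to sinc^2(k*a*sin(theta)/2) * cos^2(k*d*sin(theta)/2).
d         = 10 * slit_width                                    # center-to-center slit separation
L2        = 10 * slit_width                                    # nearer screen, as in the example above
screen2   = np.linspace(-1.5 * d, 1.5 * d, 4001)
regions2  = np.concatenate([regions - d / 2, regions + d / 2])

paths2    = np.sqrt(L2**2 + (screen2[None, :] - regions2[:, None])**2)
I_fields2 = np.abs((np.exp(1j * k * paths2) / paths2).sum(axis=0))**2    # Adding Fields

angles2   = np.arctan2(screen2[None, :] - regions2[:, None], L2)
I_probs2  = (whole_slit_sinc2(angles2)
             * np.cos(0.5 * k * d * np.sin(angles2))**2).sum(axis=0)     # Adding Probabilities

# Normalize each curve and plot it against `screen` (or `screen2`), e.g. with matplotlib.

The only difference between the two branches is where the squaring happens: Adding Fields sums complex amplitudes and squares at the end, while Adding Probabilities squares the whole-aperture Fourier transform first and then sums the per-region intensities.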

PLEASE HELP!!


Is It Possible to Copy the Brain?

The science fiction plot involving copying brains or uploading minds onto computers or fighting conscious AI or teleportation yada yada yada is everywhere.  Black Mirror wouldn't even exist without these fascinating ideas.

Every one of these plots depends on the assumed ability to copy brains or consciousness, or on the assumption that consciousness is algorithmic, like software running on a computer.  These are very related assumptions: all algorithms can be copied and executed on any general-purpose computer, so if consciousness is algorithmic, then it should be possible to copy conscious states and/or duplicate brains.

Let me be blunt: every science nerd on the planet (including me) has, at some point, wondered about and been intrigued by the possibility and implications of "brain copying."  (Although really I mean the more general notion that one's consciousness can be copied, whether by digitizing consciousness, physically copying the brain, whatever.) 

But here's something weird.  VERY few scientists have actually questioned the assumptions that conscious states can be copied or that consciousness is fundamentally computational.  For instance, if you Google the exact phrase "impossible to copy the brain," a total of ZERO results are found, but if you don't question the possibility of brain copying, the exact phrase "copy the brain" yields over a MILLION results.  Does it seem strange that, despite our fascination with AI, teleportation, mind uploading, and so forth, this particular post might be the very first in the entire history of the Internet to state, in these words, that it might be impossible to copy the brain?  Really?!  No one has ever said that phrase on the Internet before?  (BTW there are lots of other such phrases, printed at the bottom of this post.)

These assumptions are so ingrained within the scientific community that most young physicists, neurobiologists, engineers, etc., don't even realize that they are making them, and those who do are unlikely to question them.  Famed philosopher John Searle once pointed out that "to deny that the brain is computational is to risk losing your membership in the scientific community."  Entire industries are even being launched (mind uploading, digital immortality, etc.) on the underlying supposition that it's just a matter of time before we'll be able to digitize the brain, or create a conscious computer, or create a perfect duplicate of the brain.  Are these assumptions valid?

Sir Roger Penrose (Oxford) argues that consciousness cannot be simulated on a computer because, he claims, humans are able to discover truths that cannot be discovered by any algorithm running on a Turing machine.  However, despite his eminence in the fields of mathematics and physics, he is still criticized by the "mainstream" scientific community for this suggestion.

Scott Aaronson (U. Texas @ Austin) asks in his paper, "Does quantum mechanics ... put interesting limits on an external agent's ability to scan, copy, and predict human brains ... ?"  He says he regards this "as an unsolved scientific question, and a big one," and then gives one possible explanation of how physics might explain that conscious brains can't be copied (if in fact they can't).  In a blog post, he points to an empirical fact "about the brain that currently separates it from any existing computer program.  Namely, we know how to copy a computer program ... how to rerun it ... how to transfer it from one substrate to another.  With the brain, we don't know how to do any of those things."  In both works, he is careful not to offend the majority, with self-deprecating comments about expecting to be "roasted alive" for his dissension from "the consensus of most of my friends and colleagues."

There are a few other scientists who cautiously suggest that brains can't be copied or that brains aren't computers (one example here).  I myself have written a paper (preprint here, or related YouTube videos here, here, and here) that argues that consciousness is not algorithmic and can't be copied, in part because consciousness correlates to quantum measurement events that occur outside the body.  But, let's face it: for the most part, very few scientists question these assumptions.

I assert that the following assumptions pervade academia and popular science, and that they are unfounded and unsupported by empirical evidence:
a) That consciousness is computational/algorithmic;
b) That consciousness can be duplicated; and
c) That brains can be copied.

Here is my question: What empirical evidence do we currently have for making any of these assumptions?  I think the answer is "none," but I could be wrong. 

If you are going to answer this question, please consider these guidelines:
*  Please provide actual empirical evidence to support your point.  For example, if you think that brains can be copied, then linking to a bunch of papers in which neurobiologists have sliced rat brains (or whatever) is inadequate, because that says nothing to support the assumption that brains can be copied over the assumption that brains cannot be copied.   And extrapolating into the future ("If we can slice rat brains today, then in 50-100 years we'll be able to digitize them and copy them...") is not evidence for your point.  On that note...
*  Please do not talk about what is expected, or what "should" happen, or what you think is possible in principle.  (The phrase "in principle" should be banned from the physicist's lexicon.)  Please focus on what is actually known today based on scientific inquiry and discovery. 
*  Please do not bully with hazy notions of "consensus."  Scientific truth does not equal consensus.  I don't care (and nor should you) what a "majority" of scientists believe if those beliefs are not founded on scientific data and evidence.  Further, considering that anyone who openly questions these assumptions has to apologetically tiptoe on eggshells, for fear of offending the majority, it's difficult or impossible to know whether there really is any consensus on this issue.
*  Please be aware of your own assumptions.  For example, if you reply that "consciousness must be capable of being simulated because it is part of the universe, which is itself capable of being simulated," note that the latter statement is itself an unproven assumption.



Additional comments:
The following search terms in Google yield either zero or just a few results, which underscores how pervasive the assumptions about brains and consciousness are:

“impossible to copy conscious”
“impossible to copy consciousness”
“not possible to copy consciousness”
“possible to copy consciousness”
“possible to copy conscious”
"cannot copy conscious"
"cannot copy consciousness"
"impossible to duplicate conscious"
"possible to duplicate conscious"
"possible to duplicate consciousness"
"impossible to duplicate consciousness" 
"cannot duplicate consciousness" 
"cannot duplicate conscious"
 “not possible to copy brain”
“impossible to copy the brain”
“not possible to copy the brain”
“cannot copy the brain”
"cannot duplicate the brain"
"impossible to duplicate the brain"
"possible to duplicate the brain" 
 “consciousness cannot be algorithmic” 

The Physics of Free Will

I know the topic of free will has been debated endlessly for millennia, and everyone has their own opinion.  However, I’ve read and searched endlessly, and I can’t find anyone who addresses or answers the following problem.

Let’s say that I perceive that I have the choice to press button A or B.  There are only three possibilities:
a) There is no actual branching event.  The perception is an illusion.  The button I press is entirely predetermined.  (That doesn’t imply that the universe as a whole is deterministic, but that indeterminacy is irrelevant to my perception of a free choice.)
b) There is a branching event, but it is quantum mechanical in nature.  In other words, the button I press actually depends on some QM event (whether you call it measurement, reduction, or collapse), so while the outcome is not predetermined, it is random.  The perception that a branching event was about to happen was correct, but the perception that I can control it is an illusion.
c) There is a branching event, and my free will caused the outcome.

In case a), my “choice” is simply a prediction about the future.  But there are several problems with this:
1) Why would I ever perceive as possible an event that is actually impossible?  (If pressing button A was predetermined, then pressing B is an impossible event.)
2) What is the advantage of making a prediction if awareness of the predicted outcome will not affect anything that will happen in the future?  In other words, if I can’t DO anything to change anything (because I don’t have free will), what’s the point in predicting? 
3) What is the advantage of perceiving free will when I am actually making a prediction?  When I drop a ball, I predict it will accelerate downward toward the Earth.  But imagine if I (falsely) believed I had free will over that ball... “OK, am I going to drop the ball UP or DOWN?  Hmmm... today I’ll decide to drop it DOWN.”  What would be the point of that false perception? 

The case of b) isn’t much better, because my “choice” is, again, just a prediction about the future (possibly coupled with measurement of a random QM event).  The same problems arise.

Note that my perception of free will is limited to my body, and not even my entire body (for example, I don’t think I can consciously control my digestion process).  In fact, I only perceive “free will” with regard to a few aspects of my body, such as motions of my hands and fingers.  But what is true is that I have never EVER once observed the experience of NOT having free will over those parts.  For example, I have never decided to raise my right hand, but then my left hand rises instead.  I never raise my hand and then say, “I didn’t do that!” 

But that COULD have been the case.  I could have been born into a world in which I just observed things happening... where my body was no different from a dropping ball or a planet orbiting a star... where it's just an object that moves on its own and I experience it.  In other words, why am I not just experiencing the world through a body that moves on its own, as if I were just watching an immersive (five-sense) movie?  It's not like we need to believe in free will.  For example, we are perfectly fine watching movies or riding roller coasters, knowing full well that we can't control them.  Why couldn't we just be passing through the world moment to moment, just experiencing the ride, without any perception that we have free choices?  In other words, if a) or b) above is true, we need to explain WHY I perceive the freedom to press button A or B, but also why my choices are always 100% consistent with the outcome.

That’s a real problem.  Because now we have to explain why the universe would conspire to:
* Fool me into believing that I have a choice when I don’t; AND
* Fool me into believing that the outcome is always consistent with what I (mistakenly) thought I chose!

Why would the universe fool us like that? 

As an aside, please don’t answer with “compatibilism,” which is the philosopher’s way of avoiding the question of free will.  You can look it up, but I regard it as a non-answer.  Even famed philosopher John Searle agrees that philosophers haven’t made any progress on the free will question in the past hundred years.

Why Mind Uploading and Brain Copying Violate the Physics of Consciousness

I just finished creating a video, now posted on YouTube, that attempts to prove that the laws of physics, particularly Special Relativity and Quantum Mechanics, prohibit the copying or repeating of conscious states.  This time, I introduce the Unique History Theorem, which essentially states that every conscious state uniquely determines its history from a previous conscious state.  If true, then the potential implications are significant: consciousness is not algorithmic; computers (including any artificial intelligence) will never become conscious; mind uploading, as well as digital immortality, will never be possible; and teleportation and any form of brain copying or digitization will remain science fiction.

The video, which lasts about an hour and a half, is here:




However, if you want a brief SUMMARY of the two main videos, a 17-minute video is posted on YouTube here:




Please keep in mind that the above summary video is a great introduction to the proofs and arguments in the main videos, but that the arguments themselves are truncated.  

Can Physics Answer the Hardest Questions of the Universe?

At some point during an intro to philosophy class in college, I was first exposed to the classic "Brain in a vat" thought experiment: how do I know I'm not just a brain in a vat of goo with a bunch of wires and probes poking out, being measured and controlled by some mad scientist?  That was a few years before The Matrix came out, which asked essentially the same question.

So -- are you a brain in a vat?  And how could you know?

This is just the tip of the iceberg; once we start down this path, we come face-to-face with more difficult questions.  "What creates consciousness?"  "Can consciousness be simulated?"  "If I copy my brain, will it create another me, and what would that feel like?"  And once we've fallen down the rabbit hole, we see that there are a thousand other seemingly unanswerable questions... questions about free will, the arrow of time, the nature of reality, and so forth.

I think physics can help answer these questions, and in fact I think I have answered a couple of them to some degree.  For example, I don't know (yet) whether I'm in a simulation, but I think I do know whether or not I am a simulation.  Here is my first YouTube talk in which I explain why consciousness cannot be algorithmic, conscious states cannot be copied or repeated, and computers will never be conscious:




If you prefer a written explanation, here is a preprint of my article, "Refuting Strong AI: Why Consciousness Cannot Be Algorithmic."

I am also working on another proof of the same conclusions from a different angle.  Here is a preprint of my article, "Killing Science Fiction: Why Conscious States Cannot Be Copied or Repeated."  I'll post a link to a YouTube talk on this paper as soon as it's available.