
Wednesday, May 6, 2020

The Effect of Gravity in Preventing Macroscopic Quantum Superpositions

In a recent post, I posited that a quantum superposition just exists if and only if the facts of the universe are consistent with the superposition; i.e., a system described at time t by state |A> + |B> just means that there is no fact about whether the system is in state |A> or |B> at time t.  In other words, had the system been measured at time t in a basis that includes elements |A> and |B>, then either outcome A or outcome B would have been measured (with probabilities according to the Born rule), but since it was not measured, information regarding whether the system was in state |A> or |B> did not exist at time t and no future measurement/observation/fact can contradict that fact.  The production of facts (or the happening of events) over time creates new information that reduces future possibilities.

Thinking about quantum mechanics in this way has helped me immensely in understanding and solving many of the various philosophical problems in QM.  To get feedback on it, I submitted a version of the explanation to an essay contest of the Foundational Questions Institute, entitled “Interpreting Quantum Mechanics and Predictability in Terms of Facts About the Universe,” and a preprint is also available here.

However, apparently this point of view is more revolutionary than I had originally thought.  The typical way to think about or describe a quantum superposition given by state |A> + |B> is that it “is kind of in state |A> and kind of in state |B>” or that it “is in both state |A> and state |B> simultaneously” or something like that.  But these descriptions are inaccurate, sloppy, and just plain wrong.

For example, it is typical in QM to work with expectation values, such as the expectation of position <X>, which is found by taking a weighted average of an object’s position distribution (i.e., weighted by probability, which is the square of amplitude).  The problem arises when this is treated as something real as opposed to something simply mathematically useful for making predictions.  For instance, if a particle whose position expectation is <X> is actually measured/detected at a location X0 that is somewhere far away from <X>, then do we say there’s been a violation of conservation of energy if X0 and <X> are at different potentials?  Likewise if an object having momentum expectation <P> is measured having momentum P0, but <P>^2/2m ≠ P0^2/2m.
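
To be concrete, the expectation value here is just the probability-weighted average over the possibilities: for a discretized position distribution, <X> = Σ x·|ψ(x)|^2, with the corresponding integral in the continuous case.  It is a property of the whole distribution, not a location the particle actually occupies, so detecting the particle at some X0 far from <X> contradicts nothing about where it “was.”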

The problem is that there was nothing real about the particle’s location when we calculated <X>.  If we were right that the particle was in a location superposition at time t, then there is no fact, nor will there ever be, about the particle’s location at time t, so there can’t be a violation of conservation of energy by detecting the particle at X0 at a later time if there is no fact about where the particle came from.

For instance, when Roger Penrose, whom I greatly admire, tried to analyze the effect of gravity on quantum state reduction, he postulated that the difference in gravitational self-energy (EΔ) between the spacetime geometries of a quantum superposition “in which one lump [of mass] is in two spatially displaced locations” produces an instability that results in a decay into one or the other of the spacetime eigenstates.  He even goes so far as to give a decay time T ≈ ℏ/EΔ, reminiscent of the quantum uncertainty principle.  The problem, as I see it, is that he treated the “two” lumps (in the superposition) as real, so real in fact that he requires taking into account “the gravitational interaction effects between the pair of lumps.”  What pair of lumps?!  There is only one lump!
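
For a sense of scale, here is a crude back-of-the-envelope version of Penrose’s estimate.  I’m approximating the gravitational self-energy difference for a displacement comparable to the lump’s own size as EΔ ~ Gm^2/R, which gets the order of magnitude but ignores Penrose’s geometric factors; the masses and sizes below are illustrative choices of mine, not his.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J*s

def penrose_decay_time(m, R):
    """Crude Penrose decay time T ~ hbar / E_delta, with E_delta ~ G*m^2/R
    (self-energy difference for a displacement of order the lump's own size)."""
    E_delta = G * m**2 / R
    return hbar / E_delta

print(penrose_decay_time(1e-3, 5e-3))    # ~1 g lump, ~5 mm size: about 1e-20 s
print(penrose_decay_time(4e-15, 1e-6))   # ~1-micron water droplet: about 0.1 s

Even with this crude approximation, a gram-scale lump displaced by its own size would “decay” essentially instantly on Penrose’s proposal, while only very small masses give humanly long timescales.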

But this (mis)understanding of QM seems to permeate the field.  So far, I have been unable to find my characterization of QM in the academic literature.  It certainly may be out there, but I feel comfortable in saying that nearly all characterizations of a quantum superposition treat it as if the terms represent something real.  For instance, in the classic Schrodinger’s Cat thought experiment (which is essentially the same as the Wigner’s Friend thought experiment), we are given a quantum state of the cat |Ψ> = |alive> + |dead>, which is a linear superposition of a state in which the cat is alive and one in which it is dead.  QM tells us that the likelihood of finding the cat in one state or another depends on the square of the amplitudes, which I’ve left out for simplicity.

So here’s the classic conundrum: before we look, is the cat dead or alive?  The answer: there is no fact about it being dead or alive until evidence exists (in the form of a correlation somewhere in the universe) that it is one or the other.  Until that information exists, there simply is no fact.  The real difficulty in this thought experiment, which almost no one points out, is the extreme difficulty (and likely impossibility) of creating state |Ψ> = |alive> + |dead> in the first place.  To do so requires that there is no evidence anywhere (beyond the cat itself, of course, which we assume is thermally isolated) of the cat’s being dead or alive.  Even a single photon bouncing off the cat – and keep in mind that the universe is inundated with radiation, such as CMB – would almost certainly provide evidence correlated to its being either dead or alive.

Getting back to Penrose’s paper, in making his argument about a superposition of spacetimes, he points out that “these two space-time geometries differ significantly from each other.”  But my question is this: how could such a superposition arise in the first place?  If I am right that a superposition exists if and only if the facts of the universe are consistent with the superposition, then what would it mean if there was a “significant difference” between two (or more) eigenstates?  If we say, “There would have been a significant difference had that difference been measured but it wasn’t actually measured,” then that does not justify Penrose’s treatment of the spacetime geometries as being actually significantly different.  But to say “There is a significant difference” is wrong because: by whose standards?  By what measure?  After all, if there is a measure (in the form of evidence anywhere in the universe) by which the spacetime geometries are different, then there could not have been a superposition!

The thing is – gravity may be weak (e.g., the electromagnetic attraction between a proton and electron in a hydrogen atom is something like 10^40 times greater than their gravitational attraction), but it is ubiquitous in the universe and always attractive.  So my question is this: wouldn’t gravity effectively prevent any macroscopic superposition?  To use Penrose’s example, imagine a macroscopic lump of matter near Earth that we are somehow able to perfectly isolate from the universe (already a ridiculous assumption) to allow it to enter a superposition of macroscopically distinct positions.  A lump creates a gravitational field that is tiny but – as far as we know – potentially affects everything in the universe.  If the gravitational field of the lump located at position A affects even a single particle differently than the field of the lump located at B, then the lump at one of these two positions will be correlated with the rest of the universe and a quantum superposition of the lump at position A and position B cannot exist.  Note that the speed of light is irrelevant here; if the lump’s gravity takes 20 years to affect the trajectory of a particle 20 light-years away, that correlation is enough to ensure that there could not have been a superposition at the time.  (This argument may be related to the production of gravitational waves, which I know little about.)
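
A quick sanity check on that ratio, using standard constants; since both forces fall off as 1/r^2, the result does not depend on the separation:

k_e = 8.988e9     # Coulomb constant, N*m^2/C^2
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
e = 1.602e-19     # elementary charge, C
m_p = 1.673e-27   # proton mass, kg
m_e = 9.109e-31   # electron mass, kg

# Electrostatic vs. gravitational attraction between a proton and an electron;
# both scale as 1/r^2, so the ratio is independent of their separation.
print((k_e * e**2) / (G * m_p * m_e))   # roughly 2e39, i.e. "something like 10^40"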

Anyway, my point is that when Penrose discusses a superposition of spacetime geometries that “differ significantly from each other,” then wouldn’t significant differences correlate to measurable differences in effects, events, and/or interactions elsewhere (i.e., outside the isolated system)?  If so, such a superposition could never exist.  Which is to say, as soon as there is a fact in the universe that differentiates the two possibilities, they are no longer both possibilities and there is no superposition. 

I haven’t done the calculation yet, but I suspect that gravity would destroy a macroscopic superposition very quickly.  Interestingly, a group of researchers showed that relativistic time dilation at different heights on the Earth’s surface is enough to decohere a macroscopic quantum superposition pretty quickly: an isolated gram-scale object in a superposition of locations vertically separated near Earth’s surface by 1 mm would decohere in around a microsecond.  This implies that even a “perfectly isolated” Schrodinger’s Cat experiment could never even get off the ground if located anywhere near a planet; however, it says little about performing such an experiment in deep space where spacetime is essentially flat.  But even though the word “gravitational” appeared in its title, the article was really about time dilation.  So far, I haven’t found an article that deals with how the gravitational effects of a macroscopic object in different locations would correlate to measurable differences elsewhere in the universe, and how this would prevent macroscopic quantum superpositions.  If it were the case that an isolated system described by |dead> caused some correlated event different from an isolated system described by |alive>, then the superposition |Ψ> = |alive> + |dead> could not exist.
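
Here is a rough numerical check of that microsecond figure.  The scaling below is my reconstruction of the published time-dilation decoherence result (decoherence time falling as temperature, constituent-particle number, and the height separation grow); treat the prefactor as approximate.

import math

# Rough reconstruction of the gravitational time-dilation decoherence scaling:
# tau ~ sqrt(2) * hbar * c^2 / (k_B * T * g * dx * sqrt(N))
# (my reconstruction, not taken from the post; prefactor approximate)
hbar = 1.055e-34   # J*s
c = 3.0e8          # m/s
k_B = 1.381e-23    # J/K
g = 9.81           # m/s^2

def decoherence_time(T, dx, N):
    return math.sqrt(2) * hbar * c**2 / (k_B * T * g * dx * math.sqrt(N))

# Gram-scale object (~1e23 constituent particles) at room temperature,
# superposed over a 1 mm vertical separation near Earth's surface:
print(decoherence_time(T=300, dx=1e-3, N=1e23))   # ~1e-6 s, i.e. about a microsecond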

Of course, the question is not really whether gravitational effects are relevant to the existence of quantum superpositions.  Of course they are.  The sun could not exist in a superposition of a state in which it is located at the center of our solar system and a state in which it is located a light-year away, as the gravitational differences between such states would be heavily correlated to measurable differences in other places in the universe.  (Obviously, other differences besides gravitational differences would decohere any potential superposition long before this point.)  The question is at what scale are gravitational effects relevant to the existence of quantum superpositions.  That may place an upper limit to the size of quantum superpositions and the applicability of QM.  (This whole notion that there is no limit, in principle, to the size of objects in interference experiments is driving me crazy, but I’ll save that rant for another time.)  If the answer happens to be such as to prevent any kind of Schrodinger’s Cat or Wigner’s Friend experiment anywhere in the universe, no matter how isolated, then we can finally stop being confused by (and hearing about) these thought experiments.

Before I spend time doing these calculations or trying to reinvent the wheel, it would be great to know if it’s already been done.  Do you know of any such calculation, article, or research? 

Tuesday, February 25, 2020

Interpreting Quantum Mechanics in Terms of Facts About the Universe

Like so many, I am trying to understand quantum mechanics – or, at least, to explain it in a way that makes sense to me.

I’ve taken graduate-level quantum mechanics, or a course that intimately depends on quantum mechanics, at four universities, including MIT and Princeton.  I’ve read countless books and journal articles on quantum mechanics and its various interpretations.  But I’ve never seen quantum mechanics characterized or explained the way I am about to explain it, so I sincerely hope that: a) if it is incorrect, someone can (kindly) point out the flaw; b) if it is correct but is equivalent to another interpretation (e.g., Consistent/Decoherent Histories), someone can expound on the equivalence; or c) if it is correct and novel, that it helps other people to understand quantum mechanics.  If c), then I’d like to submit this to a journal on physics education.

My Interpretation/Understanding

I am attempting to characterize, interpret, and understand quantum mechanics using the following set of propositions, and then more deeply explain this interpretation using a specific example.

The state of the universe is a particular chronological set of facts/events, and the relationships between objects in the universe are the information storing/instantiating those facts.  Those facts must be consistent throughout the entire universe.

A fact occurs exactly when the number (or density) of future possibilities decreases.  Every fact limits future facts and is limited by prior facts.  A fact does not necessarily require an “impact” or “interaction” as colloquially understood.[1]

A (quantum) superposition exists if and only if the facts of the universe are consistent with the superposition.  For example, in the case of the classic two-slit interference experiment with the particle passing the double slit at time T0, the particle is in a superposition of passing through both slits if and only if there is no fact about the particle’s location in one slit or another at time T0.  If even a single photon, correlated to the location of the particle in one slit or the other at time T0, scurries away at light speed, there is a fact about the location of the particle and it cannot be in a superposition at time T0.[2]  In the unlikely event that the experiment is set up so that that photon later gets uncorrelated such that no “which-path” information is ever available, then the particle is, amazingly, in superposition at time T0.  Such a “delayed-choice quantum eraser experiment” (See, e.g., Aspect et al., 1982) demonstrates that whether an event occurs seems to depend on the future permanence of a correlating fact.  In reality, the “window of opportunity” to prevent the decoherence of a superposition is extremely short, so we don’t generally need to wait long before we can officially declare the happening of an event.

Quantum uncertainty (e.g., in the form of the Heisenberg Uncertainty Principle) is simply one type of superposition, in which a spread of possible positions and a spread of possible momenta are related.  For instance, if a particle is tightly localized at time T0, then the facts of the universe at that time are consistent with a wide spread of possible momenta – i.e., a superposition of many momenta exists at T0. 

Explanation of this Interpretation

I’ll try to explain this interpretation with a specific example.  Imagine N objects ({O1, ..., ON}), which need not be microscopic “particles,” distributed in three-dimensional space discretized into M possibilities per dimension.  Assume also that velocity is discretized into M possibilities per dimension.  Each possible combination of location (X) and momentum (P) vectors for each and every object might be considered a single point in classical phase space, yielding a total of M^(6N) such points/possibilities.  A fact (or event) is anything that reduces the number of such possibilities, so one example of a fact is an impact between two objects.  Assume for simplicity that an impact between two objects is always repulsive and their masses are equal, so an impact just has the effect of swapping the objects’ velocities.  Assume also that an impact occurs only when two objects are at the same location at the same time; we will neglect fields.

Let us choose one set of possibilities at time T0, specifically the set in which O1 has a particular position X1 and three possible momenta P11, P12, P13, and O2 has a particular position X2 and three possible momenta P21, P22, P23, as shown in Fig. 1 below.  For the sake of demonstration, these values are chosen such that O1 with P11 will, at time T1, reach the same location in space as O2 with P21; also, O1 with P12 will, at time T2 (which may or may not be different from T1), reach the same location in space as O2 with P23; but every other combination always results in non-coinciding future locations.


Fig. 1.  Nine possibilities for two objects.


There are no restrictions on the possible locations and momenta of other objects, so for each of the nine combinations of O1 and O2, there are M^[6(N-2)] possibilities involving the remaining (N-2) objects. For simplicity, let’s ignore those other combinations and simply write the nine points in phase space as {X1, P11, X2, P21}, {X1, P11, X2, P22}, {X1, P11, X2, P23}, {X1, P12, X2, P21}, etc. 

We now add the following fact about the universe: by time T3 (which is after T1 and T2), O1 and O2 have interacted with each other but not with any other objects.  (That is, they reach the same location in space and then repel, thus swapping their momenta.)  Notice that this fact has the effect of reducing the number of possible combinations that can exist at T3.  Specifically, only the two possibilities, {X1, P11, X2, P21} and {X1, P12, X2, P23} as they existed at time T0, can now exist at T3.  Note that at time T3, the objects O1 and O2 in each of the two combinations have swapped momenta and are in different locations.  For clarity, let’s assume that possibilities {X1, P11, X2, P21} and {X1, P12, X2, P23} at time T0 evolve, respectively, to {X1’, P21, X2’, P11} and {X1’’, P23, X2’’, P12} at time T3.
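
A tiny simulation makes this bookkeeping concrete.  The sketch below is a one-dimensional stand-in for Fig. 1 with illustrative numbers of my own choosing (not the values in the figure): nine starting combinations for O1 and O2, an impact rule that swaps velocities when the two objects land on the same grid point at the same time step, and a filter keeping only the combinations consistent with the fact that O1 and O2 have interacted by T3.

# Toy one-dimensional version of the Fig. 1 example (illustrative numbers of my own).
# An "impact" occurs when the two objects occupy the same grid site at the same
# integer time step; equal masses, so an impact just swaps their velocities.
X1, X2 = 0, 6                  # starting positions of O1 and O2
P1s = [4, 1, 0]                # possible velocities of O1 (playing the roles of P11, P12, P13)
P2s = [2, 7, -1]               # possible velocities of O2 (playing the roles of P21, P22, P23)
T3 = 4                         # the fact: O1 and O2 have interacted by this time

def evolve(x1, v1, x2, v2, steps):
    """Step both objects forward; swap velocities (impact) when they coincide."""
    interacted = False
    for _ in range(steps):
        x1, x2 = x1 + v1, x2 + v2
        if x1 == x2:
            v1, v2 = v2, v1
            interacted = True
    return x1, v1, x2, v2, interacted

possibilities = [(p1, p2) for p1 in P1s for p2 in P2s]      # nine combinations at T0
survivors = []
for p1, p2 in possibilities:
    x1, v1, x2, v2, interacted = evolve(X1, p1, X2, p2, T3)
    if interacted:                       # keep only combinations consistent with the fact
        survivors.append(((p1, p2), (x1, v1, x2, v2)))

print(len(possibilities), "possibilities at T0 ->", len(survivors), "remaining at T3")
for (p1, p2), (x1, v1, x2, v2) in survivors:
    print(f"T0 velocities ({p1}, {p2}) -> T3: O1 at {x1} with {v1}, O2 at {x2} with {v2}")

With these numbers, exactly two of the nine combinations survive the fact, and in each survivor the objects have swapped velocities and sit at different locations, mirroring the reduction described above.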

This reduction in the number of combinations has two features.  First, there are broad categories of individual momenta that simply cannot exist: specifically, at time T3, O1 cannot have a position/momentum combination that traces it back to (or is correlated to) the combination {X1, P13} at time T0, just as O2 cannot be traced back or correlated to the combination {X2, P22} at T0, and no future measurement can contradict this.  (Note that I’m not asserting that an event after T0 retroactively eliminates possibilities at T0.  Rather, while at T0 there were nine possibilities, there are only two at T3.)  Second, while other broad categories of individual momenta may not be ruled out, there are now correlations between the possible momenta of the objects.  For example, if an evolution of O1 from state {X1’, P21} exists at some later time, then a corresponding evolution of O2 from state {X2’, P11} must also exist.  If a future fact rules out one, then it rules out both.  Similarly, if an evolution of O1 from state {X1’’, P23} exists at some later time, then a corresponding evolution of O2 from state {X2’’, P12} must also exist.  These two objects are now entangled, no matter the distance between them.

Let me further clarify.  For the moment, let’s only consider the nine original possible configurations of objects O1 and O2.  By time T3 the only remaining possibilities are: O1 having P21 AND O2 having P11; or O1 having P23 AND O2 having P12.  If at some later time (but before the objects have had a chance to interact with other objects), Alice measures the momentum of object O1 to be P21, it will necessarily be the case that the momentum of object O2, if measured by Bob, would be found to be P11.  Even if Alice and Bob are far apart, their measurements will be perfectly correlated.  Even if the measurement events are spacelike separated – i.e., there is no fact about which measurement happens first – object O1 having momentum P21 will correspond to object O2 having momentum P11 and not P12.  In other words, among the nine possibilities at time T0, the first fact (O1 interacts with O2) eliminates all but two, and the second fact (O1 has momentum P21) eliminates one.  Thus, these facts make future facts incompatible with all but one of those original nine possibilities, specifically {X1, P11, X2, P21} at T0.[3]

Notice that the reduction in possibilities – and the resulting correlations – have nothing to do with whether Alice or Bob knows about the correlations.  I think there’s been a lot of experimental research and discussion regarding how measurements on systems with known entanglements correlate to each other, as if entanglement were some rare, almost magical quantum configuration created only in expensive labs.  Instead, I think entanglement is ubiquitous.  If every (or almost every) impact between objects results in a new correlation between them, then isn’t every object entangled with every other?  The universe goes on creating new facts, reducing future possibilities, correlating the possibilities of one system with those of another, so that the possibilities for any one object depend, in some sense, on the possibilities of every other.  The notion of universal entanglement is far more important and useful, I think, than has been discussed in the scientific literature.

Of course, this example is insanely oversimplified.  My goal is simply to show how the quantity/density of possible combinations in phase space gets reduced by facts.  For instance, as discussed above, the fact that O1 interacts with O2 implies that O1 cannot have a state after T3 that traces it back or correlates it to the state {X1, P13} at time T0.  However, this does NOT imply that O1 can’t have momentum P13 after T3.  The analysis considered only a tiny (TINY!) subset of possibilities at time T0 in which O1 was located at X1 and O2 was located at X2.  To determine whether O1 might have momentum P13 after T3, we have to consider every other possible combination in which O1 is not at X1 at T0.  Looking back at Fig. 1, we can obviously move O1 to some other location so that, with momentum P13, it does impact O2.

Now that I’ve explained the example, the primary questions I want to consider are the effect of facts on the universe in reducing the entire phase space of possibilities, and whether any interesting or large-scale pattern or structure emerges.  For example, if it turned out, after several events, that O1 having momentum P13 does not appear in any of the possible combinations at T3, then we can state with certainty that O1 does not have momentum P13 at T3.  And if in every possible combination after T3 in which O1 has momentum P21 we find that O2 has momentum P11, then we can say with certainty that if Alice measures the momentum of O1 as P21 and Bob, who is several light-years away from Alice, measures the momentum of O2, he will measure P11.[4]

I think the most interesting question is: as the phase space of possibilities gets reduced in time by facts, does any structure or pattern emerge in the distributions of object locations and/or momenta?  For example, after lots of events involving objects O4 and O7, do we find, among the remaining possibilities in phase space, that the locations of O7 relative to O4 start to converge?  If so, does the spread of the distribution (e.g., standard deviation) get tighter with the addition of subsequent facts?

Computer Simulation and Questions

I tried programming a simulation in Mathematica to answer the above questions, but quickly realized that even the simplest possible analysis (three objects in one dimension discretized into 10 possibilities, repeating universe, no gravity) took about 10 seconds to analyze the one million points of phase space.  Imagine trying to do a more reasonable analysis of, say, 100 objects in two-dimensional space discretized to 1000 places per dimension; we’re now at 1000^400 possibilities, which significantly exceeds the computational power of the entire universe, estimated at 10^122.  (See, e.g., Davies, 2007.)

There are a variety of mathematical tools and shortcuts that could help with the analysis.  For example, I suspect that an interesting analysis could be done with a Monte Carlo simulation, essentially by just randomly selecting initial states.  I could start with a set of chronological facts/events (e.g., O1 impacts O5, then O3 impacts O9, then O5 impacts O6, etc.) and then run a Monte Carlo simulation to find a statistically useful set of initial states that satisfy the facts.  Then, I’d like to see what kind of patterns and/or localizations, if any, emerge.  I suspect that after enough events, some objects would start to appear fixed relative to some other objects, and once all objects are entangled/correlated, they would all begin to show a (potentially fuzzy) localization relative to each other.  Further, I suspect that if we were to look at the fuzziness of, say, object #74, we would find a particular spread in its location and momentum, but if we were to look only at the distribution of momenta of object #74 in particular locations, we would find a larger spread.  If so, then such an analysis might numerically demonstrate quantum uncertainty.  Of course, I could be wrong about all this, but won’t know until I can do some sort of simulation or analysis.
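
As a very rough sketch of what such an analysis might look like (all specifics below, such as the grid size, the number of objects, and the prescribed facts, are placeholder assumptions of mine), the idea is to draw random initial states, evolve them under the simple impact rule, keep only the draws consistent with a prescribed chronological sequence of facts, and then look at the distributions among the survivors:

import random

# Monte Carlo sketch (illustrative parameters only): sample initial states of a few
# objects on a 1D periodic grid, keep only samples consistent with a prescribed fact
# ("O0 impacts O1, and only later does O1 impact O2"), and inspect what remains.
GRID = 20                      # positions 0..19, wrapping around (a "repeating universe")
VELS = list(range(-3, 4))
N_OBJ = 4
STEPS = 15
SAMPLES = 20000

def run(positions, velocities):
    """Return the chronological list of impacts plus the final state."""
    pos, vel = list(positions), list(velocities)
    impacts = []
    for _ in range(STEPS):
        pos = [(p + v) % GRID for p, v in zip(pos, vel)]
        for i in range(N_OBJ):
            for j in range(i + 1, N_OBJ):
                if pos[i] == pos[j]:
                    vel[i], vel[j] = vel[j], vel[i]   # equal-mass repulsion: swap velocities
                    impacts.append((i, j))
    return impacts, pos, vel

survivors = []
for _ in range(SAMPLES):
    p0 = [random.randrange(GRID) for _ in range(N_OBJ)]
    v0 = [random.choice(VELS) for _ in range(N_OBJ)]
    impacts, pos, vel = run(p0, v0)
    # The prescribed facts: O0 impacts O1, and only afterward does O1 impact O2.
    if (0, 1) in impacts and (1, 2) in impacts and impacts.index((0, 1)) < impacts.index((1, 2)):
        survivors.append((p0, v0, pos, vel))

print(len(survivors), "of", SAMPLES, "sampled initial states are consistent with the facts")
rel = [(pos[2] - pos[1]) % GRID for _, _, pos, _ in survivors]
print("O2's position relative to O1 among survivors:",
      {d: rel.count(d) for d in sorted(set(rel))})

Whether the surviving distributions actually tighten as more facts are imposed is exactly the open question; this sketch only sets up the machinery to ask it.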

Another question that might be answered by such an analysis is whether the times of events must be inputted (e.g., O1 impacts O5 at T=35 units) or whether time itself is emergent.  I suspect the latter.  In the previous example, O1 having P21 at T3 is correlated with O2 having P11, but it is also correlated with an impact at T1, while O1 having P23 at T3 is correlated with an impact with O2 at T2.  Thus, the later fact about the universe causes the time of the earlier impact to emerge.  I suspect that when the phase space specifies velocity, event times are emergent; likewise, if the set of possibilities includes only locations but event times are specified, velocities would emerge. 

Another issue that might be addressed by such an analysis is the relationship of objects to the underlying grid.  Objects shouldn’t leave the grid, so will objects wrap around or should we include a gravitational force sufficient to prevent their reaching the edge?  And suddenly an analysis of quantum mechanics necessitates general relativity and the curvature of space!

Finally, I don’t have the math background to figure out how to do the analysis with continuous initial states (versus discrete states).  I suspect that there is no fundamental discretization of spacetime, but rather the “resolution” of the universe increases with more facts/events.  That is, there is no fundamental limit to the precision of a measurement, except to the extent that facts just don’t (yet) exist to answer questions that probe beyond a certain scale.  One scale, quantum uncertainty, involves a tradeoff between an object’s location precision and momentum precision, while another, the Planck length, implies an energy sufficient to create a black hole if a distance smaller than the Planck length is probed.  Both scales are related to Planck’s constant. 
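
(For reference, both scales are standard: the uncertainty tradeoff is ΔX·ΔP ≥ ℏ/2, and the Planck length is L_P = (ℏG/c^3)^(1/2) ≈ 1.6 × 10^-35 m, the distance at which a photon energetic enough to resolve it, with E ≈ ℏc/L_P, would be confined within roughly its own Schwarzschild radius.)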

But if every interaction between objects creates a new fact that slightly increases the universe’s resolution, then Planck’s constant is actually decreasing with time.  As Planck’s constant continues to decrease, the energy of a photon at a given wavelength decreases, so shorter lengths can be probed before reaching a black-hole-inducing energy.  Also, as uncertainty decreases, the momentum kick delivered by a photon used to probe an object’s location would have less of an effect on the measured object.

Objections

I’ll try to address a few potential objections to this interpretation.

Implies Planck’s constant is not a constant.  The rate at which, on this interpretation, new facts increase the resolution of the universe (and decrease Planck’s constant) is slow enough that there is no reason to think any change could have been detected in the last century, although improving measurement precision may allow this prediction to be tested in the future.  If Planck's constant is decreasing with time, one way to test this hypothesis without doing further measurements might be to retrodict the number of facts/events and/or correlations/entanglements that would be necessary to bring quantum uncertainty to within the scale of Planck’s constant, and then determine whether the actual number of such events and/or entanglements in the universe is consistent with this retrodiction.  In other words, it may be the case that Planck’s constant is actually decreasing if it emerges from variations among possibilities, the number (or density) of which decreases with the happening of events.

In any event, despite some debate as to its implications, there is already strong evidence that correlation/entanglement within a system reduces its quantum uncertainty.  (See, e.g., Rigolin, 2002.)  If indeed universal entanglement correlates every object in the universe directly or indirectly to every other, it should not be surprising that increasing correlations further reduce quantum uncertainties, a hypothesis that could be tested by looking for a change in Planck’s constant.

Implies that the wave state Ψ is not the full description of a system.  An underlying assumption of our current understanding of quantum mechanics is that a system’s wave state is its complete description, and that “the momentum wave packet for a particular quantum state [is] equal to the Fourier transform of the position wave packet for the same state.”  (Griffiths, Ch. 2.)  These are assumptions that, so far, have provided excellent agreement with observation, but have also given rise to confusion and a variety of seeming paradoxes.  It may be that the current computational power of quantum mechanics is an approximation that results from the convergence of remaining possibilities after facts of the universe eliminate the vast majority.  As an analogy, one may use a very high precision thermometer to obtain the temperature of a system to many significant figures, but its temperature is not its complete description.
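
(For concreteness, the quoted assumption is the standard Fourier relation φ(p) = (2πℏ)^(-1/2) ∫ ψ(x) e^(-ipx/ℏ) dx between the position-space and momentum-space wave functions, together with the premise that every prediction about the system follows from ψ alone.)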

Treating objects classically.  My example in Fig. 1 treats objects macroscopically as they bounce off each other classically.  But that was just an example to show how facts reduce possibilities and that the remaining possibilities inherently embed evidence of those facts.  That is essentially tautological: it must be true that impacts between systems produce facts that reduce possibilities, because otherwise what would it mean that an impact occurred?  Any event must distinguish possibilities in which the event happens from those in which it doesn’t.  Rather, my point (I think!) is that the history of facts in the universe is instantiated in the form of correlations/entanglements between objects, localizes the positions and momenta of objects relative to each other, and gives rise to (or eliminates the possibility of) superpositions.

Identity.  My interpretation requires that objects have identity.  For example, if two of the facts of the universe are that object O9 impacts object O4 at time T0 and then O4 impacts O12 at time T1, then the possible locations and momenta of object O4 after time T1 (along with, of course, its correlations with O9 and O12) effectively embed the history of these facts.  This can only be true if object O4 at T0 is the same as object O4 at T1 – i.e., objects must maintain their identity.  However, as currently understood, many quantum mechanical objects don’t have identities; they are indistinguishable in principle.  For instance, if two helium nuclei (which are bosons) are exchanged in a superfluid represented by wave state Ψ, then the state (and any predictive power we possess) will remain unchanged.  How can a particular helium nucleus (and its entanglements with other objects) embed a history of facts if there’s no such thing as a “particular” helium nucleus? 

I’ll provide several responses.  First, the examples I gave were generically about objects; I did not specify that they were particles or microscopic.  They’re true of baseballs, which clearly can be treated classically.  If it turns out that protons cannot be treated classically (e.g., if protons do not maintain identity), then there may not be a fact about one particular proton impacting another particular proton.  But there may be a fact about a group of protons (for example) creating some lasting correlation in the universe, a fact that would be reflected in reducing possibilities.  Second, the objection is based on the assumption that Ψ contains all information about a system; as discussed above, this assumption may be merely a convenient approximation.  Finally, we already know that entanglement is possible between such particles; what would this mean if they didn’t have identity?  For instance, imagine two entangled photons (A and B) such that their polarizations are perfectly correlated.  If photon A is mixed up with lots of other “identical” photons, doesn’t photon A still perfectly correlate to photon B?  Don’t photons A and B (or, perhaps, the universe as a whole) still “know” they are entangled, whether or not we can distinguish photon A from others?


References

Aspect, A., Dalibard, J. and Roger, G., 1982. Experimental test of Bell's inequalities using time-varying analyzers. Physical Review Letters, 49(25), p. 1804.

Davies, P.C.W., 2007. The implications of a cosmological information bound for complexity, quantum information and the nature of physical law. In C.S. Calude (ed.), p. 69.

Elitzur, A.C. and Vaidman, L., 1993. Quantum mechanical interaction-free measurements. Foundations of Physics, 23(7), pp. 987-997.

Griffiths, R.B., 2003. Consistent quantum theory. Cambridge University Press.

Haroche, S., 1998. Entanglement, decoherence and the quantum/classical boundary. Physics Today, 51(7), pp. 36-42.

Rigolin, G., 2002. Uncertainty relations for entangled states. Foundations of Physics Letters, 15(3), pp. 293-298.



[1] Elitzur and Vaidman (1993) unintentionally give a great argument as to how quantum mechanical events can occur without an “interaction.”  Whether or not the suggested method disturbs a measured system’s internal quantum state, it undoubtedly produces facts that reduce the number of future possibilities.
[2] “The coherence vanishes as soon as a single quantum is lost to the environment.”  (Haroche, 1998.)
[3] I don’t think it matters, scientifically, whether we say that all nine combinations truly were possibilities at time T0 and future facts narrow down possibilities when the facts occur, or that eight of the nine combinations were not actually possible at T0 and future facts simply clarify past possibilities.  The predictive power of both ideas is the same.
[4] So long as Alice measures after T3 in her frame of reference but before O1 has impacted another object and Bob measures after T3 in his frame of reference but before O2 has impacted another object.


Tuesday, October 8, 2019

I don’t understand double-slit interference. Do you?

Update:
So, yes, the “Adding Probabilities” method is wrong.  As it turns out, the reason I was getting such bad distributions when using Mathematica to produce graphs for the (correct) “Adding Fields” method is that I had not properly adjusted the phase for each field point source within each of the slits.  When I do that, it produces distributions in the near- and far-fields that, I think, are consistent with what would be observed in actual experiments.
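
For anyone who wants to reproduce the “Adding Fields” calculation, here is a minimal sketch of the approach (with illustrative parameters of my own, not the ones behind the original graphs): treat each slit as a row of point sources, propagate each source to the screen with its own phase factor e^(ikr)/r, and sum the complex fields before squaring.

import numpy as np

# Minimal "Adding Fields" sketch for two slits (illustrative parameters of my own).
# Each slit is modeled as a row of point sources; the complex field at a screen
# point is the sum over sources of exp(i*k*r)/r, and the intensity is |field|^2.
wavelength = 500e-9                  # 500 nm light
k = 2 * np.pi / wavelength
slit_width = 20e-6                   # 20 micron slits
slit_separation = 100e-6             # 100 microns between slit centers
L = 1.0                              # distance from slits to screen, meters
sources_per_slit = 50

# y-coordinates of the point sources within each slit
centers = [-slit_separation / 2, slit_separation / 2]
sources = np.concatenate([
    c + np.linspace(-slit_width / 2, slit_width / 2, sources_per_slit)
    for c in centers
])

screen = np.linspace(-0.02, 0.02, 2000)          # screen positions, +/- 2 cm
field = np.zeros_like(screen, dtype=complex)
for y_s in sources:
    r = np.sqrt(L**2 + (screen - y_s)**2)        # distance from source to screen point
    field += np.exp(1j * k * r) / r              # each source contributes with its own phase
intensity = np.abs(field)**2
print(intensity[:5])                             # plot intensity against `screen` to see the fringes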

But this essay is important to me for several reasons.  First, it underscores one of the problematic assumptions I’d been making, namely the assumption that there is some reality about where, in each slit, a particle is located.  Identifying that as incorrect helped me come to what I believe is a better understanding/interpretation of QM, which I describe here, in which a superposition is indicative of a lack of a fact.  Second, writing it helped me to understand the relationship between single-slit Fraunhofer distributions and double-slit interference distributions.  Third, it makes some good points about problems in QM, and is mostly correct if you’ll ignore any nonphysical wackiness in the “Adding Fields” graphs.

I have made huge progress in understanding physics over the past couple years and wouldn’t be where I am today without the experiences of yesterday.