World's First Proof that Consciousness is Nonlocal

Welcome to my blog! I am the author of the world's FIRST paper (explained here on my YouTube channel) to appear in the academic literature...


Thursday, February 3, 2022

Does the Brain Cause Consciousness? Part 2

Is there an afterlife?  Can a computer be conscious?  In Part 1, I pointed out that the popular science answers to these questions depend on an often unstated assumption:

Assumption: The brain causes consciousness.

I am going to show in this and subsequent posts why there is very good reason to doubt this assumption, and why it’s almost certainly false.  To do that, I’m going to try to convince you of two statements [1] which, taken together, imply that the brain does not cause consciousness:

1)     A brain can be copied.  (Even if it cannot be done today due to technological limitations, there is no physical law preventing the physical state of a brain from being copied.)

2)     A person’s conscious state cannot be copied.

In today’s post, I’ll address Statement 1.  First of all, I think most people, particularly scientists, would already agree with it.  And since my goal is to convince you, the reader, if you already agree with it there’s no need to read further: just move on to the next post in this series, where I’ll address Statement 2.

Of course, no one thinks that a brain can be copied today.  But what physical law prevents copying a brain in the future?  The only physical principle I’m aware of that might prevent it is the quantum no-cloning theorem, which says that an arbitrary unknown quantum state cannot be copied.  And a brain, like all things in the universe, is presumably in a quantum state, so in that sense it can never be perfectly copied.  But that doesn’t matter as long as quantum effects are not relevant to the brain and its functions.  In other words, the only thing that would prevent a brain from being copied adequately to replicate consciousness is if consciousness depends on quantum effects. 

For example, if a conscious state depended on quantum entanglements with objects outside the brain, then there is inadequate information in the brain to specify a conscious state.  Quantum entanglement is “nonlocal,” which means that Object A can affect entangled Object B instantaneously, even if they are separated by a large distance, and the effect is not limited by the speed of light.  So if my current conscious state depends at least in part on an event in another galaxy (which we cannot detect until we receive light from the event), then consciousness is nonlocal.  This recent paper argues that consciousness is nonlocal, but I doubt many in the scientific community have taken notice.

Another way that consciousness may depend on quantum effects is if, to copy the brain, you’d have to measure the state of objects in the brain (like neurons) so precisely that the Heisenberg Uncertainty Principle kicks in and the measurement itself starts changing the brain’s physical state.  For example, Scott Aaronson suggests in this paper that if a brain is “unclonable for fundamental physical reasons,” that unclonability could be a consequence of quantum no-cloning, provided that the granularity at which a brain would need to be simulated in order to duplicate someone’s subjective identity goes all the way down to the quantum level. 

In general, though, few scientists believe that consciousness or brain function depends on quantum effects, and most who discuss the possibility are quickly dismissed as mystics or pseudoscientists.[2]  As long as consciousness does not depend on quantum effects, we don’t need to worry about quantum no-cloning, and nothing would prevent a future engineer from scanning a person’s brain and then reproducing a functional duplicate with the same conscious state.

Are you convinced yet of Statement 1, that a brain can be copied in principle?  Maybe you’re still concerned about possible quantum effects.  OK, here’s another argument.

The amount of information that can be contained in a volume of space is limited.  This is called the Bekenstein bound.  It’s a ridiculously large number but it’s still finite.  For example, the Bekenstein bound Wikipedia page calculates that the maximum information necessary to recreate a human brain, including its entire quantum state, is on the order of 10^42 bits (where a single “bit” of information is either a 0 or 1).  That’s a huge number… it looks like 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000, but it’s still much, much smaller than the number of particles in the universe.  Also, the Bekenstein bound for the brain is an upper physical limit that’s based on a brain so dense with information that it’s right on the verge of collapsing into a black hole!  I think it’s reasonable to surmise that we aren’t walking around with potential black holes in our skulls, so the actual information necessary to specify the quantum state of a brain is probably much, much, much, much smaller than 10^42 bits.  But it doesn’t actually matter.  Here’s why.
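To make the numbers concrete, here's a minimal sketch of that Bekenstein-style estimate in Python. The radius and mass are assumed values for a roughly 1.5 kg human brain (close to the figures the Wikipedia page uses), not measurements:

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

# Assumed brain-scale inputs (rough values for illustration only)
m = 1.5                  # mass of a human brain, kg
R = 0.067                # radius of a sphere with roughly a brain's volume, m
E = m * c**2             # total mass-energy, J

# Bekenstein bound on the information inside radius R with energy E:
# I <= 2*pi*R*E / (hbar * c * ln 2)   [bits]
bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
print(f"Upper bound: about 10^{math.log10(bits):.0f} bits")
```

With these inputs the bound comes out around 2.6 × 10^42 bits, matching the order of magnitude quoted above.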

Even if we can’t in practice copy a human brain, the universe should be able to.  I’m referring to a Boltzmann Brain.  Physicists currently believe that essentially any physical state can be created by randomness (i.e., accident).  So even though it’s extremely unlikely, a physicist will say that there is some chance that atoms and particles will accidentally come together somewhere in the universe to create your brain.  And even if we include quantum effects, and even if that accidental collection of atoms has to specify the 10^42 bits that could potentially be specified in the physical state of your brain, there is some nonzero probability that it will occur. 
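A quick sketch of why that probability is nonzero but preposterously small, assuming (generously) that the brain's state is pinned down by the 10^42 bits from the Bekenstein estimate and that a random trial sets each bit by a fair coin flip:

```python
import math

N_bits = 1e42  # assumed number of bits specifying the brain's quantum state

# Probability that one random configuration of N bits matches a given brain
# is 2^-N, far below the smallest positive float, so work in log10.
log10_p = -N_bits * math.log10(2)
print(f"P(random match per trial) = 10^({log10_p:.3g})")
```

That's roughly 10^(-3 × 10^41) per trial: nonzero, which is all the argument needs.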

In other words, there is no known physical law that will prevent the exact recreation of your brain elsewhere.  The universe can copy your brain, even if your brain’s function depends on quantum effects.  Therefore, a brain can be copied.  Statement 1 is true.

In my next post, I’ll address Statement 2.  As for now, do you have any questions or concerns about Statement 1?


[1] As I mentioned previously, I would ordinarily try to be more precise with my words, arguments, and proofs.  But the purpose of this and subsequent posts is to write more colloquially without alienating lay readers.  Better precision can be found, e.g., in my papers.

[2] Don’t forget that consensus does not equal truth.  There is, and perhaps always has been, a bully culture in science, which is why scientific paradigms tend to be changed only by independent mavericks.

Wednesday, February 2, 2022

Does the Brain Cause Consciousness? Part 1

I have spent so much time and effort trying (and ultimately failing) to communicate with people in the physics and philosophy academies, using their complicated and abstruse language and math equations, that I’ve made many of my insights, discoveries, and contributions completely inaccessible to the rest of the world, including my own friends and family.

My close friend Adam recently asked me some important questions, like whether computers could be conscious.  Of course, I’ve answered this question many times, and in great detail, on this blog and in my papers (particularly this and this).  But I realized that I really only addressed people who already knew the language of quantum mechanics, computer science, philosophical logic, and so forth.  So in this and subsequent posts, I’m going to try to address some important questions in direct, ordinary language without all the bullshit jargon.

Today, I want to mention two such questions:

·       Is there an afterlife?

·       Can a computer be conscious?

Ask these questions of a physicist, biologist, or computer scientist, and probably the vast majority will answer firmly and with conviction: No, there is no afterlife; Yes, a computer can be conscious.  And if you probe them further as to why they are so certain of these answers, you’ll find that there is an (often unstated) assumption that pervades the scientific community about consciousness:

Assumption: The brain causes consciousness.[1]

Is that assumption true?  If it is, then it’s not unreasonable to believe that consciousness ends when the brain dies.  Or that someday we’ll be able to copy the brain and recreate a person’s consciousness.  Or that a person’s brain could be simulated in a computer, thus producing consciousness in a computer.

But again, all these popular ideas stem from that one assumption, and there aren’t many scientists who question it (or even acknowledge it as an assumption).  So that’s where I’ll start.  Consider, again, the assumption:

Assumption: The brain causes consciousness.

Several questions for you about that assumption:

·       Do you believe it?

·       If so, why?  What evidence do you have that it is true?

·       What evidence has the scientific community offered to support it?

·       Which beliefs depend on it?  For example, anyone who believes that consciousness ends with brain death necessarily makes the above assumption.  Anyone who believes that a computer will someday be conscious by simulating a brain also makes the above assumption.  Many, many other popular science beliefs depend on this assumption.

·       What if the assumption is incorrect?  Is it possible to prove that it is false?  How might it be disproven?  If the assumption could actually be disproven, how might that impact your beliefs?  How might it impact the popular scientific beliefs about consciousness?



Next in this series: Part 2
Last in this series: Part 3


[1] Note on this post: Ordinarily, I would try to be more precise with my words.  For example, the assumption is actually that a conscious state entirely depends on the physical state of a living brain, but this is where the eyes of ordinary readers start to glaze over.  So I won’t be so precise in this and related future blog posts.

Friday, March 19, 2021

The Folly of Brain Copying: Conscious Identity vs. Physical Identity

The notion of “identity” is a recurring problem both in physics and in the nature of consciousness.  Philosophers love to discuss consciousness with brain-in-a-vat type thought experiments involving brain copying.  The typical argument goes something like this:

i)          The brain creates consciousness.

ii)         It is physically possible to copy the brain and thereby create two people having the same conscious states.

iii)        Two people having the same conscious states each identifies as the “actual” one, but at least one is incorrect.

iv)        Therefore, conscious identity (aka personal identity) is an illusion.

I spent a long time in Section II of this paper explaining why questioning the existence of conscious identity is futile and why the above logic is either invalid or inapplicable.  Yes, we have a persistent (or “transtemporal”) conscious identity; doubting that notion would unravel the very nature of scientific inquiry.  Of course, you might ask why anyone would actually doubt if conscious identity exists.  Suffice it to say that this wacky viewpoint tends to be held by those who subscribe to the equally wacky Many Worlds Interpretation (“MWI”) of quantum mechanics, which is logically inconsistent with a transtemporal conscious identity.

I showed in Section III of the above paper why special relativity prevents the existence of more than one instantiation of a physical state creating a particular conscious state.  In other words, at least one of assumptions i) and ii) above is false.  For whatever reason, the universe prohibits the duplication or repeating of consciousness-producing physical states.  In Section IV(A) of the same paper, I suggested some possible explanatory hypotheses for the mechanism(s) by which such duplications may be physically prevented, such as quantum no-cloning. 

Nevertheless, the philosopher’s argument seems irresistible... after all, why can’t we make a “perfect” copy of a brain?  If multiple instances of the same conscious state are physically impossible then what is the physical explanation for why two consciousness-producing physical states cannot be identical?  I finally realized that conscious identity implies physical identity.  In other words, if conscious identity is preserved over time, then physical identity must also be preserved over time, and this may help explain why the philosopher’s brain-copying scheme is a nonstarter.

I’d been struggling for some time with the notion of physical identity, such as in this blog post and this preprint.  The problem can be presented a couple ways:

·         According to the Standard Model of physics, the universe seems to be made up of only a handful of fundamental particles, and each of these particles is “identical” to another.  For example, any two electrons are identical, as are any two protons, or any two muons, etc.  The word “identical” is a derivative of “identity,” so it’s easy to confuse two “identical” electrons as being indistinguishable and thus having the same (or indistinct) identities.  So if all matter is made up of atoms comprising electrons, protons, and neutrons, then how can any particular clump of atoms have a different identity than another clump made of the same type of atoms?

·         Let’s assume that consciousness is created by physical matter and that physical matter is nothing but a collection of otherwise identical electrons, protons, and neutrons.  In the above paper I showed that if conscious identity exists, then conscious states cannot be copied or repeated.  And that means there is something fundamentally un-copiable about the physical state that creates a particular conscious state, which would seem odd if all matter is fundamentally identical. 

·         Consciousness includes transtemporal identity.  Assuming physicalism is true, then conscious states are created by underlying physical states, which means those physical states must have identity.  But physics tells us that physical matter comprises otherwise identical particles.

I finally realized that this problem can be solved if particles, atoms, etc., can themselves have identity.  (I do not mean conscious identity... simply that it makes sense to discuss Electron “Alice” and Electron “Bob” and keep track of them separately... that they are physically distinguishable.)  An object’s identity can be determined by several factors (e.g., position, entanglements and history of interactions, etc.) and therefore can be distinguished from another object that happens to comprise the same kind of particles.  Two physically “identical” objects can still maintain separate “identities” to the extent that they are distinguishable.  And we can distinguish (or separately identify) two objects, no matter how physically similar they may otherwise be, by their respective histories and entanglements and how those histories and entanglements affect their future states. 

Where does physical identity come from?  It is a necessary consequence of the laws of physics.  For instance, imagine we have an electron source in the center of a sphere, where the sphere’s entire surface is a detector (assume 100% efficiency) that is separated into hemispheres A and B.  The detector is designed so that if an electron is detected in hemisphere A, an alarm immediately sounds, but if it is detected in hemisphere B, a delayed alarm sounds one minute later.  The source then emits an electron, but we do not immediately hear the alarm.  What do we now know?  We know that an electron has been detected in hemisphere B and that we will hear an alarm in one minute.  Because we know this for certain, we conclude that the detected electron is the same as the emitted electron.  It has the same identity.  The following logical statement is true:

(electron emitted) ∧ (no detection in hemisphere A) → (detection in hemisphere B)

But more importantly, the fact that the above statement is true itself implies that the electron has identity.  In other words:

[(electron emitted) ∧ (no detection in hemisphere A) → (detection in hemisphere B)]

→ (the electron emitted is the electron detected in hemisphere B)

(In retrospect, I feel like this is obvious.  Of course physical identity is inherent in the laws of physics.  How could Newton measure the acceleration of a falling apple if it’s not the same apple at different moments in time?)
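The implication can even be checked mechanically. Here's a small sketch that encodes the setup (a 100% efficient spherical detector means an emitted electron is detected in exactly one hemisphere) and verifies the claim in every consistent truth assignment; the proposition names are mine:

```python
from itertools import product

# Propositions: E = electron emitted, A = detected in hemisphere A,
# B = detected in hemisphere B.

def setup_holds(E, A, B):
    # 100% efficient sphere: emission implies detection in exactly one hemisphere.
    return (not E or (A != B)) and not (A and B)

def claim(E, A, B):
    # (E and not A) -> B
    return not (E and not A) or B

# The claim holds in every world consistent with the setup.
consistent = [w for w in product([False, True], repeat=3) if setup_holds(*w)]
assert all(claim(*w) for w in consistent)
print(f"claim holds in all {len(consistent)} consistent worlds")
```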

So if electrons can have identity, then in what sense are they identical?  Can they lose their identity?  Yes.  Imagine Electron Alice and Electron Bob, each newly created by an electron source and having different positions (i.e., their distinct wave packets providing their separate identities).  The fact that they are distinguishable maintains their identity.  For example, if we measure an electron where Electron Bob cannot be found, then we know it was Electron Alice.  However, electrons, like all matter, disperse via quantum uncertainty.  So what happens if their wave functions overlap so that an electron detection can no longer distinguish them?  That’s when Bob and Alice lose their identity.  That’s when there is no fact about which electron is which.  (As a side note, Electron Bob could not have a conscious identity given that when he becomes indistinguishable from Electron Alice, even he cannot distinguish Bob from Alice.  This suggests that conscious identity cannot even arise until physical identity is transtemporally secured.)

This realization clarified my understanding of conscious identity.  My body clearly has an identity right now in at least the same sense that Electron Bob does.  What would it take to lose that physical identity?  Well, it wouldn’t be enough to make an atom-by-atom copy of the atoms in my body (call it “Andrew-copy”), because Andrew-copy would still be distinguishable from me by nature, for example, of its different location.  Rather, the wave functions of every single particle making up my body and the body of Andrew-copy would have to overlap so that we are actually indistinguishable.  But, as I showed in this paper, that kind of thing simply can’t happen with macroscopic objects in the physical universe because of the combination of slow quantum dispersion with fast decoherence.

What would it take for me to lose my conscious identity (or copy it, or get it confused with another identity, etc.)?  Given that conscious states cannot be physically copied or repeated, if conscious identity depends only on the particular arrangement of otherwise identical particles that make up matter, then we need a physical explanation for what prevents the copying of that particular arrangement.  But if conscious identity depends on not just the arrangement of those (otherwise identical) particles but also on their physical distinguishability, then the problem is solved.  Here’s why.  Two macroscopic objects, like bowling balls, will always be physically distinguishable in this universe.  Bowling Ball A will always be identifiably distinct from Bowling Ball B, whether or not any particular person can distinguish them.  So if my conscious identity depends at least in part on the physical distinguishability of the particles/atoms/objects that create my consciousness, then that fact alone would explain why conscious states (and their corresponding transtemporal identity) cannot be copied.

Let me put this another way.  Identity is about distinguishability.  It is possible for two electrons to be physically indistinguishable, such as when the wave states of two previously distinguishable electrons overlap.  However, it is not possible, in the actual universe, for a cat (or any macroscopic object) and another clump of matter to be physically indistinguishable because it is not possible for the wave states of these two macroscopic objects to overlap, no matter how physically similar they may otherwise be.  A cat’s physical identity cannot be lost by trying to make a physical copy of it.  It is not enough to somehow assemble a set of ≈10^23 atoms that are physically identical to, and in a physically identical arrangement as, the ≈10^23 atoms comprising the cat.  Each of those constituent atoms also has a history of interactions and entanglements that narrowly localize their wave functions to such an extent that overlap of those wave functions between corresponding atoms of the original cat and the copy cat is physically impossible.  (See note below on the Myth of the Gaussian.)

Imagine that someone has claimed to have made a “perfect copy” of me in order to prove that conscious identity is just an illusion.  He claims that Andrew-copy is indistinguishable from me, that no one else can tell the difference, that the copy looks and acts just like me.  Of course, I will know that he’s wrong: even if no one else can distinguish the copy from me, I can.  And that alone is enough to establish that Andrew-copy is not a perfect copy.  But now I understand that my conscious identity implies physical identity – that my ability to distinguish Andrew-copy from me also implies physical distinguishability.  There is no such thing as a perfect physical copy of me.  Even if the atoms in Andrew-copy are in some sense the same and in the same configuration as those in my body, and even if some arbitrary person cannot distinguish me from Andrew-copy, the universe can.  The atoms in Andrew-copy have a history and entanglements that are distinguishable from the atoms in my body, the net result being that the two bodies are physically distinguishable; their separate physical identities are embedded as facts in the history of the universe.

So if the universe can distinguish me from Andrew-copy, then why should it be surprising that I can distinguish myself from Andrew-copy and that I have an enduring conscious identity?  The question is not whether some evil genius can make a physical copy of my body that is indistinguishable to others.  The question is whether he can make a copy that is indistinguishable to me or the universe.  And the answer is that he can’t because making that copy violates special relativity. 

 

Note on the Myth of the Gaussian:

Physicists often approximate wave functions in the position basis as Gaussian distributions, in large part because Gaussians have useful mathematical properties, notably that the Fourier transform of a Gaussian is another Gaussian.  Because the standard deviation of a Gaussian is inversely proportional to the standard deviation of its Fourier transform, a Gaussian neatly illustrates the uncertainty principle that holds for any pair of noncommuting operators, such as position and momentum.  An important feature of a Gaussian is that it is never zero, even arbitrarily far from the mean.  This treatment of wave functions often misleads students into believing that wave functions are or must be Gaussian and that: a) an object can be found anywhere; and b) the wave states of any two arbitrary identical objects always overlap.  Neither is true. 
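The inverse-width relationship is easy to see numerically. A sketch (the grid sizes and the width sigma = 2 are arbitrary choices):

```python
import numpy as np

n, L = 4096, 100.0                       # samples and domain length (arbitrary)
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]

def width(grid, weights):
    """Standard deviation of a non-negative weight distribution on grid."""
    w = weights / weights.sum()
    mu = (grid * w).sum()
    return float(np.sqrt(((grid - mu) ** 2 * w).sum()))

sigma = 2.0
f = np.exp(-x**2 / (2 * sigma**2))       # Gaussian of width sigma

k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))   # angular-frequency grid
F = np.abs(np.fft.fftshift(np.fft.fft(f)))                 # |FT|: another Gaussian

print(width(x, f), width(k, F))          # ~2.0 and ~0.5: the widths are inverses
```

Doubling sigma halves the width of F while the product of the two widths stays fixed, which is the uncertainty relation in miniature.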

Regarding a), physics students are often given the problem of calculating the probability that their body will quantum mechanically tunnel through a wall, or even tunnel to Mars; the calculation (which is based on the simple notion of a particle of mass M tunneling through a potential barrier V) always yields an extremely tiny but nonzero probability.  But that’s wrong.  Setting aside the problem with special relativity – i.e., if I am on Earth now, I can’t be measured a moment later on Mars without exceeding the speed of light – the main problem is physical distinguishability.  The future possibilities for my body (and its physical constituents) are limited by their histories and entanglements. 
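For reference, here is the textbook calculation being criticized, sketched with assumed numbers (a 70 kg "particle," a 1 eV barrier, a 10 cm wall); the point is only how absurd the standard answer already is:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s

# Assumed textbook inputs: a whole body treated as a single particle of mass M
# tunneling through a barrier of height dE = V - E above its energy, width L.
M = 70.0                 # kg (assumed body mass)
dE = 1.602e-19           # J (assumed 1 eV barrier height above E)
L = 0.10                 # m (assumed wall thickness)

kappa = math.sqrt(2 * M * dE) / hbar      # decay constant inside the barrier
log10_T = -2 * kappa * L / math.log(10)   # log10 of transmission T ~ e^(-2*kappa*L)
print(f"T ~ 10^({log10_T:.2e})")          # tiny but (per the textbook) nonzero
```

The exponent comes out around -4 × 10^24, i.e., T ≈ 10^(-4 × 10^24): "extremely tiny but nonzero," exactly the kind of answer this post argues should actually be zero.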

While some electron may, due to quantum dispersion or being trapped in a potential well, develop a relatively wide quantum wave packet over time whose width “leaks” to the other side of the wall/potential barrier, this requires that the electron remain unmeasured (i.e., with no new correlations) during that time period.  But the particles and atoms in a human body are constantly “measuring each other” through decoherence so that their individual wave packets remain extremely tightly localized.  In other words, my body doesn’t get quantum mechanically “fuzzy” or “blurry” over time.  Thus none of the wave packets of the objects comprising my body get big enough to leak through (or even to) the wall.  More to the point, the QM “blurriness” of my body is significantly less than anything that can be seen... I haven’t done the calculation, but the maximum width of any wave packet (not the tails of a Gaussian, which extend to infinity, but the packet’s actual maximum extent) is much, much, much smaller than the wavelength of light. 

As I showed above, physical distinguishability is an inherent feature of the physical world.  An object that appeared on the other side of the wall that happened to look like my body would be physically distinguishable from my body and cannot be the same.  That is, there is no sense in which the body that I identify as mine could quantum mechanically tunnel to Mars or through a wall – that is, there is ZERO probability of me tunneling to Mars or through a wall.  If I have just been measured in location A (which is constantly happening thanks to constant decohering interactions among the universe and the objects comprising my body), then tunneling to location B requires an expansion of the wave packets of those objects to include location B – i.e., my tunneling to B requires a location superposition in which B is a possibility.  But past facts, including the fact that I am on Earth (or this side of the wall) right now have eliminated all configurations in which my body is on Mars (or on the other side of the wall) a moment later.

Monday, February 22, 2021

Does Consciousness Cause Collapse of the Quantum Mechanical Wave Function?

No.

First, at this point I am reasonably confident that collapse actually happens.  Either it does or it doesn’t, and non-collapse interpretations of QM are those that have unfounded faith that quantum wave states always evolve unitarily.  As I argued in this paper, that assumption is a logically invalid inference.  So given that we don’t observe quantum superpositions in the macroscopic world, I’d wager very heavily on the conclusion that collapse actually happens.

But what causes it?  Since we can’t consciously observe a (collapsed) quantum mechanical outcome without being conscious – duh! – many have argued that conscious observation actually causes collapse.  (Others have argued that consciousness and collapse are related in different ways, such as collapse causing consciousness.)  In this blog post, I discussed the consciousness-causes-collapse hypothesis (“CCCH”) in quantum mechanics.  I pointed out that even though I didn’t think CCCH was correct, it had not yet been falsified, despite an awful paper that claimed to have falsified it (which I refuted in this paper).

Two things have happened since then.  First, I showed in this paper that the relativity of quantum superpositions is inconsistent with the preparation of macroscopic quantum superpositions, which itself implies that CCCH is false. 

Second, this paper was published a few days ago.  Essentially, it’s a Wigner’s-Friend-esque thought experiment in which a poison-containing vial breaks or does not break at 12pm, per a QM outcome, but the person in the room will be unconscious until 1pm.  That’s it.  If CCCH is correct, then collapse of the wave function will not occur until the person is conscious at 1pm... but if he is conscious at 1pm, how could the wave state possibly collapse to an outcome in which the person dies at noon?  It’s a very simple logical argument (even though it is not explained well in the paper) that is probably valid, given some basic assumptions about CCCH.

So when does collapse actually occur?  I’ve been arguing that it happens as soon as an event or new fact (i.e., new information) eliminates possibilities, and the essentially universal entanglement of stuff in the universe (due to transitivity of correlation) makes it so that macroscopically distinct possibilities are eliminated very, very quickly.  For example, you might have a large molecule in a superposition of two macroscopically distinct position eigenstates, but almost immediately one of those possible states gets eliminated by some decoherence event, in which new information is produced in the universe that actualizes the molecule’s location in one of those position eigenstates.  That is the actual collapse, and it happens long before any quantum superposition could get amplified to a macroscopic superposition.

Friday, January 22, 2021

Do Conscious States Depend on History?

I’ve had a few additional thoughts further to my recent post on counting conscious states, particularly on the extent to which a given conscious state is history-dependent (i.e., depends on its history of prior conscious states) and whether a particular conscious state can be created de novo (i.e., from scratch, without the person experiencing that state having actually experienced previous conscious states).

Imagine that a person has actually experienced a particular series of conscious states (which of course depend, at least in part, on the stimuli sensed).  For the sake of simplicity, I’ll just assume that there’s a conscious state for each stimulus “frame,” and for ≈10 distinct frames/second, there are about 300 million stimulus frames per year.  I’m 43 now, and not sure whether we should start counting conscious frames from birth or sometime later, but let’s say that I’m just about to experience my ten billionth conscious state.  In my last post, I gave a (very, very) rough estimate for the minimum number of information bits necessary to specify such a state.  That number may be large – say, on the order of a trillion bits – but it’s not ridiculous and is less information capacity than many people have on their mobile phones.  Whatever that number happens to be – that is, the minimum number (B) of bits necessary to specify a particular conscious state – the point is this: By assumption, the instantiation of those B bits in the configuration necessary to create conscious state C1 will indeed create that state C1.  (For the following argument, it doesn’t matter whether the mere existence/instantiation of that particular configuration of bits is adequate, or whether that configuration must be executed on some general-purpose computer/machine.)
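The frame arithmetic above is easy to reproduce; the 10 frames/second figure is this post's assumption:

```python
frames_per_second = 10                    # assumed ~10 distinct "frames" per second
seconds_per_year = 365.25 * 24 * 3600     # ~3.16e7

frames_per_year = frames_per_second * seconds_per_year
total_at_43 = 43 * frames_per_year

print(f"{frames_per_year:.2e} frames/year")   # ~3.2e8: about 300 million
print(f"{total_at_43:.2e} frames by age 43")  # ~1.4e10: on the order of ten billion
```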

In other words, by assumption, some conscious state C1 is sufficiently encoded by some series of B bits that may look like: 0011010100110111110... (trillions of bits later)... 10001001111100011010.  There may be a lot of bits, but the idea is that if physicalism is true and the information content of any given volume is finite, then any particular conscious state must be encoded by some string of bits.  If this seems odd to you, it’s definitely the majority opinion among physicists and computer scientists who actually think about this kind of stuff.  For example, Scott Aaronson characterizes the situation this way:

“Look—I don’t know if any of you are like me, and have ever gotten depressed by reflecting that all of your life experiences, all your joys and sorrows and loves and losses, every itch and flick of your finger, could in principle be encoded by a huge but finite string of bits, and therefore by a single positive integer. (Really? No one else gets depressed about that?)”
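Aaronson's "huge but finite string of bits, and therefore... a single positive integer" is just base-2 positional encoding. A minimal sketch, using the short bit fragment from above in place of the trillion-bit string:

```python
# A toy "conscious state": the short bit fragment from the post, standing in
# for a string with trillions of bits.
bits = "0011010100110111110"

# Any finite bit string maps to a single non-negative integer...
n = int(bits, 2)
print(n)

# ...and back again, losslessly, if we remember the original length.
assert format(n, f"0{len(bits)}b") == bits
```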

For the record, I don’t get depressed about that because I don’t believe it’s true, although I’m still trying to formulate my reasoning for why.  OK, so let’s assume that I have in fact experienced ten billion conscious states.  The state I am currently experiencing is C10,000,000,000 (let’s call it CT), and a tenth of a second ago I experienced conscious state C9,999,999,999 (let’s call it CT-1), and a tenth of a second before that I experienced C9,999,999,998 (let’s call it CT-2), and so on back.  Again, by assumption, each of these states is encoded by a particular string of bits.  So here’s my question: is it possible to just recreate state CT  de novo without, in the process, also producing state CT-1?

Here’s another way of phrasing the question.  Is the person who is experiencing conscious state CT someone who actually experienced CT-1 (and CT-2 and so on back), or someone who just thinks/believes that he experienced CT-1?  Is there a way to produce someone in state CT without first producing someone in state CT-1?  I don’t think so; I think that state CT is history dependent and literally cannot be experienced unless and until preceding state CT-1 is experienced.  After all, if conscious states are indeed history independent, then the experience of CT is precisely the same no matter what precedes it, and that could lead to some odd situations.  Imagine this series of conscious experiences:

Series #1

C1000: sees alligator in the distance

C1008: gets chomped by alligator

C1045: puts tourniquet on chomped arm

C2000: eats own birthday cake

C3000: rides on small plane to experience skydiving

C3090: jumps out of airplane to experience thrilling freefall

C3114: pulls rip cord of parachute

C3205: lands safely on ground

 

If conscious states are history independent, then the person’s experience at C3205 is precisely the same even if the physical evolution of the world actually caused the following ordering of conscious states:

Series #2

C1045: puts tourniquet on chomped arm

C3000: rides on small plane to experience skydiving

C1008: gets chomped by alligator

C3090: jumps out of airplane to experience thrilling freefall

C2000: eats own birthday cake

C3114: pulls rip cord of parachute

C1000: sees alligator in the distance

C3205: lands safely on ground

 

I can’t see how the above series would make any sense, but more importantly I can’t see how, even if it did make sense, the experience of conscious state C3205 could possibly be the same in both cases.  If I’m right, it’s because conscious states are history dependent and state C3205 actually cannot be experienced immediately after C1000.

I’m not sure where I’m going with this.  If conscious states are history dependent (which is what I’ve suspected all along, as in this paper) then lots of interesting implications follow, such as that conscious states cannot be copied, consciousness is not algorithmic, etc.  (I believe I've already independently shown these implications in this paper.)  The above analysis certainly suggests history dependence but is not a proof.  Maybe the way to prove it is by first assuming that conscious states are independent of history – in which case conscious state C3205, for example, can be created de novo without first creating conscious state C3204 (which can be created de novo without first creating conscious state C3203, etc.) – and then see whether that assumption conflicts with observations and facts about the world.

Remember that, by assumption, state C3205 is just instantiated by a (very long but finite) string of bits, say 0011010100110111110...  So imagine that we start with a long series of on-off switches, all initially switched off.  We turn some of them on until eventually we have instantiated the correct series (0011010100110111110...), which encodes state C3205.  But the order in which we flip those switches does not (and cannot) matter.  I have to think more about the mathematics, but I suspect that in guaranteeing that C3205 is independent of history – so that it and every preceding conscious state can be instantiated independently of its own history – we will end up needing far, far more bits than implied by my original estimate of N^T states.  I suspect that even the most conservative estimate will show that if conscious states are history independent, then consciousness requires far more information storage than is currently believed to reside in the brain. 
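The switch-flipping point can be made concrete with a toy sketch (my own construction, at toy size; the text imagines trillions of switches): the final configuration depends only on which switches end up on, not on the order in which we flipped them.

```python
import random

# Toy illustration: the final on/off configuration of a row of switches is
# independent of the order in which the "on" switches were flipped.
target_on = {1, 3, 5, 8, 9}          # indices of switches encoding the state
n_switches = 10

def instantiate(flip_order):
    switches = [0] * n_switches
    for i in flip_order:
        switches[i] = 1              # flip switch i on
    return tuple(switches)

# Flip the same set of switches in five different random orders.
orders = [list(target_on) for _ in range(5)]
for o in orders:
    random.shuffle(o)

configs = {instantiate(o) for o in orders}
print(len(configs))   # → 1: every flip order yields the same configuration
```

This is exactly why a bare bit string carries no record of the history by which it was instantiated.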

Then again, I really don’t know.  This is still just the initial seed of a thought. 

Sunday, January 17, 2021

Counting Conscious States

Information is physical, which means there is a limit to the amount of information that can fit in a given volume.  Try to cram more information into that volume and it will literally collapse into a black hole.  That limit is called the Bekenstein bound, and it is truly massive.  For instance, the total information that could be contained in a volume the size of the human brain is around 10^42 bits, which means that the total number of possible brain states is around 2^(10^42).  The total informational capacity of the entire visible universe allows on the order of 2^(10^120) states.
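The ~10^42-bit figure can be checked with a back-of-the-envelope calculation from the Bekenstein bound, I ≤ 2πRE/(ħc ln 2) with E = Mc².  The radius and mass below are my own rough assumptions for a human brain:

```python
import math

# Bekenstein bound for a brain-sized, brain-massed system:
#   I <= 2*pi*R*E / (hbar * c * ln 2),  with E = M*c^2
hbar = 1.0546e-34      # J*s, reduced Planck constant
c = 2.998e8            # m/s, speed of light
R = 0.08               # m, rough brain radius (assumed)
M = 1.4                # kg, rough brain mass (assumed)

# One factor of c cancels: 2*pi*R*(M*c^2) / (hbar*c*ln2) = 2*pi*R*M*c / (hbar*ln2)
bits = 2 * math.pi * R * M * c / (hbar * math.log(2))
print(f"~10^{math.log10(bits):.0f} bits")
```

The result lands on the order of 10^42 bits, consistent with the figure quoted above.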

Why does this matter?  Physicalism (as contrasted with dualism) says that conscious states are produced by physical states; if a first conscious state is distinct from a second conscious state, then they must be produced by different physical states.  All of my papers (and most or all of my blog posts) so far have assumed physicalism is true, in part because anyone who doubts physicalism is usually condescendingly dismissed, ignored, or scoffed at by the scientific community, and in part because I don’t see why the Creator of the already ridiculously complicated universe would have omitted a physical explanation/mechanism for consciousness.  In other words, unless there is a reason to believe that consciousness does not entirely depend on underlying physical state, I see no need, for now, to reject physicalism.  Nevertheless, physicalism would be falsified if one could show that the number of distinct conscious states exceeded the number of physical states, because that would require that a single physical state could produce more than one distinct conscious state.

One avenue for evaluating physicalism, then, is to literally count distinct conscious states.  For example, if one could show that the total number of possible distinct conscious states experienceable by a particular person exceeded 2^(10^42), then that would prove that consciousness cannot depend (entirely) on the brain; if one could show that the number of possible distinct conscious states exceeded 2^(10^120), then that would literally falsify physicalism. 

A few years ago, Doug Porpora wrote a fascinating paper that attempted to prove that the total number of distinct conscious states is actually infinite.  One of his arguments, for instance, is that if we assume that there are some natural numbers that we cannot think about, then there must be a maximum number (call it Max) that we can think about and a minimum number (call it Min) that we cannot think about.  But if we can think about Max, certainly we can think about Max+1 or Max^2, which means that Max is not the maximum number we can think about and the original assumption (that there are some natural numbers that we cannot think about) is false.  A related argument is that by identifying the minimum number that we cannot think about (and even naming it Min), we are thinking about Min, which means that Min is not in the set of numbers that we cannot think about!  Again, the original assumption is false.  There is more to the argument than this but it gives you the general flavor of its proof-by-contradiction strategy.  One commenter has attempted to refute Porpora’s argument in this paper, and Porpora may be working on a reply. 

This got me thinking again about the importance of counting distinct conscious states, which very few people seem to have attempted.  Of course, if Porpora’s logical argument is correct, then physicalism is false, because even though 2^(10^120) is a ridiculously and incomprehensibly large number, it is still trumped by ∞.  But we should also realize that both of the quantities we are considering are extremes.  Infinity is extreme, of course, but so is the Bekenstein bound.

Let’s take a more realistic approach.  There are something like 100 billion neurons in the human brain.  If each neuron acted like a digital bit, then the total number of distinct brain states would be 2^(100 billion).  Of course, neurons are actually complex cells with very complicated connections to each other, and I don’t think any neuroscientist seriously regards them as acting in any way like digital bits.  However, I do think it is interesting to ask whether the number of distinct conscious states exceeds 2^(100 billion).  If there were a way to answer that question – by somehow counting conscious states – then it would do a couple things:

·         Assuming physicalism is true, discovering that the number of distinct conscious states exceeds 2^(100 billion) would confirm that the brain is not a digital computer with neurons acting as digital bits.

·         It would provide a methodology for counting conscious states that may provide further insights about the physical nature of consciousness.

On that note, let me suggest such a method.  First, let me start with the notion of one stimulus “frame,” which is the particular collection of physical stimuli that one might detect through the five senses at any given moment.  Let’s assume that there are N consciously distinct (frames of) stimuli.  What I mean by that is that there are N different combinations of stimuli from the person’s senses that the person would be able to distinguish.  Consider these different sets of stimuli:

·         Watching a sunset while hearing crashing waves while tasting white wine while smelling the salty ocean while feeling sand under one’s feet;

·         Watching a sunset while hearing crashing waves while tasting red wine while smelling the salty ocean while feeling sand under one’s feet;

·         Watching a sunset while hearing seagulls while tasting white wine while smelling the salty ocean while feeling sand under one’s feet.

If we actually took the time to list them, we could certainly produce a very, very long list of consciously distinct stimuli.  Some of them might differ very subtly, such as two stimuli that are identical except for the temperature of the sand differing by one degree, or a slight difference in sound frequency distribution from the seagulls, or a slight but perceptible difference in the cloud distribution above the sunset.

What matters, in enumerating consciously distinct stimuli, is whether a person could distinguish them, not whether he actually does.  If he could distinguish two stimuli, either by consciously noticing the difference or simply having a (slightly) different conscious experience based on the difference, then that difference must be reflected in the underlying physical state.

So how many such distinct stimuli are there?  Lots.  One could certainly distinguish millions of different visual stimuli, many thousands of different sounds and tactile sensations, and at least hundreds of different tastes and smells.  This is a ridiculously conservative claim, of course; there are professional chefs, for example, who can probably differentiate millions of different tastes and smells.  On this very conservative basis, there are probably far, far more than 10^18 (around 2^60) distinct stimuli for any given person.  If there were only 10^18 distinct conscious states or experiences, then in principle it would only require about 60 bits to specify any one of them. 
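The “60 bits” figure is just the base-2 logarithm of the number of distinguishable frames:

```python
import math

# If there are ~10^18 consciously distinguishable stimulus frames,
# specifying any one of them takes about log2(10^18) bits.
n_stimuli = 10**18
bits_per_frame = math.log2(n_stimuli)
print(round(bits_per_frame))   # → 60
```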

However, history matters.  Conscious experience does not depend just on one’s stimuli in the moment, but also on prior stimuli (as well as prior conscious experience).  To specify a person’s conscious experience, it is not enough to specify his current stimuli, as his experience will also depend on past stimuli.  For example, imagine the different conscious experiences at time t1:

Case A – No significant change from t0 to t1:

t0: Watching a sunset while hearing crashing waves while tasting red wine while smelling the salty ocean while feeling sand under one’s feet.

t1: Watching a sunset while hearing crashing waves while tasting red wine while smelling the salty ocean while feeling sand under one’s feet.

Case B – Significant change from t0 to t1:

t0: Watching a sunset while hearing crashing waves while tasting white wine while smelling the salty ocean while feeling sand under one’s feet.

t1: Watching a sunset while hearing crashing waves while tasting red wine while smelling the salty ocean while feeling sand under one’s feet.

The stimulus at t1 is the same in both cases, but the conscious experience would clearly be different.  In Case A, the person may simply be enjoying the surroundings, while in Case B, he may be confused/surprised that his wine has suddenly changed flavor and color.

What that means is even if the information necessary to specify the particular stimulus at time t1 is 60 bits, that information is not sufficient to specify the person’s conscious experience at that time.  In other words, history matters, and instead of just counting the number of possible distinct stimuli, we need to consider their order in time. 
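A toy model (my own construction, purely for illustration) makes the point: suppose a “conscious state” depends on the current stimulus and the previous one.  Then the t1 stimulus is identical in Case A and Case B, yet the resulting states differ.

```python
# Hypothetical encoding of a history-dependent conscious state as the
# ordered pair (previous stimulus, current stimulus), not just the
# current frame.
def conscious_state(prev_stimulus, cur_stimulus):
    return (prev_stimulus, cur_stimulus)

case_a = conscious_state("red wine", "red wine")      # no change t0 -> t1
case_b = conscious_state("white wine", "red wine")    # wine changed t0 -> t1

print(case_a == case_b)   # → False: same t1 stimulus, different experience
```

Any encoding that ignored the previous frame would be forced to assign Case A and Case B the same state, contradicting the obvious difference in experience.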

So, for N consciously distinct stimuli, let’s assume that one’s conscious experience/state at a given time is sensitive to (i.e., depends on) the time-ordering of M of these stimuli.  The total number of possible states, then, is just the permutation N!/(N-M)!, but assuming that N>>M, this total number of states ≈ N^M.  
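The approximation N!/(N-M)! ≈ N^M for N >> M is easy to verify numerically at a small (assumed, toy) scale; the text’s N is astronomically larger, which only makes the approximation better:

```python
import math

# Check that N!/(N-M)! ~ N^M when N >> M, for a small concrete case.
N, M = 10**6, 3
exact = math.perm(N, M)        # N!/(N-M)! = N*(N-1)*(N-2)
approx = N**M
print(f"relative error: {1 - exact/approx:.2e}")   # tiny, on the order of M^2/(2N)
```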

So in the above example, the number of possible physical states necessary to allow the person to consciously distinguish Case A from Case B is not N, but N^2.  If N requires, say, 60 bits of information, then at least 120 bits are required to specify his conscious state at time t1.  But of course the situation is far worse.  We can imagine a series of ten consecutive stimuli, ending at time t9, which the person would consciously experience in a manner that depended on all ten stimuli and their order.  It makes no difference whether the person actually remembers the particular stimuli or their order of progression.  As long as he has a conscious experience at t9 that is in some (even minuscule) manner dependent on the particular stimuli and their order, then that conscious state is one of at least N^10 states, requiring at least 600 bits to specify.

Now note that his experience at t9 is a unique one of at least N^10 states, just as his experience at later time t19 is a unique one of at least N^10 states, and so forth until time t99.  But if his conscious experience at time t99 is sensitive to the ordering of his conscious experiences at t9, t19, t29, etc., then the conscious state at t99 is one of at least N^100 states, requiring at least 6000 bits to specify.  Once again, this analysis has nothing to do with whether the person remembers any specifics about his prior stimuli or experiences; all that matters is that his conscious experience at t99 depends to some degree on the ordering of experiences at t9, t19, etc., and that his experience at t9 depends to some degree on the ordering of stimuli at t0, t1, etc.

It’s easy to show, then, that the total number of possible conscious states is N^T, where T is the total number of individual “frames” of stimulus that one experiences over his life.  How many is that?  Well, 100 years is about 3 billion seconds, and we certainly experience more than one “frame” of stimulus per second.  (Otherwise, TVs would not need a refresh rate of around 30 frames/second.)  So, for 10 frames/second, we might estimate the total number of possible conscious states at about N^(30 billion).  If N is 2^60, then the total number of conscious states is 2^(1.8 trillion), requiring at least 1.8 trillion bits to specify.
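Putting the numbers from the estimate together (100 years at 10 frames per second, each frame one of N = 2^60 distinguishable stimuli, hence N^T possible histories and T·log2(N) bits to specify one):

```python
# Headline estimate from the text: T stimulus frames over ~100 years at
# 10 frames/second, 60 bits per frame, gives T * 60 bits total.
T = 3_000_000_000 * 10          # ~100 years in seconds, times 10 frames/second
bits = T * 60                   # log2(N) = 60 bits per frame
print(f"{bits:.1e} bits")       # 1.8e12, i.e. ~1.8 trillion bits
```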

I find it fascinating how close this is to the number of neurons (100 billion) in the human brain.  For extremely rough back-of-the-envelope calculations like this, an order or two of magnitude is certainly “close.”  The storage capacity of the human brain has been estimated somewhere in the tens to thousands of terabytes, and once again the above rough estimate is within a couple of orders of magnitude of this amount.

What this tells me is that this method of counting distinct conscious states is viable and potentially useful and valuable.  By getting better estimates for the number of stimuli that a person can distinguish, for example, we might find that the rough estimate above (≈ trillion bits) is far too high or far too low, which could then provide insights on our understanding of the brain as: a computer; a digital computer; a digital computer with neurons acting as bits; and the independent source of consciousness.  Of course, such an analysis will never get us anywhere near the Bekenstein bound or infinity, as addressed by Porpora’s paper, but I still think we can learn interesting and important things about the physical nature of consciousness by counting distinct conscious states.

Finally, I think the above analysis hints at something fundamental: that consciousness is history-dependent.  This is something I discuss at length in my paper on the Unique History Theorem, but the above arguments suggest a similar conclusion by a very different analysis.  If one’s conscious experience at time t99 depends to some degree on his experience at t98, which in turn depends on his experience at t97, and so on back, then it may not be possible to produce a person de novo in a particular conscious state C1 who has not already experienced the particular sequence of conscious states on which state C1 depends.

In any event, I think it makes sense to seriously consider and estimate the number of potentially distinct conscious states, taking into account a human’s sensitivity to different stimuli and the extent to which ordering of stimuli affect conscious states.  I think this approach could yield potentially fascinating knowledge and implications about the brain and the physical nature of consciousness.

Tuesday, June 2, 2020

Consciousness, Quantum Mechanics, and Pseudoscience

The study of consciousness is not currently “fashionable” in the physics community, and the notion that there might be any relationship between consciousness and quantum mechanics and/or relativity truly infuriates some physicists.  For instance, the hypothesis that consciousness causes collapse (“CCC”) of the quantum mechanical wave function is now considered fringy by many; a physicist who seriously considers it (or even mentions it without a deprecatory scowl) risks professional expulsion and even branding as a quack.

In 2011, two researchers took an unprovoked stab at the CCC hypothesis in this paper.  There is a fascinating experiment called the “delayed choice quantum eraser,” in which information appears to be erased from the universe after a quantum interference experiment has been performed.  The details don’t matter.  The point is that the researchers interpret the quantum eraser experiment as providing an empirical falsification of the CCC hypothesis.  They don’t hide their disdain for the suggestion that QM and consciousness may have a relationship.

The problem is: their paper is pseudoscientific shit.  They first make a massive logical mistake that, despite the authors’ contempt for philosophy, would have been avoided had they taken a philosophy class in logic.  They follow up that mistake with an even bigger blunder in their understanding of the foundations of quantum mechanics.  Essentially, they assert that the failure of a wave function to collapse always results in a visible interference pattern, which is just patently false.  They clearly fail to falsify the CCC hypothesis.  (For the record, I think the CCC hypothesis is likely false, but I am reasonably certain that it has not yet been falsified.)

Sure, there’s lots of pseudoscience out there, so why am I picking on this particular paper?  Because it was published in Annalen der Physik, the same journal in which Einstein published his groundbreaking papers on special relativity and the photoelectric effect (among others), and because it’s been cited by more than two dozen publications so far (often to attack the CCC hypothesis), only one of which actually refutes it.

What’s even more irritating is that the paper’s glaring errors could easily have been caught by a competent journal referee who had read the paper skeptically.  If the paper’s conclusion had been in support of the CCC hypothesis, you can bet that it would have been meticulously and critically analyzed before publication, assuming it was considered for publication at all.  But when referees already agree with a paper’s conclusion, they may be less interested in the logical steps taken to arrive at that conclusion.  A paper that comes to the correct conclusion via incorrect reasoning is still incorrect.  A scientist that rejects correct reasoning because it results in an unfashionable or unpopular conclusion is not a scientist.

Here is a preprint of my rebuttal to their paper.  Since it is intended to be a scholarly article, I am much nicer there than I’ve been here.