World's First Proof that Consciousness is Nonlocal


Monday, February 22, 2021

Does Consciousness Cause Collapse of the Quantum Mechanical Wave Function?

No.

First, at this point I am reasonably confident that collapse actually happens.  Either it does or it doesn’t, and non-collapse interpretations of QM rest on an unfounded faith that quantum wave states always evolve unitarily.  As I argued in this paper, that assumption is a logically invalid inference.  So, given that we don’t observe quantum superpositions in the macroscopic world, I’d wager very heavily on the conclusion that collapse actually happens.

But what causes it?  Since we can’t consciously observe a (collapsed) quantum mechanical outcome without being conscious – duh! – many have argued that conscious observation actually causes collapse.  (Others have argued that consciousness and collapse are related in different ways, such as collapse causing consciousness.)  In this blog post, I discussed the consciousness-causes-collapse hypothesis (“CCCH”) in quantum mechanics.  I pointed out that even though I didn’t think CCCH was correct, it had not yet been falsified, despite an awful paper that claimed to have falsified it (which I refuted in this paper).

Two things have happened since then.  First, I showed in this paper that the relativity of quantum superpositions is inconsistent with the preparation of macroscopic quantum superpositions, which itself implies that CCCH is false. 

Second, this paper was published a few days ago.  Essentially, it’s a Wigner’s-Friend-esque thought experiment in which a poison-containing vial either breaks or does not break at noon, depending on a QM outcome, but the person in the room will be unconscious until 1pm.  That’s it.  If CCCH is correct, then collapse of the wave function will not occur until the person is conscious at 1pm... but if he is conscious at 1pm, how could the wave state possibly collapse to an outcome in which the person dies at noon?  It’s a very simple logical argument (even though it is not explained well in the paper) that is probably valid, given some basic assumptions about CCCH.

So when does collapse actually occur?  I’ve been arguing that it happens as soon as an event or new fact (i.e., new information) eliminates possibilities, and the essentially universal entanglement of stuff in the universe (due to transitivity of correlation) makes it so that macroscopically distinct possibilities are eliminated very, very quickly.  For example, you might have a large molecule in a superposition of two macroscopically distinct position eigenstates, but almost immediately one of those possible states gets eliminated by some decoherence event, in which new information is produced in the universe that actualizes the molecule’s location in one of those position eigenstates.  That is the actual collapse, and it happens long before any quantum superposition could get amplified to a macroscopic superposition.

Friday, January 22, 2021

Do Conscious States Depend on History?

I’ve had a few additional thoughts further to my recent post on counting conscious states, particularly on the extent to which a given conscious state is history-dependent (i.e., depends on its history of prior conscious states) and whether a particular conscious state can be created de novo (i.e., from scratch, without the person experiencing that state having actually experienced previous conscious states).

Imagine that a person has actually experienced a particular series of conscious states (which of course depend, at least in part, on the stimuli sensed).  For the sake of simplicity, I’ll just assume that there’s a conscious state for each stimulus “frame,” and for ≈10 distinct frames/second, there are about 300 million stimulus frames per year.  I’m 43 now, and not sure whether we should start counting conscious frames from birth or sometime later, but let’s say that I’m just about to experience my ten billionth conscious state.  In my last post, I gave a (very, very) rough estimate for the minimum number of information bits necessary to specify such a state.  That number may be large – say, on the order of a trillion bits – but it’s not ridiculous and is less information capacity than many people have on their mobile phones.  Whatever that number happens to be – that is, the minimum number (B) of bits necessary to specify a particular conscious state – the point is this: By assumption, the instantiation of those B bits in the configuration necessary to create conscious state C1 will indeed create that state C1.  (For the following argument, it doesn’t matter whether the mere existence/instantiation of that particular configuration of bits is adequate, or whether that configuration must be executed on some general-purpose computer/machine.)
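As a quick back-of-the-envelope check on those frame counts, here is a minimal sketch assuming exactly 10 frames per second, counted around the clock for 43 years (the assumptions are mine, purely for illustration):

```python
frames_per_second = 10
seconds_per_year = 365.25 * 24 * 3600             # ~3.16e7 seconds

frames_per_year = frames_per_second * seconds_per_year
print(f"frames per year ≈ {frames_per_year:.1e}")          # ~3.2e8, i.e. ~300 million
print(f"frames in 43 years ≈ {43 * frames_per_year:.1e}")  # ~1.4e10, roughly ten billion
```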

In other words, by assumption, some conscious state C1 is sufficiently encoded by some series of B bits that may look like: 0011010100110111110... (trillions of bits later)... 10001001111100011010.  There may be a lot of bits, but the idea is that if physicalism is true and the information content of any given volume is finite, then any particular conscious state must be encoded by some string of bits.  If this seems odd to you, know that it is nevertheless the majority opinion among physicists and computer scientists who actually think about this kind of stuff.  For example, Scott Aaronson characterizes the situation this way:

“Look—I don’t know if any of you are like me, and have ever gotten depressed by reflecting that all of your life experiences, all your joys and sorrows and loves and losses, every itch and flick of your finger, could in principle be encoded by a huge but finite string of bits, and therefore by a single positive integer. (Really? No one else gets depressed about that?)”

For the record, I don’t get depressed about that because I don’t believe it’s true, although I’m still trying to formulate my reasoning for why.  OK, so let’s assume that I have in fact experienced ten billion conscious states.  The state I am currently experiencing is C_10,000,000,000 (let’s call it C_T), and a tenth of a second ago I experienced conscious state C_9,999,999,999 (let’s call it C_(T-1)), and a tenth of a second before that I experienced C_9,999,999,998 (let’s call it C_(T-2)), and so on back.  Again, by assumption, each of these states is encoded by a particular string of bits.  So here’s my question: is it possible to just recreate state C_T de novo without, in the process, also producing state C_(T-1)?

Here’s another way of phrasing the question.  Is the person who is experiencing conscious state C_T someone who actually experienced C_(T-1) (and C_(T-2) and so on back), or someone who just thinks/believes that he experienced C_(T-1)?  Is there a way to produce someone in state C_T without first producing someone in state C_(T-1)?  I don’t think so; I think that state C_T is history dependent and literally cannot be experienced unless and until preceding state C_(T-1) is experienced.  After all, if conscious states are indeed history independent, then the experience of C_T is precisely the same no matter what precedes it, and that could lead to some odd situations.  Imagine this series of conscious experiences:

Series #1

C1000: sees alligator in the distance

C1008: gets chomped by alligator

C1045: puts tourniquet on chomped arm

C2000: eats own birthday cake

C3000: rides on small plane to experience skydiving

C3090: jumps out of airplane to experience thrilling freefall

C3114: pulls rip cord of parachute

C3205: lands safely on ground

 

If conscious states are history independent, then the person’s experience at C3205 is precisely the same even if the physical evolution of the world actually caused the following ordering of conscious states:

Series #2

C1045: puts tourniquet on chomped arm

C3000: rides on small plane to experience skydiving

C1008: gets chomped by alligator

C3090: jumps out of airplane to experience thrilling freefall

C2000: eats own birthday cake

C3114: pulls rip cord of parachute

C1000: sees alligator in the distance

C3205: lands safely on ground

 

I can’t see how the above series would make any sense, but more importantly I can’t see how, even if it did make sense, the experience of conscious state C3205 could possibly be the same in both cases.  If I’m right, it’s because conscious states are history dependent and state C3205 actually cannot be experienced immediately after C1000.

I’m not sure where I’m going with this.  If conscious states are history dependent (which is what I’ve suspected all along, as in this paper), then lots of interesting implications follow, such as that conscious states cannot be copied, consciousness is not algorithmic, etc.  (I believe I've already independently shown these implications in this paper.)  The above analysis certainly suggests history dependence but is not a proof.  Maybe the way to prove it is by first assuming that conscious states are independent of history – in which case conscious state C3205, for example, can be created de novo without first creating conscious state C3204 (which can be created de novo without first creating conscious state C3203, etc.) – and then seeing whether that assumption conflicts with observations and facts about the world.

Remember that, by assumption, state C3205 is just instantiated by a (very long but finite) string of bits, say 0011010100110111110...  So imagine that we start with a long series of on-off switches, all initially switched off.  We turn some of them on until eventually we have instantiated the correct series (0011010100110111110...), which encodes state C3205.  But it does not (and cannot) matter the order in which we flip those switches.  I have to think more about the mathematics, but I suspect that in guaranteeing that C3205 is independent of history, so that it and every preceding conscious state can be instantiated independently of its own history, we will end up needing far, far more bits than my original estimate of N^T states (roughly T·log2(N) bits).  I suspect that even the most conservative estimate will show that if conscious states are history independent, then consciousness will require far more information storage than is currently believed to reside in the brain. 
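As a toy illustration of that order-independence, here is a minimal sketch using a made-up 16-bit target instead of the trillions of bits discussed above:

```python
import random

target = "0011010100110111"          # hypothetical (tiny) target bit string
switches = [0] * len(target)          # all switches start off

# Flip the required switches on in a random order
positions = [i for i, b in enumerate(target) if b == "1"]
random.shuffle(positions)
for i in positions:
    switches[i] = 1

# The final configuration is the same no matter what order we flipped them in
assert "".join(map(str, switches)) == target
print("final configuration matches target regardless of flip order")
```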

Then again, I really don’t know.  This is still just the initial seed of a thought. 

Sunday, January 17, 2021

Counting Conscious States

Information is physical, which means there is a limit to the amount of information that can fit in a given volume.  If you try to cram more information into that volume, the mass-energy required to store it will eventually collapse into a black hole.  That limit is called the Bekenstein bound, and it is a truly massive limit.  For instance, the total information that could be contained in a volume the size of the human brain is around 10^42 bits, which means that the total number of possible brain states is around 2^(10^42).  The entire visible universe can hold roughly 10^120 bits, corresponding to about 2^(10^120) possible states.
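For a rough sense of where that 10^42 figure comes from, here is a minimal sketch of the Bekenstein bound expressed in bits, assuming (purely for illustration) a brain-sized sphere of radius ~0.1 m and mass ~1.4 kg:

```python
import math

# Bekenstein bound in bits: I <= 2*pi*R*E / (hbar * c * ln 2)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def bekenstein_bits(radius_m, mass_kg):
    """Upper bound on the bits storable in a sphere of given radius and mass-energy."""
    energy = mass_kg * c**2
    return 2 * math.pi * radius_m * energy / (hbar * c * math.log(2))

# Assumed brain-scale round numbers: radius ~0.1 m, mass ~1.4 kg
print(f"{bekenstein_bits(0.1, 1.4):.1e} bits")   # ≈ 3.6e42, i.e. on the order of 10^42
```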

Why does this matter?  Physicalism (as contrasted with dualism) says that conscious states are produced by physical states; if a first conscious state is distinct from a second conscious state, then they must be produced by different physical states.  All of my papers (and most or all of my blog posts) so far have assumed physicalism is true, in part because anyone who doubts physicalism is usually condescendingly dismissed, ignored, or scoffed at by the scientific community, and in part because I don’t see why the Creator of the already ridiculously complicated universe would have omitted a physical explanation/mechanism for consciousness.  In other words, unless there is a reason to believe that consciousness does not entirely depend on underlying physical state, I see no need, for now, to reject physicalism.  Nevertheless, physicalism would be falsified if one could show that the number of distinct conscious states exceeded the number of physical states, because that would require that a single physical state could produce more than one distinct conscious state.

One avenue for evaluating physicalism, then, is to literally count distinct conscious states.  For example, if one could show that the total number of possible distinct conscious states experienceable by a particular person exceeded 2^(10^42), then that would prove that consciousness cannot depend (entirely) on the brain; if one could show that the number of possible distinct conscious states exceeded 2^(10^120), then that would literally falsify physicalism. 

A few years ago, Doug Porpora wrote a fascinating paper that attempted to prove that the total number of distinct conscious states is actually infinite.  One of his arguments, for instance, is that if we assume that there are some natural numbers that we cannot think about, then there must be a maximum number (call it Max) that we can think about and a minimum number (call it Min) that we cannot think about.  But if we can think about Max, certainly we can think about Max+1 or Max^2, which means that Max is not the maximum number we can think about and the original assumption (that there are some natural numbers that we cannot think about) is false.  A related argument is that by identifying the minimum number that we cannot think about (and even naming it Min), we are thinking about Min, which means that Min is not in the set of numbers that we cannot think about!  Again, the original assumption is false.  There is more to the argument than this but it gives you the general flavor of its proof-by-contradiction strategy.  One commenter has attempted to refute Porpora’s argument in this paper, and Porpora may be working on a reply. 

This got me thinking again about the importance of counting distinct conscious states, which very few people seem to have attempted.  Of course, if Porpora’s logical argument is correct, then physicalism is false, because even though 2^(10^120) is a ridiculously and incomprehensibly large number, it is still trumped by ∞.  But we should also realize that both of the quantities we are considering are extremes.  Infinity is extreme, of course, but so is the Bekenstein bound.

Let’s take a more realistic approach.  There are something like 100 billion neurons in the human brain.  If each neuron acts like a digital bit, then the total number of distinct brain states is 2^(100 billion).  Of course, neurons are actually complex cells with very complicated connections to each other, and I don’t think any neuroscientist seriously regards them as acting in any way like digital bits.  However, I do think it is interesting to ask whether or not the number of distinct conscious states exceeds 2^(100 billion).  If there were a way to answer that question – by somehow counting conscious states – then it would do a couple of things:

·         Assuming physicalism is true, discovering that the number of distinct conscious states exceeds 2^(100 billion) would confirm that the brain is not a digital computer with neurons acting as digital bits.

·         It would provide a methodology for counting conscious states that may provide further insights about the physical nature of consciousness.

On that note, let me suggest such a method.  First, let me start with the notion of one stimulus “frame,” which is the particular collection of physical stimuli that one might detect through the five senses at any given moment.  Let’s assume that there are N consciously distinct (frames of) stimuli.  What I mean by that is that there are N different combinations of stimuli from the person’s senses that the person would be able to distinguish.  Consider these different sets of stimuli:

·         Watching a sunset while hearing crashing waves while tasting white wine while smelling the salty ocean while feeling sand under one’s feet;

·         Watching a sunset while hearing crashing waves while tasting red wine while smelling the salty ocean while feeling sand under one’s feet;

·         Watching a sunset while hearing seagulls while tasting white wine while smelling the salty ocean while feeling sand under one’s feet.

If we actually took the time to list them, we could certainly produce a very, very long list of consciously distinct stimuli.  Some of them might differ very subtly, such as two stimuli that are identical except for the temperature of the sand differing by one degree, or a slight difference in sound frequency distribution from the seagulls, or a slight but perceptible difference in the cloud distribution above the sunset.

What matters, in enumerating consciously distinct stimuli, is whether a person could distinguish them, not whether he actually does.  If he could distinguish two stimuli, either by consciously noticing the difference or simply having a (slightly) different conscious experience based on the difference, then that difference must be reflected in the underlying physical state.

So how many such distinct stimuli are there?  Lots.  One could certainly distinguish millions of different visual stimuli, many thousands of different sounds and tactile sensations, and at least hundreds of different tastes and smells.  This is a ridiculously conservative claim, of course; there are professional chefs, for example, who can probably differentiate millions of different tastes and smells.  Since a full stimulus “frame” combines all of the senses at once, the number of distinct frames is roughly the product of these per-sense counts, so on this very conservative basis there are probably far, far more than 10^18 (around 2^60) distinct stimuli for any given person.  If there were only 10^18 distinct conscious states or experiences, then in principle it would require only about 60 bits to specify any one of them. 
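As a sanity check on those numbers, here is a minimal sketch using assumed, conservative per-sense counts (the specific figures are illustrative, not measurements):

```python
import math

# Assumed, conservative counts of distinguishable stimuli per sense
visual, sound, touch, taste, smell = 10**6, 10**4, 10**4, 10**2, 10**2

# A full stimulus "frame" combines all senses, so distinct frames multiply
N = visual * sound * touch * taste * smell
print(f"N ≈ {N:.0e} distinct frames, ≈ {math.log2(N):.0f} bits to specify one")
# N ≈ 1e18 distinct frames, ≈ 60 bits to specify one
```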

However, history matters.  Conscious experience does not depend just on one’s stimuli in the moment, but also on prior stimuli (as well as prior conscious experience).  To specify a person’s conscious experience, it is not enough to specify his current stimuli, as his experience will also depend on past stimuli.  For example, imagine the different conscious experiences at time t1:

Case A – No significant change from t0 to t1:

t0: Watching a sunset while hearing crashing waves while tasting red wine while smelling the salty ocean while feeling sand under one’s feet.

t1: Watching a sunset while hearing crashing waves while tasting red wine while smelling the salty ocean while feeling sand under one’s feet.

Case B – Significant change from t0 to t1:

t0: Watching a sunset while hearing crashing waves while tasting white wine while smelling the salty ocean while feeling sand under one’s feet.

t1: Watching a sunset while hearing crashing waves while tasting red wine while smelling the salty ocean while feeling sand under one’s feet.

The stimulus at t1 is the same in both cases, but the conscious experience would clearly be different.  In Case A, the person may simply be enjoying the surroundings, while in Case B, he may be confused/surprised that his wine has suddenly changed flavor and color.

What that means is that even if the information necessary to specify the particular stimulus at time t1 is 60 bits, that information is not sufficient to specify the person’s conscious experience at that time.  In other words, history matters, and instead of just counting the number of possible distinct stimuli, we need to consider their order in time. 

So, for N consciously distinct stimuli, let’s assume that one’s conscious experience/state at a given time is sensitive to (i.e., depends on) the time-ordering of M of these stimuli.  The total number of possible states, then, is just the permutation N!/(N-M)!, but assuming that N>>M, this total number of states ≈ N^M.  
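A quick numerical check of that approximation (a sketch with assumed small values of N and M):

```python
import math

# Number of ordered, non-repeating sequences of M stimuli drawn from N: N!/(N-M)!
N, M = 10**6, 5
exact = math.perm(N, M)     # falling factorial N!/(N-M)!
approx = N**M

print(f"exact = {exact:.6e}, N^M = {approx:.6e}, ratio = {exact / approx:.6f}")
# For N >> M the ratio is ≈ 0.99999, so N!/(N-M)! ≈ N^M
```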

So in the above example, the number of possible physical states necessary to allow the person to consciously distinguish Case A from Case B is not N, but N^2.  If N requires, say, 60 bits of information, then at least 120 bits are required to specify his conscious state at time t1.  But of course the situation is far worse.  We can imagine a series of ten consecutive stimuli, ending at time t9, which the person would consciously experience in a manner that depended on all ten stimuli and their order.  It makes no difference whether the person actually remembers the particular stimuli or their order of progression.  As long as he has a conscious experience at t9 that is in some (even minuscule) manner dependent on the particular stimuli and their order, then that conscious state is one of at least N^10 states, requiring at least 600 bits to specify.

Now note that his experience at t9 is a unique one of at least N^10 states, just as his experience at later time t19 is a unique one of at least N^10 states, and so forth until time t99.  But if his conscious experience at time t99 is sensitive to the ordering of his conscious experiences at t9, t19, t29, etc., then the conscious state at t99 is one of at least N^100 states, requiring at least 6000 bits to specify.  Once again, this analysis has nothing to do with whether the person remembers any specifics about his prior stimuli or experiences; all that matters is that his conscious experience at t99 depends to some degree on the ordering of experiences at t9, t19, etc., and that his experience at t9 depends to some degree on the ordering of stimuli at t0, t1, etc.

It’s easy to show, then, that the total number of possible conscious states is N^T, where T is the total number of individual “frames” of stimulus that one experiences over his life.  How many is that?  Well, 100 years is about 3 billion seconds, and we certainly experience more than one “frame” of stimulus per second.  (Otherwise, TVs would not need a refresh rate of around 30 frames/second.)  So, for 10 frames/second, we might estimate the total number of possible conscious states at about N^(30 billion).  If N is 2^60, then the total number of conscious states is 2^(1.8 trillion), requiring at least 1.8 trillion bits to specify.
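Putting the pieces together (a sketch assuming 10 frames/second, a 100-year lifespan, and N ≈ 2^60, i.e. ~60 bits per frame):

```python
seconds_per_century = 100 * 365.25 * 24 * 3600     # ~3.2e9 seconds
T = 10 * seconds_per_century                       # total stimulus frames, ~3.2e10
bits_per_frame = 60                                # since N ≈ 2^60

total_bits = T * bits_per_frame                    # bits to index one of N^T states
print(f"T ≈ {T:.1e} frames, total ≈ {total_bits:.1e} bits")
# T ≈ 3.2e10 frames, total ≈ 1.9e12 bits — on the order of 1.8 trillion
```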

I find it fascinating how close this is to the number of neurons (100 billion) in the human brain.  For extremely rough back-of-the-envelope calculations like this, an order or two of magnitude is certainly “close.”  The storage capacity of the human brain has been estimated at somewhere in the tens to thousands of terabytes, and the above rough estimate (1.8 trillion bits is roughly 0.2 terabytes) is once again within a couple of orders of magnitude of that range. 

What this tells me is that this method of counting distinct conscious states is viable and potentially valuable.  By getting better estimates for the number of stimuli that a person can distinguish, for example, we might find that the rough estimate above (≈ a trillion bits) is far too high or far too low, which could then provide insights into our understanding of the brain as: a computer; a digital computer; a digital computer with neurons acting as bits; or the independent source of consciousness.  Of course, such an analysis will never get us anywhere near the Bekenstein bound or infinity, as addressed by Porpora’s paper, but I still think we can learn interesting and important things about the physical nature of consciousness by counting distinct conscious states.

Finally, I think the above analysis hints at something fundamental: that consciousness is history-dependent.  This is something I discuss at length in my paper on the Unique History Theorem, but the above arguments suggest a similar conclusion by a very different analysis.  If one’s conscious experience at time t99 depends to some degree on his experience at t98, which in turn depends on his experience at t97, and so on back, then it may not be possible to produce a person de novo in a particular conscious state C1 who has not already experienced the particular sequence of conscious states on which state C1 depends.

In any event, I think it makes sense to seriously consider and estimate the number of potentially distinct conscious states, taking into account a human’s sensitivity to different stimuli and the extent to which the ordering of stimuli affects conscious states.  I think this approach could yield fascinating knowledge and implications about the brain and the physical nature of consciousness.

Tuesday, June 2, 2020

Consciousness, Quantum Mechanics, and Pseudoscience

The study of consciousness is not currently “fashionable” in the physics community, and the notion that there might be any relationship between consciousness and quantum mechanics and/or relativity truly infuriates some physicists.  For instance, the hypothesis that consciousness causes collapse (“CCC”) of the quantum mechanical wave function is now considered fringy by many; a physicist who seriously considers it (or even mentions it without a deprecatory scowl) risks professional expulsion and even branding as a quack.

In 2011, two researchers took an unprovoked stab at the CCC hypothesis in this paper.  There is a fascinating experiment called the “delayed choice quantum eraser,” in which information appears to be erased from the universe after a quantum interference experiment has been performed.  The details don’t matter.  The point is that the researchers interpret the quantum eraser experiment as providing an empirical falsification of the CCC hypothesis.  They don’t hide their disdain for the suggestion that QM and consciousness may have a relationship.

The problem is: their paper is pseudoscientific shit.  They first make a massive logical mistake that, despite the authors’ contempt for philosophy, would have been avoided had they taken a philosophy class in logic.  They follow up that mistake with an even bigger blunder in their understanding of the foundations of quantum mechanics.  Essentially, they assert that the failure of a wave function to collapse always results in a visible interference pattern, which is just patently false.  They clearly fail to falsify the CCC hypothesis.  (For the record, I think the CCC hypothesis is likely false, but I am reasonably certain that it has not yet been falsified.)

Sure, there’s lots of pseudoscience out there, so why am I picking on this particular paper?  Because it was published in Annalen der Physik, the same journal in which Einstein published his groundbreaking papers on special relativity and the photoelectric effect (among others), and because it’s been cited by more than two dozen publications so far (often to attack the CCC hypothesis), only one of which actually refutes it.

What’s even more irritating is that the paper’s glaring errors could easily have been caught by a competent journal referee who read the paper skeptically.  If the paper’s conclusion had been in support of the CCC hypothesis, you can bet that it would have been meticulously and critically analyzed before publication, assuming it was considered for publication at all.  But when referees already agree with a paper’s conclusion, they may be less interested in the logical steps taken to arrive at that conclusion.  A paper that comes to the correct conclusion via incorrect reasoning is still incorrect.  A scientist who rejects correct reasoning because it results in an unfashionable or unpopular conclusion is not a scientist.

Here is a preprint of my rebuttal to their paper.  Since it is intended to be a scholarly article, I am much nicer there than I’ve been here.