Imagine two cerebral hemispheres: materialized in the cosmic vacuum by an unlikely fluctuation, simulated in advanced virtual reality, or physically disconnected from a body and kept alive by advanced technology.

The hemispheres are separated, but electrodes are attached to the severed nerve endings (or the electrical impulses are simulated, or the improbable fluctuations happen to recreate the given sequence of impulses), delivering nerve impulses exactly as they would occur in the brain if there were no spatial separation.

Assuming a conscious state of mind would exist in this brain if it were not split, does the conscious mind also exist across the two separated hemispheres? If not, why not?

And if it does, does the distance between the hemispheres, or the time at which each exists, have any significance?

If we can imagine such a scenario, and if this interpretation is to be taken seriously despite its apparent abstractness, then more extreme situations should also be considered.

We can imagine dividing the brain into 4, 8, or 16 parts, as well as disconnecting the limbic system, the cerebellum, and the individual lobes.

Imagine not whole Boltzmann brains but "Boltzmann neurons," which are much easier to create by random fluctuations (or create them in a simulated environment, or as random fluctuations within a simulated environment; we can also think of a planetary supercomputer simulating random neurons in arbitrary quantities). If, within this set of simulated or fluctuated neurons, some received simulated or randomly fluctuated electrical impulses identical to those that would occur in a conscious mind, would such a conscious mind exist merely by virtue of those neurons existing? If not, what is the difference between this scenario and the two separated hemispheres behaving as if they were exchanging impulses (even though the impulses are only simulated or fluctuated)?

While the formation of a Boltzmann brain at any specific place is extremely improbable (which does not change the fact that such brains seem certain to appear in a sufficiently large or long-lived universe), Boltzmann neurons can materialize far more easily. Perhaps we should consider the possibility that they give rise to consciousness, if that is one of the viable interpretations.
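To make the intuition concrete, here is a rough back-of-the-envelope sketch (my own illustrative assumption, not a result from any paper): if the probability of a thermal fluctuation falls off as $e^{-\Delta S}$ with its entropy cost $\Delta S$, and a whole brain requires on the order of $N \sim 10^{11}$ neuron-sized fluctuations, then

$$P(\text{Boltzmann brain}) \sim e^{-N \, \Delta S_{\text{neuron}}} \approx P(\text{Boltzmann neuron})^{N},$$

so a single Boltzmann neuron is exponentially more probable than a full Boltzmann brain.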

Thinking in this way, Boltzmann brains and related phenomena may constitute a greater proportion of our measure than previously assumed (whatever fraction of the measure we had assigned to them).


What would be the best argument against such an interpretation? Or why couldn't arguments in its favor work?

 

{Interpretations which hold that conscious experience exists as information processed in a specific way, i.e. as a form of computation, do not necessarily require the computation to occur in a particular place, integrated in a physically connected system. A paper describing the possibility of the spontaneous emergence of minds from information: https://quantum-journal.org/papers/q-2020-07-20-301/ }

11 comments

What would be the best argument against such an interpretation?

If you require consciousness to supervene on the actual physics of the brain, you can rule out Boltzmann neurons along with various other kinds of consciousness.

I understand. Yet it is still unclear to me what would happen to consciousness if we cut the brain in two and created the situation described above.

Why would it be clear? We don't understand consciousness.

I agree. What I think is that under the computational theory of mind, or similar approaches like integrated information theory (and possibly a broader spectrum of views), which assume consciousness emerges from information being processed in certain ways, the interpretation I've described becomes more solid. For now, it is widely assumed that some form of computationalism has a good chance of turning out to be true, which is why I think it is important to work out its possible implications.

Here you have neurons and the order of their connections. This order is a graph and can be described as one long number. Mind states appear as the brain moves from one state to another, and here that would be a transition from one number to another.

This is exactly what Muller wrote in the article you linked: you need just numbers, and a law based on Kolmogorov complexity that connects them, to create the illusion of a stream of consciousness. Neurons are not needed at all.
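As a toy illustration of the "graph as one long number" picture above (a minimal sketch under my own assumptions; the tiny network, the names, and the threshold update rule are invented for illustration and are not taken from Muller's paper):

```python
# Toy sketch: pack a neuron connection graph and its activation state each
# into a single integer, so a "mind state transition" is just a step from
# one number to another. Everything here is illustrative, not from any paper.

import itertools

# Connection graph of 4 toy "neurons" as an adjacency matrix
# (row = source, column = target, 1 = synapse present).
adjacency = [
    [0, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
]

def graph_to_number(adj):
    """Flatten the adjacency matrix into one integer (row-major bits)."""
    bits = itertools.chain.from_iterable(adj)
    return int("".join(str(b) for b in bits), 2)

def state_to_number(state):
    """Pack an activation pattern (list of 0/1) into one integer."""
    return int("".join(str(s) for s in state), 2)

def step(adj, state, threshold=1):
    """Toy update rule: a neuron fires if enough active neurons feed into it."""
    n = len(state)
    return [
        1 if sum(adj[j][i] * state[j] for j in range(n)) >= threshold else 0
        for i in range(n)
    ]

print("graph encoded as:", graph_to_number(adjacency))

state = [1, 0, 0, 0]           # start with only neuron 0 active
for _ in range(3):
    nxt = step(adjacency, state)
    print(f"{state_to_number(state)} -> {state_to_number(nxt)}")
    state = nxt
```

Running it prints the connection graph as one integer and three "mind state" transitions as steps between integers (8 -> 5 -> 6 -> 11 for this toy graph), which is the sense in which the dynamics reduce to a sequence of numbers.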

That is exactly what I think; neurons are just a useful example. They are a more pictorial and heuristic way of imagining it (and probably also seem more "real" than the abstract version).

(Mod note: I moved a duplicate of this post back to drafts. I chose which of the two duplicates to move based on which one had comments. The other difference between them was that the other one was a question-post. I can convert this copy into a question post if that seems better.)

That is fine. Thank you, and sorry for the trouble.

There are no value-free arguments against it, but it can probably be argued that you can't do anything to help Boltzmann brains anyway.

That may be the case. I personally think there could be a way, but that is rather beyond the scope of my post.

It looks like all possible minds are always being generated within the complexity of all places.