How in the world are we ever supposed to know if machines can be conscious? 
 

To my mind, the most bullish answer to this question goes something along these lines:
 

“Our brains do lots of things with information, only some of which are consciously experienced. Long-division is conscious, heartbeat regulation is not.

If we can identify the brain operations which systematically distinguish all conscious processes (like long division) from all unconscious processes (like heartbeat regulation), we can ask whether AIs appear to be doing the sorts of things to information which look more like what we do when we do long division, or more like what we do when we regulate our heartbeats.

Therefore, the question of AI-consciousness is simply a question of first, identifying this consciousness-computation in humans; and, second, investigating whether an analogous computation exists in AI systems.”


The underlying assumption here is that a calculator doing long division is - in some crucial, discoverable way - more similar to our brains doing heartbeat regulation than it is to our brains doing long division. 

This search for the precise consciousness-computation has been aptly termed “the Big Question” by Rob Long, and the most optimistic hopes for the informed assessment of consciousness in AI hinge on answering it.[1] 

But what if we could rule out, in advance, the possibility that any such computation can be run on non-biological hardware? 

This is the core argument posed in Aru et al.’s recent paper on the “Feasibility of AI consciousness through the lens of neuroscience”. Biology is notably the most plausible candidate for a permanent barrier to the attainment of AI-consciousness, and the paper also gives scholarly voice to an intuition common among laypeople thinking about AI consciousness for the first time, that sentiment which demands: “how can it be conscious if it isn’t even alive?”

So far, however, the idea that consciousness is in some way tied to biology remains nothing more than an intuition. Sure, maybe there is something going on in living systems that can’t be functionally replicated in silico and that matters to consciousness. But do we have any reason to expect that to be true?

 

Neuroscience weighs in on LLM-consciousness

Aru and colleagues’ recent paper is notable for being, to my knowledge, the first formal paper-length intervention in the debate on AI consciousness by a leading neuroscientist of consciousness since the ChatGPT-earthquake of last year. It’s worth taking seriously and trying to understand exactly what they’re on about (which turns out to be pretty difficult). 

The authors give three arguments for the view that “present-day AI systems do not have consciousness”, which I will quote in full here as they first present them:
 

  1. “First, in mammalian brains, consciousness is supported by a highly interconnected thalamocortical system.”
  2. “Second, consciousness is tied to the sensory streams that are meaningful for the organism.”
  3. “And third, consciousness might be bound to complex ‘skin in the game’ processes characteristic of living systems.”
     

The first two arguments are of the type that have been ventured or considered before, and represent important and meaningful ways of assessing consciousness in LLMs. As Aru et al. acknowledge, however, they are at most provisional limitations, given that “future AI systems… might be equipped with different types of inputs, they might have a global workspace and perhaps something equivalent to computations happening on the dendrites”. On some accounts, notably, they already no longer hold.

I won’t have much to say about these arguments here - they are interesting and useful insofar as they bear on existing LLMs - though I will say for now that the paper itself does not specify its preferred theory of consciousness (Dendritic Integration Theory) in the formal terms one would need to rule out the possibility that LLMs recreate the architecture of the thalamocortical system in the ways that are computationally relevant for the generation of consciousness.

 

Consciousness is about quoting Nassim Taleb 

The novelty and significance of this paper lies entirely in the third argument - that consciousness depends on “skin in the game” (and they do actually cite Taleb), a claim the authors say “will also apply to future LLMs and AI systems with a different architecture”. 

What, then, is this new “skin in the game” argument, and how seriously should we take it? 

The first formulation of it is astonishingly simple: “perhaps consciousness is related to specific processes within living organisms”. 

“Another way of stating this,” the authors go on to say, “is that living systems differ from software and machines.” This difference, moreover, goes deeper than the eminently solvable problem of “embodiment”, but is “rather about having ‘skin in the game’.” They explain: 

“Having ‘skin in the game’ means, in simple terms, that the organism has something to lose. An LLM could, in principle, claim in a conversation that it does not want to be shut down, but an LLM does not have skin in the game as there is no real consequence to the software when it is actually shut down. In contrast, in biology the system has something to lose on several levels. It cannot stop living, as otherwise it will die.”

As put here, the argument begs the question to an almost baffling degree. Surely no one is claiming that the reason LLMs don’t have consciousness is that “there is no real consequence to the software when it is actually shut down”. Thankfully, the authors don't stop here, and go on to discuss why "skin in the game" matters and is unique to biology.

 

Biology does things that computers don’t

In another formulation - whose relation to the former is as yet unclear - Aru et al. return to the idea that “perhaps consciousness is related to specific processes within living organisms”. 

The idea that there might be “specific processes” that underpin consciousness is of course nothing new, and is the core of their first argument that the lack of any architecture analogous to a thalamocortical network in LLMs rules out any conscious experience. What’s different here is the suggestion that these “specific processes” might be, for some as-yet unarticulated reason, the exclusive privilege of “living organisms”. Why might this be the case? 

First, it is worth noting that the paper does not come out and say which of the specific processes of living organisms are the ones that matter - instead, the authors are simply saying that there is a vast amount of stuff going on inside a living being which is not even approximately modelled in any artificial system we know or (crucially) can even conceive of. This argument takes the following steps:

  1. Point to a number of biological processes or attributes (e.g. the Krebs cycle of a cell, or the plasticity of biological neurons in isolated in vitro conditions); 
  2. Indicate the major importance of these processes to biological systems (literally existential); 
  3. Argue that we have no strong reason for regarding these processes as irrelevant to consciousness;
  4. Point out that current AI systems definitely do not recreate these processes;
  5. Further argue that these are processes that can only ever be implemented in biological systems. 

Their argument appears to me to be that the consciousness-generation that goes on in living systems is an exceptionally computationally deep problem. That is, consciousness is one of the relevant outputs of a system which can only be generated from a model which recreates the original system in enormous detail. 

Whereas the language-generation of humans can indeed be replicated through shortcuts which look nothing like the computations generating language in the human body, there are no such shortcuts for the consciousness-generation of humans, and thus any artificial system that is to be conscious must recreate, to an unspecified but undoubtedly very high level of detail, the computations generating consciousness in human bodies.

The reason consciousness can’t be recreated in silico, therefore, is the same reason cellular respiration can’t: “capturing cellular respiration in a computer simulation will not keep the cell alive: processes like cellular respiration need to happen with real physical molecules”. 

This point is best illustrated with an example. Let’s say that Home Depot releases a new videogame, Plant Simulator, for aspiring plant lovers to practice their skills on before buying the real thing. Plant Simulator is state-of-the-art, and will recreate the physics of your own bedroom and the plant you are hoping to buy in exceptional detail. In the game, you can choose where in your “bedroom” to position the plant, how often to water it, when to rotate it, and so on. The physics-biology model is so detailed, in fact, that Plant Simulator will successfully predict all of the things you care about which would happen with the real plant given the same treatment - how it will grow, whether it’ll die, and so forth. 

If all you care about is whether the plant grows or dies, then growing a plant in your bedroom is a computationally reducible problem. You can build a model (Plant Simulator) which will faithfully recreate all of the relevant facts of growing a plant in your bedroom.
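To make the reducibility point concrete, here is a minimal, purely illustrative Python sketch (the function names, thresholds, and numbers are all invented for this post, not taken from the paper or from any real product). A shortcut model maps care inputs straight to the outcome you care about, while a more detailed model steps through an internal state day by day. Note that even the detailed version merely represents photosynthesis with a number; it does not perform it - which is the point developed below.

```python
# Purely illustrative: every rule and value here is made up.

def plant_simulator(waterings_per_week: int, hours_of_light: float) -> str:
    """Shortcut, output-level model: maps care inputs straight to grows/dies."""
    if waterings_per_week < 1 or hours_of_light < 2:
        return "dies"
    if waterings_per_week > 10:
        return "dies"  # overwatered
    return "grows"


def mechanism_level_simulation(waterings_per_week: int, hours_of_light: float) -> str:
    """More detailed model: tracks internal state, but still only represents the biology."""
    water, sugar = 1.0, 0.1
    for _day in range(30):
        water += waterings_per_week / 7 - 0.1              # uptake minus evaporation
        sugar += min(water, 1.0) * hours_of_light * 0.01   # stand-in for photosynthesis
        sugar -= 0.05                                       # stand-in for respiration
        if water <= 0 or sugar <= 0:
            return "dies"
    return "grows"


if __name__ == "__main__":
    print(plant_simulator(3, 6))              # grows
    print(mechanism_level_simulation(3, 6))   # grows
```

If “grows or dies” is all you ask of it, the two models are interchangeable - which is just the reducibility claim restated in code.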

Seen from another point of view, however, the model is incomplete. In Plant Simulator, I hear you say, no actual photosynthesis takes place. This is a fact about what happens in the real world which does not take place in Plant Simulator. Photosynthesis doesn’t happen in Plant Simulator in the same way cellular respiration doesn’t happen in LLMs - and the authors of the paper argue we have reason to believe that specifically biological processes such as these are relevant to the generation of consciousness. 

The contention is that the exact details of what is going on in things like cellular Krebs cycles, and not some more high-level, generalisable output, are crucial to the generation of consciousness. The information spiral from the inside of cells all the way up to the generation of consciousness is not one that can be described in shortcuts - it must be run in a way that recreates, in an enormous amount of detail, the information operations at each stage of the biological process. 

It's not that this argument makes no sense - it is certainly possible that consciousness depends on lower level biological machinery. 

That a silicon machine will never recreate the entirety of all outputs generated by a living organism is easy to agree on - but why should we believe that the low-level details of living organisms, such as cellular Krebs cycles, are relevant to the output that is consciousness? 

The “skin in the game” argument attempts to answer this question a priori: the authors argue that the reason consciousness only emerges from biological processes is that biological processes are the only ones that emerge in response to the very specific problem of survival as a living organism, and it is the presence of this specific problem across all levels of biological existence which leads to the generation of consciousness.
 

Machines can't get a life


Living organisms, Aru et al. contend, have “skin in the game” all the way down to their most basic constituent components. The same cannot be said for artificially constructed systems. 

The contention seems to be that it is the unity of the necessity for survival across “several levels” that has shaped biological computation in such a way that consciousness is an output of even the details of intracellular processes. The reason for the “complex causality between physical events at the microscale and consciousness” is that the “system has skin in the game across levels of processing and these levels are not independent from each other”. In consequence, we should “at least entertain the possibility that consciousness is linked to this complex ‘skin in the game’ process underlying life.”

I’m far from certain how to read this argument, but I think the Plant Simulator analogy is helpful again here.

No matter how plant behaviour is recreated on a computer, photosynthesis will never be a technique the computer resorts to in order to recreate the relevant aspects of plant behaviour.[2] This is because the requirements of the machine’s existence are different to those of a living organism.

Similarly, it simply does not matter how intelligent behaviour is recreated in a machine - because machines do not need to survive in the same way that living organisms do, consciousness is simply not a solution that is on the table, no matter what problems the machine is tasked with solving. 

As it stands, however, the argument remains unconvincing. Though the implausibility of photosynthesis as a convergent solution for non-biological machines makes sense, it’s unclear to me why this is the case for consciousness, and the authors do nothing to specify which aspects of the unique way that living organisms “survive” make consciousness a particularly useful tool. 

At base, therefore, the argument is that consciousness is only useful to things that live. Artificial systems cut from different cloth don't tend to evolve towards the kinds of computations which "care" about existence in the way that living systems do. 

This ends up looking a lot like the argument Anil Seth gestures towards at the end of Being You, where he voices his intuition that "the materiality of life will turn out to be important for all manifestations of consciousness". His reasoning is very similar: "the imperative for regulation and self-maintenance in living systems isn’t restricted to just one level... [It] goes all the way down, even down to the level of individual cells". 

Seth is clear, however, that he is working with "only an intuition", and Aru's attempt to evidence this point with an argument from "skin in the game" leaves us firmly in the domain of intuitions. 

Formulated this way, however, the argument does become a little more promising as a (still immensely under-specified) research programme: are there optimisation problems faced only by living things for which consciousness seems specifically appealing as a tool?

If we can find any evidence to suggest this possibility, it begins to seem more plausible to hold that consciousness is not a convergent solution for problem-solving in non-biological substrates. Until then, however, ruling out consciousness in artificial systems appears as doubtful as ever. 


 

  1. ^

     One plausible take, for example, would be that what makes human long-division conscious is that the relevant information (and this includes everything from the visual input of numbers to the subtle somatics of curiosity and mental exertion) is uploaded to a domain-general multimodal global “workspace” of information which constitutes our conscious experience at any given time. If GPT doesn’t have this workspace, we can assert that it’s doing long division in the non-conscious “calculator way” (even if, interestingly, the operation of information in GPT is still radically different to that of a pocket calculator - the point is that like the calculator it lacks the specific phenomenology-computation). This method can also be extended to non-mammalian animals - one recent study found that stimulus-perception in the Nidopallium caudolaterale of carrion crows is correlated with patterns of neural activity similar to those identified with a “global workspace” in primate neocortices. 

  2. ^

     Interestingly, this argument leaves open the possibility that an AGI Plant Simulator would itself design tools that photosynthesise for it. The point is that the original design of the Plant Simulator would never give rise to photosynthesis. 


     

Comments
Viliam

The authors give three arguments for the view that “present-day AI systems do not have consciousness”, which I will quote in full here as they first present them: 

  1. “First, in mammalian brains, consciousness is supported by a highly interconnected thalamocortical system.”
  2. “Second, consciousness is tied to the sensory streams that are meaningful for the organism.”
  3. “And third, consciousness might be bound to complex ‘skin in the game’ processes characteristic of living systems.”

This is irrelevant to the question whether biology is necessary for consciousness. In principle, machines can also be highly interconnected and have sensory streams. Some kind of "skin in the game" can be achieved by uploading algorithms to robotic bodies (which can potentially be destroyed, or maybe need to find energy in the environment), or even easier by simulated environments.

Generally, it seems to me that the authors are playing the following game: "Find some property X that humans have, and machines do not have (yet). Claim that X is inevitable for consciousness (without explaining why)." In other words, as long as we cannot clearly describe what exactly consciousness is, we can make up arbitrary statements about what is a necessary ingredient, and use them to win debates. I am not impressed.

TAG

Claim that X is inevitable for consciousness (without explaining why).”

Note that this still works as a "might be" claim. Abstract computationalism isn't a necessary truth, so dependence on physics, chemistry or biology is a possible truth.

In other words, as long as we cannot clearly describe what exactly consciousness is, we can make up arbitrary statements about what is a necessary ingredient, and use them to win debates. I am not impressed.

I don't see why being able to define "consciousness" would tell you how it works.

Reed

Yes, that's exactly the game the authors are playing - I too was pretty unimpressed tbh.

To be fair to them, though, "X = thalamocortical networks" or "X = sensory streams that are meaningful to the organism" aren't claims with literally 0 evidence (even though the evidence to this date is contentious). They are claims based on contemporary neuroscience - e.g., studies which show that conscious (as opposed to non-conscious) processing appears to involve thalamocortical networks in some special way. Also worth noting that the authors fully acknowledge that, yes, machines can be given these "sensory streams" or relevant forms of "interconnection". 

I do also think that one could argue that we don't need an exact description of consciousness to get an idea of the sorts of information processing that might generate it. The most widely accepted paradigm in neuroscience is basically just to ask someone whether they consciously experienced something, and look at the neural correlates of that experience. If you accept that this approach makes sense (and there are ofc good reasons not to), then you do end up with a non-arbitrary reason for saying something is a necessary ingredient of consciousness. 

Wrt the possibility of creating "skin in the game" by uploading algorithms to robotic bodies - I agree that this is possible in the normal sense in which you or I might conceive of "skin in the game". But the authors of the paper are arguing that this is literally impossible, because they use "skin in the game" to describe a system whose existence is underpinned by biological processes at every single level - from intracellular upwards. They don't, however, provide much of an argument for why this makes consciousness a product only of systems with "skin in the game". I was kinda just trying to get to the bottom of why the paper thought this conception of "skin in the game" uniquely leads to consciousness, since variants of "X = biology" are pretty commonly offered as reasons for AI consciousness being impossible. 

Dagon

How in the world are we ever supposed to know if machines can be conscious?

Start by how you know exactly when a human becomes conscious (presuming an embryo isn't, and the adult it becomes is).  And how you know whether a whale is conscious, or a slime mold.

If you can't do this, you probably don't have a solid definition of "conscious", and your question is kind of meaningless.