We would like to discern the intentions of a hyperintelligent, possibly malicious agent which has every incentive to conceal its evil intentions from us. But what even is intention? What does it mean for an agent to work towards a goal?

Consider the lowly ant and the immobile rock. Intuitively, we feel one has (some) agency and the other doesn't, while a human has more agency than either of them. Yet a sceptic might object that ants seek out sugar and rocks fall down, but that there is no intrinsic difference between the goal of eating yummy sweets and the goal of minimising gravitational potential energy.

Intention is a property that an agent has with respect to a goal. Intention is not a binary value, a number or even a topological vector space. Rather, it is a certain constellation of counterfactuals.

***

Let $W$ be a world, which we imagine as a causal model in the sense of Pearl: a directed acyclic graph with nodes $N_1, \dots, N_k$ and attached random variables $X_1, \dots, X_k$. Let $A$ be an agent. We imagine $A$ to be a little robot - so not a hyperintelligent malignant AI - and we'd like to test whether it has a goal $G$, say $G = \{X = 1\}$ for some variable $X$ of $W$. To do so we are going to run an Intentional Agency Experiment: we ask $A$ to choose an action from its possible actions $a_1, \dots, a_n$.
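To make the setup concrete, here is a minimal sketch in Python. The `World` class, the action names and their goal probabilities are all illustrative assumptions of this sketch, not part of the post: the world just needs to tell us, for each action $a$, how likely the goal $G$ is under the intervention $do(a)$.

```python
# A toy stand-in for the causal world W: for this sketch we only need the
# interventional distribution P(G | do(a)) for each of A's possible actions.
# The class name, actions and probabilities below are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class World:
    # Maps each possible action a_i to P(G | do(a_i)) in this toy model.
    effect: dict[str, float]

    def p_goal_given_do(self, action: str) -> float:
        return self.effect[action]


# Three possible actions; "walk_to_sugar" is best if the goal G is "reach the sugar".
world = World(effect={"walk_to_sugar": 0.9, "wander": 0.3, "stay_put": 0.05})
actions = list(world.effect)
```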

Out of the possible actions $a_1, \dots, a_n$ one, say $a_{i_1}$, is the 'best' action for $A$ if it has goal $G$, in the sense that $P(G \mid do(a_{i_1})) \geq P(G \mid do(a_j))$ for all $j$.

If $A$ doesn't choose $a_{i_1}$, great! We're done; $A$ doesn't have goal $G$. If $A$ does choose $a_{i_1}$, we provide it with a new piece of (counterfactual) information $I_1$ - say, that $a_{i_1}$ will not in fact achieve $G$ - and offer it the option of changing its action. From the remaining actions there is one next best action $a_{i_2}$. Given the information $I_1$, if $A$ does not choose $a_{i_2}$ we stop; if it does, we provide it with the information $I_2$ - that $a_{i_2}$ will not achieve $G$ either - and continue as before.
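The rounds above translate into a short loop. Below is a sketch building on the toy `World` from before; `agent_choice` stands in for however $A$ actually picks its actions, and returning the number of rounds passed is one simple way to read off a degree of agency. Both of these are assumptions of the sketch, not prescriptions of the experiment.

```python
import random


def intentional_agency_experiment(world, actions, agent_choice):
    """Run the Intentional Agency Experiment for goal G.

    agent_choice(available, blocked) returns A's chosen action, given which
    previously-best actions A has been told will not achieve G ('blocked').
    Returns the number of rounds A passed, read here as a degree of agency.
    """
    remaining = list(actions)
    blocked = []          # counterfactual information revealed so far
    rounds_passed = 0
    while remaining:
        # The best remaining action for an agent that really has goal G.
        best = max(remaining, key=world.p_goal_given_do)
        if agent_choice(remaining, blocked) != best:
            break         # A fails this round; stop the experiment.
        rounds_passed += 1
        blocked.append(best)   # reveal: 'best' will not in fact achieve G
        remaining.remove(best)
    return rounds_passed


# A goal-directed agent passes every round; a random walker usually does not.
goal_directed = lambda available, blocked: max(available, key=world.p_goal_given_do)
random_walker = lambda available, blocked: random.choice(available)
print(intentional_agency_experiment(world, actions, goal_directed))  # 3
print(intentional_agency_experiment(world, actions, random_walker))  # typically < 3
```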

At each round we assign more and more agency to $A$. Rather than a binary 'Yes, $A$ has agency' or 'No, $A$ has no agency' we imagine a continuum going from a rock, which has no possible actions, to an ant, which might pass some of the tests but not all, to humans and beyond.

***

Q: What if $A$ isn't acting rationally? What if it doesn't know all the details of $W$? What if it knows more? What if $A$ has bounded computation? What if...

A: The above is merely a simple model that tries to capture intent; one can complicate it as needed. Most of these objections come down to the possible inability of $A$ to choose the best action (given infinite compute and full knowledge of $W$). To remedy this we might allow the Intentional Agency Experiment to continue if $A$ chooses an action that is close to optimal but not optimal. We may introduce a Time to Think parameter when we consider computationally bounded agents, etc. Once again, the point is not to assign a binary value of goal intention to an agent; rather, it is to assign it a degree of agency.
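One possible way to relax exact optimality, as suggested above, is to accept any choice whose probability of achieving $G$ is within some tolerance of the best remaining action. The `epsilon` parameter in the sketch below is an assumption of this sketch, not part of the original experiment.

```python
# A possible relaxation of the pass criterion: accept near-optimal choices.
# 'epsilon' is an assumed tolerance parameter, not part of the original setup.
def passes_round(world, remaining, choice, epsilon=0.05):
    best_p = max(world.p_goal_given_do(a) for a in remaining)
    return world.p_goal_given_do(choice) >= best_p - epsilon
```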

Q: What if $A$ knows that we are testing it and tries to deceive us?

A: Yes, this breaks the model.

Q: Counterfactuals are weird and might not exist. Even if they did, the Intentional Agency Experiment is impossible to execute in practice.

A: Despite the Intentional Agency Experiment being an idealisation, we may approximate it in the real world. For instance, if we'd like to test an ant's intention to seek out a sugar source (as opposed to performing a random walk), we might first check whether it moves towards the sugar source; if it does, we block off this route and see whether it tries to circumvent the obstacle. In fact, it could be argued that this is the way we test agency in real life.
