Nice post! Surprisingly, I'm interested in the topic. ^^
Funny too that you focus on an idea I am writing a post about (albeit from a different angle). I think I broadly agree with your conjectures, for sufficient competence and generalization at least.
Most discussion about goal-directed behavior has focused on a behavioral understanding, which can roughly be described as using the intentional stance to predict behavior.
I'm not sure I agree with that. Our lit review shows that there are both behavioral and mechanistic approaches (Richard's goal-directed agency is an example of the latter).
A machine “is an NFA (mechanistically)” if the internal mechanism has non-deterministic transitions.
The analogy is great, but if I nitpick a little, I'm not sure a non-deterministic mechanism makes sense. You have either determinism or probabilities, but I don't see how to implement non-determinism. That's, by the way, one reason why non-deterministic Turing machines aren't really used anymore when talking about complexity classes like NP.
Adam Shimi’s Literature Review on Goal-Directedness identifies five properties behaviorally goal-directed systems have
Two corrections here: the post was written with Michele Campolo and Joe Collman, so they should also be given credit; and we identify five properties that the literature on the subject focuses on and agrees about. We don't necessarily say that they are all necessary or equally important.
We restructure these properties hierarchically:
I would like more explanation here, because I'm not sure that I follow. Specifically, I can't make sense of "what is the distribution over goals?". Are you talking about the prior over goals in some sort of Bayesian goal-inference?
Roughly speaking, an agent is mechanistically goal-directed if we can separate it into a goal that is being pursued and an optimization process doing that pursuit.
I like this. My current position (which will be written down in my next post on the subject) is that these mechanistically goal-directed systems are actually behaviorally goal-directed systems at a certain level of competence. They also represent a point where "simple models" become more predictive than the intentional stance, because the optimization itself can be explained by a simple model.
Efficient: The more mechanistically goal-directed a system is, the more efficiently it pursues its goal.
Shouldn't that be the other way around?
We omit “far-sighted” because this is not a property intrinsically related to goal-directedness. We view far-sighted goal-directed agents as more dangerous than near-sighted ones, but not less goal-directed. While there might be a large difference between far-sighted and near-sighted agents, the mechanistic difference is as small as a single discount parameter.
It's funny, because I actually see far-sightedness as a property of the internal structure more than of the behavior. So I would assume that a mechanistically goal-directed system shows some far-sightedness.
However, many possible internal mechanisms can result in the same behavior, so this connection is lossy. For example, a maze-solver can either be employing a good set of heuristics or implementing depth-first search.
But those two maze-solvers won't actually have the same behavior. I think the lossy connection doesn't come from the fact that multiple internal mechanisms can result in the same behavior "over all situations" (because in that case the internal differences are irrelevant), but from the fact that they can result in the same behavior on the training/testing environments considered.
Algorithms can be behaviorally linear time-complexity if they tend to take time that scales linearly with the input length and mechanistically linear time-complexity if they’re provably in O(n).
I disagree with that example. What you call behavioral time complexity is more like average-case time complexity (or maybe smoothed analysis). And in complexity theory, the only notion that exists is the behavioral one.
This post is part of the research I have done at MIRI with mentorship and guidance from Evan Hubinger.
Introduction
Most discussion about goal-directed behavior has focused on a behavioral understanding, which can roughly be described as using the intentional stance to predict behavior. We briefly summarize behavioral goal-directedness, then present a parallel understanding focused on how the goal is represented and used by an agent, which we call mechanistic goal-directedness. We analyze connections between the two, then conclude with a number of open questions.
As an analogy, any nondeterministic finite automaton (NFA) can be translated into a deterministic finite automaton (DFA). What does it mean to say that a machine "is an NFA"? There is an equivalent DFA, so "NFA" cannot be a feature of the machine's input-output mapping, i.e., "being an NFA" is not strictly a behavioral property.
Converting an NFA into a DFA can require an exponential increase in the state space. Therefore, we call a machine behaviorally an NFA if its input-output mapping is more likely to be the input-output mapping of an NFA. A machine "is an NFA (behaviorally)" to the extent that such a description is simpler than describing the machine as a DFA. In contrast, we call a machine mechanistically an NFA if the internal mechanism resembles that of an NFA, and mechanistically a DFA if the internal mechanism resembles a DFA. A machine "is an NFA (mechanistically)" if the internal mechanism has non-deterministic transitions. These understandings can split apart: a DFA emulating an NFA is behaviorally an NFA but mechanistically a DFA.
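To make the distinction concrete, here is a minimal sketch (our illustration, not part of the argument above): run_nfa is mechanistically an NFA, since its transition table maps a state and symbol to a set of possible next states, while determinize performs the standard subset construction to build an equivalent DFA. The resulting DFA reproduces the NFA's input-output behavior, so it is behaviorally an NFA, even though its internal mechanism is deterministic. (The example language, "the second-to-last symbol is 1", is the textbook case where lengthening the lookback forces an exponential blow-up in DFA states.)

```python
# NFA for "second-to-last symbol is 1" over {0, 1}, plus the subset construction
# that yields an equivalent DFA.
nfa_delta = {
    ("q0", "0"): {"q0"},
    ("q0", "1"): {"q0", "q1"},
    ("q1", "0"): {"q2"},
    ("q1", "1"): {"q2"},
}
nfa_start, nfa_accept = "q0", {"q2"}

def run_nfa(word):
    """Mechanistically an NFA: tracks the set of states reachable so far."""
    states = {nfa_start}
    for ch in word:
        states = set().union(*(nfa_delta.get((s, ch), set()) for s in states))
    return bool(states & nfa_accept)

def determinize():
    """Subset construction: each DFA state is a frozenset of NFA states."""
    start = frozenset({nfa_start})
    dfa_delta, seen, todo = {}, {start}, [start]
    while todo:
        current = todo.pop()
        for ch in "01":
            nxt = frozenset().union(*(nfa_delta.get((s, ch), set()) for s in current))
            dfa_delta[(current, ch)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return start, dfa_delta, {s for s in seen if s & nfa_accept}

def run_dfa(word):
    """Behaviorally an NFA (same input-output mapping), mechanistically a DFA."""
    state, delta, accept = determinize()
    for ch in word:
        state = delta[(state, ch)]
    return state in accept

assert all(run_nfa(w) == run_dfa(w) for w in ["", "1", "10", "01", "0110", "1011"])
```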
We roughly break down our current understanding in the following chart. Note that this chart isn’t a complete factorization of questions into subquestions, but rather a hierarchical grouping of questions into similar themes:
Behavioral Goal-Directedness
Behavioral goal-directedness suggests that modeling agents as goal-directed ought to predict behavior. Explanations of agent behavior involving goals should allow us to predict the agent's actions better than other explanations.
Adam Shimi’s Literature Review on Goal-Directedness identifies five properties behaviorally goal-directed systems have. Summarized by Rohin Shah:
We restructure these properties hierarchically:
We are interested in explaining behavior. We desire two things: A low-entropy distribution over possible goals and some minimum level of competence such that those goals will be achieved. One specific interesting feature of our goal-distribution is how those goals generalize. In particular, we are interested in whether the goals generalize to a long time horizon or to be large in scale. From the competence angle, we’re interested in how directly the goal is achieved.
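As a rough illustration of what a "low-entropy distribution over possible goals" could look like (this is only a sketch under a Bayesian goal-inference framing; the candidate goals and likelihoods below are made up for illustration), we can compute a posterior over goals from how well each goal explains the observed behavior, and then measure its entropy:

```python
import math

# Hypothetical candidate goals and likelihoods P(observed behavior | goal);
# these numbers are purely illustrative.
candidate_goals = ["reach_exit", "collect_coins", "wander"]
likelihood = {"reach_exit": 0.70, "collect_coins": 0.25, "wander": 0.05}
prior = {g: 1.0 / len(candidate_goals) for g in candidate_goals}

# Posterior over goals given the observed behavior.
unnormalized = {g: prior[g] * likelihood[g] for g in candidate_goals}
z = sum(unnormalized.values())
posterior = {g: p / z for g, p in unnormalized.items()}

# Lower entropy means the behavior pins down a goal more sharply, i.e., the
# system looks more behaviorally goal-directed on this reading.
entropy = -sum(p * math.log2(p) for p in posterior.values() if p > 0)
print(posterior, round(entropy, 3))
```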
Mechanistic Goal-Directedness
Mechanistic goal-directedness suggests that modeling agents as goal-directed ought to predict the agent’s internal mechanisms. Explanations of agent mechanisms involving goals should predict the agent’s internal structure better than other explanations.
Roughly speaking, an agent is mechanistically goal-directed if we can separate it into a goal that is being pursued and an optimization process doing that pursuit. We adapt Shimi’s behavioral goal-directedness properties to mechanistic goal-directedness:
We omit “far-sighted” because this is not a property intrinsically related to goal-directedness. We view far-sighted goal-directed agents as more dangerous than near-sighted ones, but not less goal-directed. While there might be a large difference between far-sighted and near-sighted agents, the mechanistic difference is as small as a single discount parameter.
We structure these properties hierarchically:
We are interested in explaining internal structure. We desire two things: An explicit representation of the goal and an explicit process that optimizes the goal. One interesting feature of the goal is what types of objects it is defined over. In particular, we're interested in what level of conceptual abstraction those types sit at. We're also interested in how complicated the goal is. On the optimization side, we're interested in how much optimization power is exerted. We're also interested in how much this optimization power varies when the engine of optimization is placed in different environments.
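As a toy illustration of this factoring (ours, not a claim about any particular system): the goal is an explicit scoring function, and the optimization process is a generic search that takes the goal as an input rather than having it baked into its heuristics.

```python
import random

def goal(state):
    """Explicit goal representation: score states by closeness to a target value."""
    return -abs(state - 42)

def optimize(goal_fn, start, steps=200, step_size=3, seed=0):
    """Explicit optimization process: hill climbing against whatever goal it is handed."""
    rng = random.Random(seed)
    state = start
    for _ in range(steps):
        candidate = state + rng.choice([-step_size, step_size])
        if goal_fn(candidate) > goal_fn(state):
            state = candidate
    return state

# The same optimizer would pursue a different goal_fn just as readily,
# which is what makes the goal/optimizer separation meaningful.
print(optimize(goal, start=0))
```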
Correspondence Conjectures
Mechanistic and behavioral understandings are connected: the behavior of the agent is the result of its internal mechanisms. However, many possible internal mechanisms can result in the same behavior, so this connection is lossy. For example, a maze-solver can either be employing a good set of heuristics or implementing depth-first search.
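To illustrate (a sketch of ours, with a deliberately simple maze): on a single-corridor maze every complete solver must return the same path, so a depth-first searcher and a heuristic-guided searcher are behaviorally indistinguishable there even though their mechanisms differ; on mazes with branching they can come apart.

```python
from heapq import heappush, heappop

def neighbors(cell, walls, size):
    x, y = cell
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in walls:
            yield nxt

def dfs_solve(start, goal, walls, size):
    """Mechanism 1: plain depth-first search."""
    stack, seen = [(start, [start])], {start}
    while stack:
        cell, path = stack.pop()
        if cell == goal:
            return path
        for nxt in neighbors(cell, walls, size):
            if nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, path + [nxt]))

def greedy_solve(start, goal, walls, size):
    """Mechanism 2: greedy best-first search on a Manhattan-distance heuristic."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier, seen = [(h(start), start, [start])], {start}
    while frontier:
        _, cell, path = heappop(frontier)
        if cell == goal:
            return path
        for nxt in neighbors(cell, walls, size):
            if nxt not in seen:
                seen.add(nxt)
                heappush(frontier, (h(nxt), nxt, path + [nxt]))

# A 3x3 maze whose only open route from (0, 0) to (2, 2) is a single corridor,
# so both solvers return the identical path despite different internals.
walls = {(1, 0), (1, 1)}
assert dfs_solve((0, 0), (2, 2), walls, 3) == greedy_solve((0, 0), (2, 2), walls, 3)
```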
We can attempt to formalize the connection with a set of hierarchical conjectures. Consider all of these conjectures to be caveated with "given certain assumptions", where "sufficiently complicated goals" and "sufficiently diverse environment" might be two such assumptions:
Expanding on each:
Conclusion
There are two ways for objects to be members of a class: behavioral and mechanistic. Humans are behaviorally honest if they tend to tell the truth and mechanistically honest if they value telling the truth. Algorithms can be behaviorally linear time-complexity if they tend to take time that scales linearly with the input length and mechanistically linear time-complexity if they’re provably in O(n).
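As a small illustration of the two readings (our sketch): the loop structure of the function below certifies the O(n) bound mechanistically, while the timing loop only establishes, behaviorally, that measured runtime tends to scale linearly with input length.

```python
import time

def total(xs):
    acc = 0
    for x in xs:  # one constant-time step per element: the structure certifies O(n)
        acc += x
    return acc

for n in (10_000, 100_000, 1_000_000):
    t0 = time.perf_counter()
    total(range(n))
    print(n, round(time.perf_counter() - t0, 4))  # observed times grow roughly 10x per 10x n
```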
Current discussion of goal-directedness has focused on behavioral membership. This neglects to consider mechanistically goal-directed agents, which are important for mechanistic strategies for addressing inner alignment, e.g., Relaxed adversarial training. Finally, we suggest that determining the connections between behavioral and mechanistic goal-directedness is a potentially fruitful area of research.