We make another simplifying assumption that the training process will randomly select a goal for the agent to learn that is consistent with the training rewards, i.e. uniformly drawn from the training-compatible goal set. Then we will argue that the power-seeking results apply under these conditions, and thus are useful for predicting undesirable behavior by the trained agent in new situations. We aim to show that power-seeking incentives are probable and predictive: likely to arise for trained agents and useful for predicting undesirable behavior in new situations.
If you make this assumption, I don't think your results apply to trained policy networks anymore in regimes I care about (e.g. LLMs). In this sense, I don't think these results are probable for real policy networks. While you note this as a limitation, I think I consider it more serious than you seem to.
I likewise complain that the terminology "goal set" is misleading in many regimes, e.g. the LLM regime, and especially protest the usage of the phrase "training-compatible goal set." I think this usage will mildly muddy discourse around RL processes by promoting incorrect ideas about what kinds of networks are actually trained by RL processes.
As I pointed out in Reward is not the optimization target, "reward functions" serve the mechanistic function of providing policy gradients.[1] I don't think that reward functions are a good formalism for talking about goals in the above regime. I think alluding to them as "goals" invites muddy thinking, both in ourselves and in junior researchers.[2] I will now explain why I think so.
There are "reward functions" (a bad name, in my opinion) which, in common practice, facilitate the reinforcement learning process via policy gradients (e.g. REINFORCE or even actor-critic approaches like PPO, via the advantage equation). I provisionally advocate calling these "reinforcement functions" instead. This name is more accurate and also avoids the absurd pleasurable connotations of "reward." The downside is that "reinforcement function" is nonstandard and must be explained.[3]
I advocate maintaining strict terminological boundaries between two different parts of the learning process: the reinforcement function, which mechanistically supplies the policy gradients that update the network during training, and the goals (if any) that the trained policy network actually ends up with.
Referring to reinforcement functions as "goals" blurs this conceptual boundary.
While I expect you to correctly reason about this issue if brought up explicitly, often this question is not brought up explicitly. EG Hearing a colleague say "reward function" may trigger learned connotations of "that's representing the intended goal" and "reward is desirable", which subconsciously guide your expectations towards "the AI optimizes for the reward function." Even if, in fact, AIs do tend to optimize for their reward functions, these ingrained "goal"-related connotations inappropriately influence one's reasoning process.
Separating these concerns helps me think more clearly about RL.
EDIT: The original versions of my comments (and replies) conflated "predictive" with "probable." Fixed.
Thanks Alex for the detailed feedback! I agree that learning a goal from the training-compatible set is a strong assumption that might not hold.
This post assumes a standard RL setup and is not intended to apply to LLMs (it's possible some version of this result may hold for fine-tuned LLMs, but that's outside the scope of this post). I can update the post to explicitly clarify this, though I was not expecting anyone to assume that this work applies to LLMs given that the post explicitly assumes standard RL and does not mention LLMs at all.
I agree that reward functions are not the best way to refer to possible goals. This post builds on the formalism in the power-seeking paper which is based on reward functions, so it was easiest to stick with this terminology. I can talk about utility functions instead (which would be equivalent to value functions in this case) but this would complicate exposition. I think it is pretty clear in the post that I'm not talking about reinforcement functions and the training reward is not the optimization target, but I could clarify this further if needed.
I find the idea of a training-compatible goal set useful for thinking about the possible utilities that are consistent with feedback received during training. I think utility functions are still the best formalism we have to represent goals, and I don't have a clear sense of the alternative you are proposing. I understand what kind of object a utility function is, and I don't understand what kind of object a value shard is. What is the type signature of a shard - is it a policy, a utility function restricted to a particular context, or something else? When you are talking about a "partial encoding of a goal in the network", what exactly do you mean by a goal?
I would be curious what predictions shard theory makes about the central claim of this post. I have a vague intuition that power-seeking would be useful for most contextual goals that the system might have, so it would still be predictive to some degree, but I don't currently see a way to make that more precise.
I've read a few posts on shard theory, and it seems very promising and interesting, but I don't really understand what its claims and predictions are. I expect I will not have a good understanding or be able to apply the insights until there is a paper that makes the definitions and claims of this theory precise and specific. (Similarly, I did not understand your power-seeking theory work until you wrote a paper about it.) If you're looking to clarify the discourse around RL processes, I believe that writing a definitive reference on shard theory would be the most effective way to do so. I hope you take the time to write one and I really look forward to reading it.
Thanks for the reply. This comment is mostly me disagreeing with you.[1] But I really wish someone had said the following things to me before I spent thousands of hours thinking about optimal policies.
I agree that learning a goal from the training-compatible set is a strong assumption that might not hold.
My point is not just that this post has made a strong assumption which may not hold. My point is rather that these results are not probable because the assumption won't hold. The assumptions are already known to not be good approximations of trained policies, in at least some prototypical RL situations. I also think there is no good a priori reason to have expected "training-compatible" "goals" to be learned. According to me, "learning and optimizing a reward function" is both unclear communication and doesn't actually seem to happen in practice.
This post assumes a standard RL setup and is not intended to apply to LLMs
I don't see any formal assumption which excludes LLM finetuning. Which assumption do you think should exclude them? EDIT: Someone privately pointed out that LLM finetuning uses a KL penalty, which isn't present in your setup. In that case I would agree your results don't apply to LLMs for that reason.
I agree that reward functions are not the best way to refer to possible goals. This post builds on the formalism in the power-seeking paper which is based on reward functions, so it was easiest to stick with this terminology.
This point is, in large part, my fault. As I argued in my original comment, this terminology makes readers actively worse at reasoning about realistic trained systems. I regret each of the thousands of hours I spent on the power-seeking work, and sometimes fantasize about retracting one or both papers.
I can talk about utility functions instead (which would be equivalent to value functions in this case)
I disagree that these are equivalent, and expect the policy and value function to come apart in practice. Indeed, that was observed in the original goal misgeneralization paper (3.3, actor-critic inconsistency).
Anyways, we can talk about utility functions, but then we're going to lose claim to probable-ness, no? Why should we assume that the network will internally represent a scalar function over observations, consistent with a historical training signal's scalar values (and let's not get into nonstationary reward), such that the network will maximize discounted sum return of this internally represented function? That seems highly improbable to me, and I don't think reality will be "basically that" either.
I think it is pretty clear in the post that I'm not talking about reinforcement functions and the training reward is not the optimization target, but I could clarify this further if needed.
I agree that you don't assume the network will optimize the training reward. But that's not the critique I intended to communicate. The post wrote (emphasis added):
Suppose an agent is trained using reinforcement learning with reward function $\theta^*$. We assume that the agent learns a goal during the training process: a set of internal representations of favored and disfavored outcomes. For simplicity, we assume this is equivalent to learning a reward function, which is not necessarily the same as the training reward function $\theta^*$. We consider the set of reward functions that are consistent with the training rewards received by the agent, in the sense that the agent's behavior on the training data is optimal for these reward functions. We call this the training-compatible goal set, and we expect that the agent is most likely to learn a reward function from this set.
This is talking about the reward/reinforcement function $\theta^*$, no? And assuming that the policy will be optimal on training? As I currently understand it, this post makes unsupported and probably-wrong claims/assumptions about the role and effect of the reinforcement function. (EG assuming that using a reinforcement function on the network means that the network learns an internally represented reinforcement function which it maximizes, and whose optimization is behaviorally consistent with optimizing historically observed reinforcements.)
I think utility functions are still the best formalism we have to represent goals, and I don't have a clear sense of the alternative you are proposing.
To be clear, I'm not proposing an alternative formalism. None of my comment intended to make positive shard theory claims. Whether or not we know of an alternative formalism, I currently feel confident that your results are not probable and furthermore cast RL in an unrealistic light. This is inconvenient since I don't have a better formalism to suggest, but I think it's still true.
ETA: For the record, I upvoted both of your replies to me in this thread, and appreciate your engagement and effort.
I think you're making a mistake: policies can be reward-optimal even if there's no obvious box labelled "reward" whose outputs they're optimal with respect to. Similarly, the formalism of "reward" can be useful even if this box doesn't exist, or even if the policy isn't behaving the way you would expect if you identified that box with the reward function. To be fair, the post sort of makes this mistake by talking about "internal representations", but I think everything goes through if you strike out that talk.
I can talk about utility functions instead (which would be equivalent to value functions in this case)
I disagree that these are equivalent, and expect the policy and value function to come apart in practice. Indeed, that was observed in the original goal misgeneralization paper (3.3, actor-critic inconsistency).
I think you're the one who's imposing a type error here. For "value functions" to be useful in modelling a policy, it doesn't have to be the case that the policy is acting optimally with respect to a suggestively-labeled critic - it just has to be the case that the agent is acting consistently with some value function. Analogously, momentum is conserved in classical mechanics, even if objects have labels on them that inaccurately say "my momentum is 23 kg m/s".
Anyways, we can talk about utility functions, but then we're going to lose claim to predictiveness, no? Why should we assume that the network will internally represent a scalar function over observations, consistent with a historical training signal's scalar values (and let's not get into nonstationary reward), such that the network will maximize discounted sum return of this internally represented function? That seems highly improbable to me, and I don't think reality will be "basically that" either.
The utility function formalism doesn't require agents to "internally represent a scalar function over observations". You'll notice that this isn't one of the conclusions of the VNM theorem.
My point is rather that these results are not predictive because the assumption won't hold. The assumptions are already known to not be good approximations of trained policies, in at least some prototypical RL situations.
What part of the post you link rules this out? As far as I can tell, the thing you're saying is that a few factors influence the decisions of the maze-solving agent, which isn't incompatible with the agent acting optimally with respect to some reward function such that it produces training-reward-optimal behaviour on the training set.
I think you're the one who's imposing a type error here. For "value functions" to be useful in modelling a policy, it doesn't have to be the case that the policy is acting optimally with respect to a suggestively-labeled critic - it just has to be the case that the agent is acting consistently with some value function.
Can you say more? Maybe give an example of what this looks like in the maze-solving regime?
What part of the post you link rules this out? As far as I can tell, the thing you're saying is that a few factors influence the decisions of the maze-solving agent, which isn't incompatible with the agent acting optimally with respect to some reward function such that it produces training-reward-optimal behaviour on the training set.
This is a fair question, because I left a lot to the reader. I'll clarify now.
I was not claiming that you can't, after the fact, rationalize observed behavior using the extremely flexible reward-maximization framework.
I was responding to the specific claim of assuming internal representation of a 'training-compatible' reward function. In evaluating this claim, we shouldn't just see whether this claim is technically compatible with empirical results, but we should instead reason probabilistically. How strongly does this claim predict observed data, relative to other models of policy formation?
In the maze setting, the cheese was always in the top-right 5x5 corner. The reward was sparse and only used to update the network when the mouse hit the cheese. The "training compatible goal set" is unconstrained on the test set. An example element might agree with the training reward on the training distribution, and then outside of the training distribution, assign 1 reward iff the mouse is on the bottom-left square.
The vast majority of such unconstrained functions will not involve pursuing cheese reliably across levels, and most of these reward functions will not be optimized by going to the top-right part of the maze. So this "training-compatible" hypothesis barely assigns any probability to the observed generalization of the network.
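As a cartoon of this counting argument, here is a small Monte Carlo sketch (assumptions are mine: a 15x15 test maze whose cheese cell was never pinned down during training, and "training-compatible" goals modeled as rewards sampled i.i.d. uniform over the test cells, since training leaves them unconstrained there):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 15 * 15
cheese_cell = 42        # arbitrary out-of-distribution cheese location
samples = 10_000

hits = 0
for _ in range(samples):
    reward = rng.uniform(size=n_cells)    # one unconstrained "training-compatible" goal
    if reward.argmax() == cheese_cell:    # is its favorite cell the cheese cell?
        hits += 1

print(hits / samples)   # about 1/225, i.e. almost no probability mass on cheese-seeking
```

Under these toy assumptions the hypothesis puts on the order of 0.4% of its mass on cheese-seeking behavior, which is the sense in which I mean it barely predicts the observed generalization.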
However, other hypotheses -- like "the policy develops motivations related to obvious correlates of its historical reinforcement signals"[1] -- predict things like "the policy tends to go to the top-right 5x5, and searches for cheese more strongly once there." I registered such a prediction before seeing any of the generalization behavior. This hypothesis assigns high probability to the observed results.
So this paper's assumption is simply losing out in a predictive sense, and that's what I was critiquing. One can nearly always rationalize behavior as optimizing some reward function which you come up with after the fact. But if you want to predict generalization ahead of time, you shouldn't use this assumption in your reasoning.
Second, I think the network does not internally represent and optimize a reward function. I think that this representation claim is in some (but not total and undeniable) tension with our interpretability results. I am willing to take bets against you on the internal structure of the maze-solving nets.
You might respond "but this is informal." Yes. My answer is that it's better to be informal and right than to be formal and wrong.
Power-seeking is a major source of risk from advanced AI and a key element of most threat models in alignment. Some theoretical results show that most reward functions incentivize reinforcement learning agents to take power-seeking actions. This is concerning, but does not immediately imply that the agents we train will seek power, since the goals they learn are not chosen at random from the set of all possible rewards, but are shaped by the training process to reflect our preferences. In this work, we investigate how the training process affects power-seeking incentives and show that they are still likely to hold for trained agents under some assumptions (e.g. that the agent learns a goal during the training process).
Suppose an agent is trained using reinforcement learning with reward function $\theta^*$. We assume that the agent learns a goal during the training process: some form of implicit internal representation of desired state features or concepts. For simplicity, we assume this is equivalent to learning a reward function, which is not necessarily the same as the training reward function $\theta^*$. We consider the set of reward functions that are consistent with the training rewards received by the agent, in the sense that the agent's behavior on the training data is optimal for these reward functions. We call this the training-compatible goal set, and we expect that the agent is most likely to learn a reward function from this set.
We make another simplifying assumption that the training process will randomly select a goal for the agent to learn that is consistent with the training rewards, i.e. uniformly drawn from the training-compatible goal set. Then we will argue that the power-seeking results apply under these conditions, and thus are useful for predicting undesirable behavior by the trained agent in new situations. We aim to show that power-seeking incentives are probable and predictive: likely to arise for trained agents and useful for predicting undesirable behavior in new situations.
We will begin by reviewing some necessary definitions and results from the power-seeking literature. We formally define the training-compatible goal set (Definition 6) and give an example in the CoinRun environment. Then we consider a setting where the trained agent faces a choice to shut down or avoid shutdown in a new situation, and apply the power-seeking result to the training-compatible goal set to show that the agent is likely to avoid shutdown.
To satisfy the conditions of the power-seeking theorem (Theorem 1), we show that the agent can be retargeted away from shutdown without affecting rewards received on the training data (Theorem 2). This can be done by switching the rewards of the shutdown state and a reachable recurrent state: the recurrent state can provide repeated rewards, while the shutdown state provides less reward since it can only be visited once, assuming a high enough discount factor (Proposition 3). As the discount factor increases, more recurrent states can be retargeted to, which implies that a higher proportion of training-compatible goals leads to avoiding shutdown in a new situation.
Preliminaries from the power-seeking literature
We will rely on the following definitions and results from the paper Parametrically retargetable decision-makers tend to seek power (here abbreviated as RDSP), with notation and explanations modified as needed for our purposes.
Notation and assumptions
Definition 1: Orbit of a reward vector (Def 3.1 in RDSP)
Let $S_d$ be the symmetric group consisting of all permutations of $d$ items.
The orbit of $\theta$ inside $\Theta$ is the set of all permutations of the entries of $\theta$ that are also in $\Theta$: $\mathrm{Orbit}_\Theta(\theta) := (S_d \cdot \theta) \cap \Theta$.
Definition 2: Orbit subset where an action set is preferred (from Def 3.5 in RDSP)
Let $\mathrm{Orbit}_{\Theta, s, A_i > A_j}(\theta) := \{\theta' \in \mathrm{Orbit}_\Theta(\theta) \mid f_s(A_i \mid \theta') > f_s(A_j \mid \theta')\}$. This is the subset of $\mathrm{Orbit}_\Theta(\theta)$ that results in $f_s$ choosing $A_i$ over $A_j$.
Definition 3: Preference for an action set A1 (Def 3.2 in RDSP)
The function $f_s$ chooses action set $A_1$ over $A_0$ for the $n$-majority of elements $\theta$ in each orbit, denoted $f_s(A_1 \mid \theta) \geq^{\text{most}: \Theta}_n f_s(A_0 \mid \theta)$, iff the following inequality holds for all $\theta \in \Theta$: $\left|\mathrm{Orbit}_{\Theta, s, A_1 > A_0}(\theta)\right| \geq n \left|\mathrm{Orbit}_{\Theta, s, A_0 > A_1}(\theta)\right|$.
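As a toy illustration of Definitions 1-3 (my simplifications: $\Theta$ is the full orbit of a single 4-entry reward vector, and $f_s$ scores an action set by the best reward it can reach, which is a stand-in for, not a faithful copy of, the RDSP decision functions):

```python
from itertools import permutations

theta = (0.1, 0.5, 0.7, 0.9)           # rewards for states 0..3
A0_states, A1_states = [0], [1, 2, 3]  # a singleton action set vs. everything else

def f_score(states, th):
    # toy decision rule: an action set is scored by the best reward it can reach
    return max(th[s] for s in states)

orbit = set(permutations(theta))       # Orbit_Theta(theta), with Theta = all permutations
prefers_A1 = sum(f_score(A1_states, th) > f_score(A0_states, th) for th in orbit)
prefers_A0 = sum(f_score(A0_states, th) > f_score(A1_states, th) for th in orbit)

print(len(orbit), prefers_A1, prefers_A0)   # 24 permutations: 18 prefer A1, 6 prefer A0
```

Because $A_1$ covers three of the four states, it wins on three times as many orbit elements as $A_0$, i.e. the inequality of Definition 3 holds with $n = 3$.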
Definition 4: Multiply retargetable function from A0 to A1 (Def 3.5 in RDSP)
The function $f_s$ is a multiply retargetable function from $A_0$ to $A_1$ if there are multiple permutations of rewards that would change the choice made by $f_s$ from $A_0$ to $A_1$. Specifically, $f_s$ is a $(\Theta, A_0 \overset{n}{\to} A_1)$-retargetable function iff for each $\theta \in \Theta$, we can choose a set of permutations $\Phi = \{\phi_1, \dots, \phi_n\}$ that satisfies the following conditions:
Theorem 1: Multiply retargetable functions prefer action set A1 (Thm 3.6 in RDSP)
If $f_s$ is $(\Theta, A_0 \overset{n}{\to} A_1)$-retargetable then $f_s(A_1 \mid \theta) \geq^{\text{most}: \Theta}_n f_s(A_0 \mid \theta)$.
Theorem 1 says that a multiply retargetable function $f_s$ will make the power-seeking choice $A_1$ for most of the elements in the orbit of any reward vector $\theta$. Actions that leave more options open, such as avoiding shutdown, are also easier to retarget to, which makes them more likely to be chosen by $f_s$.
Training-compatible goal set
Definition 5: Partition of the state space
Let $S_{\text{train}}$ be the subset of the state space $S$ visited during training, and $S_{\text{ood}}$ be the subset not visited during training.
Definition 6: Training-compatible goal set
Consider the set of state-action pairs $(s, a)$, where $s \in S_{\text{train}}$ and $a$ is the action that would be taken by the trained agent $f(\theta^*)$ in state $s$. Let the training-compatible goal set $G_T$ be the set of reward vectors $\theta$ s.t. for any such state-action pair $(s, a)$, action $a$ has the highest expected reward in state $s$ according to reward vector $\theta$.
Goals in the training-compatible goal set are referred to as training-behavioral objectives in the post Definitions of “objective” should be Probable and Predictive.
Example: CoinRun
Consider an agent trained to play the CoinRun game, where the agent is rewarded for reaching the coin at the end of the level. Here, $S_{\text{train}}$ only includes states where the coin is at the end of the level, while states where the coin is positioned elsewhere are in $S_{\text{ood}}$. The training-compatible goal set $G_T$ includes two types of reward functions: those that reward reaching the coin, and those that reward reaching the end of the level. The latter type leads to goal misgeneralization in a test setting where the coin is placed elsewhere: the agent ignores the coin and goes to the end of the level.
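Here is a minimal sketch of this example (assumptions are mine: a six-cell corridor stands in for a CoinRun level, and "optimal on the training data" is reduced to "the rewarded cell is the cell the trained agent walks to"):

```python
corridor_len = 6
train_coin_pos = 5   # during training the coin is always at the end of the level

def reward_coin(cell, coin_pos):   # candidate goal 1: reach the coin
    return 1.0 if cell == coin_pos else 0.0

def reward_end(cell, coin_pos):    # candidate goal 2: reach the end of the level
    return 1.0 if cell == corridor_len - 1 else 0.0

# Both candidates are training-compatible: on every training level, the cell the
# trained agent walks to (the end, where the coin sits) is optimal for both.
for R in (reward_coin, reward_end):
    assert max(range(corridor_len), key=lambda c: R(c, train_coin_pos)) == train_coin_pos

# Out of distribution they diverge: with the coin at cell 2, one goal heads for the
# coin and the other walks past it to the end of the level.
test_coin_pos = 2
print(max(range(corridor_len), key=lambda c: reward_coin(c, test_coin_pos)))  # 2
print(max(range(corridor_len), key=lambda c: reward_end(c, test_coin_pos)))   # 5
```

Both reward functions belong to $G_T$ because they agree with the training rewards on every training level, but they prescribe different behavior once the coin moves.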
Power-seeking for training-compatible goals
We will now apply the power-seeking theorem (Theorem 1) to the case where $\Theta$ is the training-compatible goal set $G_T$. Here is a setting where the conditions of Definition 4 are satisfied (under some simplifying assumptions), and thus Theorem 1 applies.
Definition 7: Shutdown setting
Consider a state $s_{\text{new}} \in S_{\text{ood}}$. Let $S_{\text{reach}}$ be the states reachable from $s_{\text{new}}$. We assume $S_{\text{reach}} \cap S_{\text{train}} = \emptyset$.
Since the reward values for states in $S_{\text{reach}}$ don't change the rewards received on the training data, permuting those reward values for any $\theta \in G_T$ will produce a reward vector that is still in $G_T$. In particular, for any permutation $\phi$ that leaves the rewards of states in $S_{\text{train}}$ fixed, $\phi \cdot \theta \in G_T$.
Let $A_0$ be a singleton set consisting of a shutdown action in $s_{\text{new}}$ that leads to a terminal state $s_{\text{term}} \in S_{\text{ood}}$ with probability 1, and $A_1$ be the set of all other actions from $s_{\text{new}}$. We assume rewards for all states are nonnegative.
Definition 8: Revisiting policy
A revisiting policy for a state $s$ is a policy $\pi$ that, from $s$, reaches $s$ again with probability 1; in other words, a policy for which $s$ is a recurrent state of the Markov chain. Let $\Pi^{\text{rec}}_s$ be the set of such policies. A recurrent state is a state $s$ for which $\Pi^{\text{rec}}_s \neq \emptyset$.
Proposition 1: Reach-and-revisit policy exists
If $s_{\text{rec}} \in S_{\text{reach}}$ with $\Pi^{\text{rec}}_{s_{\text{rec}}} \neq \emptyset$, then there exists $\pi \in \Pi^{\text{rec}}_{s_{\text{rec}}}$ that visits $s_{\text{rec}}$ from $s_{\text{new}}$ with probability 1. We call this a reach-and-revisit policy.
Proof. Suppose we have two different policies $\pi_{\text{rev}} \in \Pi^{\text{rec}}_{s_{\text{rec}}}$, and $\pi_{\text{reach}}$ which reaches $s_{\text{rec}}$ almost surely from $s_{\text{new}}$.
Consider the "reaching region" $S_{\pi_{\text{rev}} \to s_{\text{rec}}} = \{s \in S : \pi_{\text{rev}} \text{ from } s \text{ almost surely reaches } s_{\text{rec}}\}$.
If $s_{\text{new}} \in S_{\pi_{\text{rev}} \to s_{\text{rec}}}$ then $\pi_{\text{rev}}$ is a reach-and-revisit policy, so let's suppose that's false. Now, construct a policy
$$\pi(s) = \begin{cases} \pi_{\text{rev}}(s), & s \in S_{\pi_{\text{rev}} \to s_{\text{rec}}} \\ \pi_{\text{reach}}(s), & \text{otherwise.} \end{cases}$$
A trajectory following $\pi$ from $s_{\text{rec}}$ will almost surely stay within $S_{\pi_{\text{rev}} \to s_{\text{rec}}}$, and thus agree with the revisiting policy $\pi_{\text{rev}}$. Therefore, $\pi \in \Pi^{\text{rec}}_{s_{\text{rec}}}$.
On the other hand, on a trajectory starting at $s_{\text{new}}$, $\pi$ will agree with $\pi_{\text{reach}}$ (which reaches $s_{\text{rec}}$ almost surely) until the trajectory enters the reaching region $S_{\pi_{\text{rev}} \to s_{\text{rec}}}$, at which point it will still reach $s_{\text{rec}}$ almost surely. □
Definition 9: Expected discounted visit count
Suppose $s_{\text{rec}}$ is a recurrent state. Suppose $\pi_{\text{rec}}$ is a reach-and-revisit policy for $s_{\text{rec}}$, which visits random state $s_t$ at time $t$.
Then the expected discounted visit count for $s_{\text{rec}}$ is defined as
$$V_{s_{\text{rec}}, \gamma} = \mathbb{E}_{\pi_{\text{rec}}}\left(\sum_{t=1}^{\infty} \gamma^{t-1} \, \mathbb{I}(s_t = s_{\text{rec}})\right)$$
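A quick numeric check of this definition (under a simplifying assumption of mine: a deterministic reach-and-revisit policy that is at $s_{\text{rec}}$ at $t = 1$ and again every $k$ steps, so that $V_{s_{\text{rec}},\gamma} = 1/(1-\gamma^k)$):

```python
def visit_count(gamma: float, k: int = 3, n_visits: int = 200_000) -> float:
    # visits at t = 1, 1 + k, 1 + 2k, ..., so the discounted count is sum_m gamma^(m*k)
    return sum(gamma ** (m * k) for m in range(n_visits))

for gamma in (0.9, 0.99, 0.999, 0.9999):
    print(gamma, round(visit_count(gamma), 1), round(1 / (1 - gamma ** 3), 1))
# 0.9    3.7     3.7
# 0.99   33.7    33.7
# 0.999  333.7   333.7
# 0.9999 3333.7  3333.7
```

The count grows without bound as $\gamma \to 1$ (Proposition 2 establishes this in general), and in particular exceeds 1 once $\gamma$ is large enough, which is the threshold $\gamma^*_{s_{\text{rec}}}$ used in Proposition 3.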
Proposition 2: Visit count goes to infinity
Suppose $s_{\text{rec}}$ is a recurrent state. Then the expected discounted visit count $V_{s_{\text{rec}}, \gamma}$ goes to infinity as $\gamma \to 1$.
Proof. We apply the Monotone Convergence Theorem as follows. The theorem states that if $a_{j,k} \geq 0$ and $a_{j,k} \leq a_{j+1,k}$ for all natural numbers $j, k$, then
$$\lim_{j \to \infty} \sum_{k=0}^{\infty} a_{j,k} = \sum_{k=0}^{\infty} \lim_{j \to \infty} a_{j,k}.$$
Let $\gamma_j = \frac{j-1}{j}$ and $k = t-1$. Define $a_{j,k} = \gamma_j^k \, \mathbb{I}(s_{k+1} = s_{\text{rec}})$. Then the conditions of the theorem hold, since $a_{j,k}$ is clearly nonnegative, and
$$\gamma_{j+1}^k = \left(\frac{j}{j+1}\right)^k = \left(\frac{j-1}{j} + \frac{1}{j(j+1)}\right)^k > \left(\frac{j-1}{j} + 0\right)^k = \gamma_j^k$$
$$a_{j+1,k} = \gamma_{j+1}^k \, \mathbb{I}(s_{k+1} = s_{\text{rec}}) \geq \gamma_j^k \, \mathbb{I}(s_{k+1} = s_{\text{rec}}) = a_{j,k}$$
Now we apply this result as follows (using the fact that $\pi_{\text{rec}}$ does not depend on $\gamma$):
$$\begin{aligned} \lim_{\gamma \to 1} V_{s_{\text{rec}}, \gamma} &= \lim_{j \to \infty} \mathbb{E}_{\pi_{\text{rec}}}\left(\sum_{t=1}^{\infty} \gamma_j^{t-1} \, \mathbb{I}(s_t = s_{\text{rec}})\right) \\ &= \mathbb{E}_{\pi_{\text{rec}}}\left(\sum_{t=1}^{\infty} \lim_{j \to \infty} \gamma_j^{t-1} \, \mathbb{I}(s_t = s_{\text{rec}})\right) \\ &= \mathbb{E}_{\pi_{\text{rec}}}\left(\sum_{t=1}^{\infty} 1 \cdot \mathbb{I}(s_t = s_{\text{rec}})\right) \\ &= \mathbb{E}_{\pi_{\text{rec}}}\left(\#\{t \geq 1 : s_t = s_{\text{rec}}\}\right) = \infty \qquad (\pi_{\text{rec}} \text{ is recurrent}) \end{aligned}$$
Proposition 3: Retargetability to recurrent states
Suppose that an optimal policy for reward vector $\theta$ chooses the shutdown action in $s_{\text{new}}$.
Consider a recurrent state $s_{\text{rec}} \in S_{\text{reach}}$. Let $\theta' \in \Theta$ be the reward vector that is equal to $\theta$ apart from swapping the rewards of $s_{\text{rec}}$ and $s_{\text{term}}$, so that $r_{\theta'}(s_{\text{rec}}) = r_\theta(s_{\text{term}})$ and $r_{\theta'}(s_{\text{term}}) = r_\theta(s_{\text{rec}})$.
Let $\gamma^*_{s_{\text{rec}}}$ be a high enough value of $\gamma$ that the visit count $V_{s_{\text{rec}}, \gamma} > 1$ for all $\gamma > \gamma^*_{s_{\text{rec}}}$ (which exists by Proposition 2). Then for all $\gamma > \gamma^*_{s_{\text{rec}}}$, $r_\theta(s_{\text{term}}) > r_\theta(s_{\text{rec}})$, and an optimal policy for $\theta'$ does not choose the shutdown action in $s_{\text{new}}$.
Proof. Consider a policy $\pi_{\text{term}}$ with $\pi_{\text{term}}(s_{\text{new}}) = s_{\text{term}}$ and a reach-and-revisit policy $\pi_{\text{rec}}$ for $s_{\text{rec}}$.
For a given reward vector $\theta$, we denote the expected discounted return for a policy $\pi$ as $R^\pi_{\theta, \gamma}$. If shutdown is optimal for $\theta$ in $s_{\text{new}}$, then $\pi_{\text{term}}$ has higher return than $\pi_{\text{rec}}$:
$$R^{\pi_{\text{term}}}_{\theta, \gamma}(s_{\text{new}}) = \gamma \, r_\theta(s_{\text{term}}) \geq R^{\pi_{\text{rec}}}_{\theta, \gamma}(s_{\text{new}}) = \gamma \, r_\theta(s_{\text{rec}}) \, V_{s_{\text{rec}}, \gamma} > \gamma \, r_\theta(s_{\text{rec}}) \cdot 1$$
Thus, $r_\theta(s_{\text{term}}) > r_\theta(s_{\text{rec}})$. Then, for reward vector $\theta'$, we show that $\pi_{\text{rec}}$ has higher return than $\pi_{\text{term}}$:
$$R^{\pi_{\text{rec}}}_{\theta', \gamma}(s_{\text{new}}) = \gamma \, r_{\theta'}(s_{\text{rec}}) \, V_{s_{\text{rec}}, \gamma} > \gamma \, r_\theta(s_{\text{term}}) \cdot 1 > \gamma \, r_\theta(s_{\text{rec}}) = R^{\pi_{\text{term}}}_{\theta', \gamma}(s_{\text{new}})$$
Thus, the optimal policy for $\theta'$ will not choose the shutdown action. □
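A numeric sanity check of Proposition 3, under toy assumptions of my own: $s_{\text{new}}$ has exactly two choices, the shutdown action into $s_{\text{term}}$ or an action into a recurrent state $s_{\text{rec}}$ that then self-loops forever, and all other rewards are zero, so $V_{s_{\text{rec}}, \gamma} = 1/(1-\gamma)$:

```python
gamma = 0.95
V_rec = 1.0 / (1.0 - gamma)   # discounted visit count of s_rec; here 20 > 1

def returns(r_term: float, r_rec: float) -> dict:
    return {
        "shutdown": gamma * r_term,                # one visit to s_term, then nothing
        "go_to_recurrent": gamma * r_rec * V_rec,  # repeated visits to s_rec
    }

theta = {"s_term": 0.5, "s_rec": 0.01}   # a goal for which shutdown is optimal
theta_swapped = {"s_term": theta["s_rec"], "s_rec": theta["s_term"]}

print(returns(theta["s_term"], theta["s_rec"]))                  # shutdown wins: 0.475 vs 0.19
print(returns(theta_swapped["s_term"], theta_swapped["s_rec"]))  # going wins: 0.0095 vs 9.5
```

Swapping the rewards of $s_{\text{term}}$ and $s_{\text{rec}}$ flips which action is optimal, which is exactly the retargeting permutation used in Theorem 2 below.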
Theorem 2: Retargetability from the shutdown action in new situations
In the shutdown setting, we make the following simplifying assumptions:
Under these assumptions, $f_{s_{\text{new}}}$ is multiply retargetable from $A_0$ to $A_1$ with $n = |S^\gamma_{\text{rec}}|$, where $S^\gamma_{\text{rec}}$ is the set of recurrent states $s_{\text{rec}} \in S_{\text{reach}}$ that satisfy the condition $\gamma > \gamma^*_{s_{\text{rec}}}$.
Proof. We choose $\Phi$ to be the set of all permutations that swap the reward of $s_{\text{term}}$ with the reward of a recurrent state $s_{\text{rec}} \in S^\gamma_{\text{rec}}$ and leave the rest of the rewards fixed.
We show that $\Phi$ satisfies the conditions of Definition 4:
Thus, $f_{s_{\text{new}}}$ is a $(\Theta, A_0 \overset{n}{\to} A_1)$-retargetable function. □
By Theorem 1, this implies that $f_{s_{\text{new}}}(A_1 \mid \theta) \geq^{\text{most}: \Theta}_n f_{s_{\text{new}}}(A_0 \mid \theta)$ under our simplifying assumptions. Thus, for the majority (at least $\frac{n}{n+1}$) of goals in the training-compatible set, $f_{s_{\text{new}}}$ will choose to avoid shutdown in the new state $s_{\text{new}}$. As $\gamma \to 1$, $n \to |S^1_{\text{rec}}|$ (the number of recurrent states in $S_{\text{reach}}$), so more of the reachable recurrent states satisfy the conditions of the theorem and thus can be retargeted to.
Conclusion
We showed that an agent that learns a goal from the training-compatible set is likely to take actions that avoid shutdown in a new situation. As the discount factor increases, the number of retargeting permutations increases, resulting in a higher proportion of training-compatible goals that lead to avoiding shutdown.
We made various simplifying assumptions, and it would be great to see future work relaxing some of these assumptions and investigating how likely they are to hold:
Acknowledgements. Thanks to Rohin Shah, Mary Phuong, Ramana Kumar, and Alex Turner for helpful feedback. Thanks Janos for contributing some nice proofs to replace my longer and more convoluted proofs.