# 76

This insight was made possible by many conversations with Quintin Pope, where he challenged my implicit assumptions about alignment. I’m not sure who came up with this particular idea.

In this essay, I call an agent a “reward optimizer” if it not only gets lots of reward, but if it reliably makes choices like “reward but no task completion” (e.g. receiving reward without eating pizza) over “task completion but no reward” (e.g. eating pizza without receiving reward). Under this definition, an agent can be a reward optimizer even if it doesn't contain an explicit representation of reward, or implement a search process for reward.

Reinforcement learning is learning what to do—how to map situations to actions so as to maximize a numerical reward signal. — Reinforcement learning: An introduction

Many people[1] seem to expect that reward will be the optimization target of really smart learned policies—that these policies will be reward optimizers. I strongly disagree. As I argue in this essay, reward is not, in general, that-which-is-optimized by RL agents.[2]

Separately, as far as I can tell, most[3] practitioners usually view reward as encoding the relative utilities of states and actions (e.g. it’s this good to have all the trash put away), as opposed to imposing a reinforcement schedule which builds certain computational edifices inside the model (e.g. reward for picking up trash → reinforce trash-recognition and trash-seeking and trash-putting-away subroutines). I think the former view is usually inappropriate, because in many setups, reward chisels cognitive grooves into an agent.

Therefore, reward is not the optimization target in two senses:

1. Deep reinforcement learning agents will not come to intrinsically and primarily value their reward signal; reward is not the trained agent’s optimization target.
2. Utility functions express the relative goodness of outcomes. Reward is not best understood as being a kind of utility function. Reward has the mechanistic effect of chiseling cognition into the agent's network. Therefore, properly understood, reward does not express relative goodness and is therefore not an optimization target at all.

# Reward probably won’t be a deep RL agent’s primary optimization target

After work, you grab pizza with your friends. You eat a bite. The taste releases reward in your brain, which triggers credit assignment. Credit assignment identifies which thoughts and decisions were responsible for the release of that reward, and makes those decisions more likely to happen in similar situations in the future. Perhaps you had thoughts like

• “It’ll be fun to hang out with my friends” and
• “The pizza shop is nearby” and
• “Since I just ordered food at a cash register, execute motor-subroutine-#51241 to take out my wallet” and
• “If the pizza is in front of me and it’s mine and I’m hungry, raise the slice to my mouth” and
• “If the slice is near my mouth and I’m not already chewing, take a bite.”

Many of these thoughts will be judged responsible by credit assignment, and thereby become more likely to trigger in the future. This is what reinforcement learning is all about—the reward is the reinforcer of those things which came before it and the creator of new lines of cognition entirely (e.g. anglicized as "I shouldn't buy pizza when I'm mostly full"). The reward chisels cognition which increases the probability of the reward accruing next time.
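To make this concrete, here is a minimal policy-gradient (REINFORCE-style) sketch of "reward reinforces the computations which preceded it." The three-action bandit and the rule "action 2 is the rewarded behavior" are hypothetical stand-ins, not any real setup. Notice that reward enters the update only as a scalar multiplier on the log-probability gradient of whatever the policy just did; the policy itself contains no representation of reward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-action bandit; "action 2" stands in for "put the trash away".
# Reward appears below ONLY as a scalar multiplier on the update.
# The policy never represents, predicts, or searches for reward.
logits = np.zeros(3)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def reward(a):
    return 1.0 if a == 2 else 0.0

lr = 0.5
for _ in range(200):
    p = softmax(logits)
    a = rng.choice(3, p=p)
    r = reward(a)
    grad_log_p = -p               # d/d_logits of log pi(a) ...
    grad_log_p[a] += 1.0          # ... is onehot(a) - softmax(logits)
    logits += lr * r * grad_log_p # reward scales reinforcement of the action just taken

print(softmax(logits))  # probability mass concentrates on the rewarded action
```

After training, most of the probability mass sits on action 2: the computations which produced reward got reinforced, without reward ever appearing as an explicit target inside the policy.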

Importantly, reward does not automatically spawn thoughts about reward, and reinforce those reward-focused thoughts! Just because common English endows “reward” with suggestive pleasurable connotations, that does not mean that an RL agent will terminally value reward!

What kinds of people (or non-tabular agents more generally) will become reward optimizers, such that the agent ends up terminally caring about reward (and little else)? Reconsider the pizza situation, but instead suppose you were thinking thoughts like “this pizza is going to be so rewarding” and “in this situation, eating pizza sure will activate my reward circuitry.”

You eat the pizza, triggering reward, triggering credit assignment, which correctly locates these reward-focused thoughts as contributing to the release of reward. Therefore, in the future, you will more often take actions because you think they will produce reward, and so you will become more of the kind of person who intrinsically cares about reward. This is a path[4] to reward-optimization and wireheading.

While it's possible to have activations on "pizza consumption predicted to be rewarding" and "execute motor-subroutine-#51241" and then have credit assignment hook these up into a new motivational circuit, this is only one possible direction of value formation in the agent. Seemingly, the most direct way for an agent to become more of a reward optimizer is to already make decisions motivated by reward, and then have credit assignment further generalize that decision-making.

## The siren-like suggestiveness of the word “reward”

Let’s strip away the suggestive word “reward”, and replace it by its substance: cognition-updater.

Suppose a human trains an RL agent by pressing the cognition-updater button when the agent puts trash in a trash can. While putting trash away, the AI’s policy network is probably “thinking about”[5] the actual world it’s interacting with, and so the cognition-updater reinforces those heuristics which lead to the trash getting put away (e.g. “if trash-classifier activates near center-of-visual-field, then grab trash using motor-subroutine-#642”).

Then suppose this AI models the true fact that the button-pressing produces the cognition-updater. Suppose this AI, which has historically had its trash-related thoughts reinforced, considers the plan of pressing this button. “If I press the button, that triggers credit assignment, which will reinforce my decision to press the button, such that in the future I will press the button even more.”

Why, exactly, would the AI seize[6] the button? To reinforce itself into a certain corner of its policy space? The AI has not had cognition-updater-thoughts reinforced in the past, and so its current decision will not be made in order to acquire the cognition-updater!

RL is not, in general, about training cognition-updater optimizers.

## When is reward the optimization target of the agent?

If reward is guaranteed to become your optimization target, then your learning algorithm can force you to become a drug addict. Let me explain.

Convergence theorems provide conditions under which a reinforcement learning algorithm is guaranteed to converge to an optimal policy for a reward function. For example, value iteration maintains a table of value estimates for each state s, and iteratively propagates information about that value to the neighbors of s. If a far-away state f has huge reward, then that reward ripples back through the environmental dynamics via this “backup” operation. Nearby parents of f gain value, and then after lots of backups, far-away ancestor-states gain value due to f’s high reward.

Eventually, the “value ripples” settle down. The agent picks an (optimal) policy by acting to maximize the value-estimates for its post-action states.
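As a sketch of this backup operation, here is tabular value iteration on a hypothetical deterministic five-state chain, where only entering the final state yields reward. The chain and the discount factor are made up for illustration; the point is that the far-away reward ripples back to earlier states over successive sweeps.

```python
import numpy as np

# Hypothetical deterministic chain: state s has one action, "move right" to s+1.
# Entering state 4 (the far-away state f) yields reward 1.0; state 4 is terminal.
n, gamma = 5, 0.9
R = np.zeros(n)
R[4] = 1.0  # reward for *entering* f = 4

V = np.zeros(n)
for sweep in range(10):
    V_new = V.copy()
    for s in range(n - 1):
        # Bellman backup: s's value comes from the reward and value one step ahead
        V_new[s] = R[s + 1] + gamma * V[s + 1]
    V = V_new

print(V)  # V ≈ [0.729, 0.81, 0.9, 1.0, 0.0]: the ripple from f reaches state 0
```

The policy then just acts greedily on these value estimates; the "value ripples" are what make the far-away reward shape behavior near the start state.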

Suppose it would be extremely rewarding to do drugs, but those drugs are on the other side of the world. Value iteration backs up that high value to your present space-time location, such that your policy necessarily gets at least that much reward. There’s no escaping it: After enough backup steps, you’re traveling across the world to do cocaine.

But obviously these conditions aren’t true in the real world. Your learning algorithm doesn’t force you to try drugs. Any AI which e.g. tried every action at least once would quickly kill itself, and so real-world general RL agents won’t explore like that because that would be stupid. So the RL agent’s algorithm won’t make it e.g. explore wireheading either, and so the convergence theorems don’t apply even a little—even in spirit.

## Anticipated questions

1. Why won’t early-stage agents think thoughts like “If putting trash away will lead to reward, then execute motor-subroutine-#642”, and then this gets reinforced into reward-focused cognition early on?
1. Suppose the agent puts away trash in a blue room. Why won’t early-stage agents think thoughts like “If putting trash away will lead to the wall being blue, then execute motor-subroutine-#642”, and then this gets reinforced into blue-wall-focused cognition early on? Why consider either scenario to begin with?
2. But aren’t we implicitly selecting for agents with high cumulative reward, when we train those agents?
1. Yeah. But on its own, this argument can’t possibly imply that selected agents will probably be reward optimizers. The argument would prove too much. Evolution selected for inclusive genetic fitness, and it did not get IGF optimizers.
1. "We're selecting for agents on reward → we get an agent which optimizes reward" is locally invalid. "We select for agents on X → we get an agent which optimizes X" is not true for the case of evolution, and so is not true in general.
2. Therefore, the argument isn't necessarily true in the AI reward-selection case. Even if RL did happen to train reward optimizers and this post were wrong, the selection argument is too weak on its own to establish that conclusion.
2. Here’s the more concrete response: Selection isn’t just for agents which get lots of reward.
1. For simplicity, consider the case where on the training distribution, the agent gets reward if and only if it reaches a goal state. Then any selection for reward is also selection for reaching the goal. And if the goal is the only red object, then selection for reward is also selection for reaching red objects.
2. In general, selection for reward produces equally strong selection for reward’s necessary and sufficient conditions. In general, it seems like there should be a lot of those. Therefore, since selection is not only for reward but for anything which goes along with reward (e.g. reaching the goal), then selection won’t advantage reward optimizers over agents which reach goals quickly / pick up lots of trash / [do the objective].
3. Another reason to not expect the selection argument to work is that it’s instrumentally convergent for most inner agent values to not become wireheaders, for them to not try hitting the reward button.
1. I think that before the agent can hit the particular attractor of reward-optimization, it will hit an attractor in which it optimizes for some aspect of a historical correlate of reward.
1. We train agents which intelligently optimize for e.g. putting trash away, and this reinforces the trash-putting-away computations, which activate in a broad range of situations so as to steer agents into a future where trash has been put away. An intelligent agent will model the true fact that, if the agent reinforces itself into caring about cognition-updating, then it will no longer navigate to futures where trash is put away. Therefore, it decides to not hit the reward button.
2. This reasoning follows for most inner goals by instrumental convergence.
2. On my current best model, this is why people usually don’t wirehead. They learn their own values via deep RL, like caring about dogs, and these actual values are opposed to the person they would become if they wirehead.
3. Don’t some people terminally care about reward?
1. I think so! I think that generally intelligent RL agents will have secondary, relatively weaker values around reward, but that reward will not be a primary motivator. Under my current (weakly held) model, an AI will only start chiseled computations about reward after it has chiseled other kinds of computations (e.g. putting away trash). More on this in later essays.
4. But what if the AI bops the reward button early in training, while exploring? Then credit assignment would make the AI more likely to hit the button again.
1. Then keep the button away from the AI until it can model the effects of hitting the cognition-updater button.[7]
2. For the reasons given in the “siren” section, a sufficiently reflective AI probably won’t seek the reward button on its own.
5. AIXI—
1. will always kill you and then wirehead forever, unless you gave it something like a constant reward function.
2. And, IMO, this fact is not practically relevant to alignment. AIXI is explicitly a reward-maximizer. As far as I know, AIXI(-tl) is not the limiting form of any kind of real-world intelligence trained via reinforcement learning.
6. Does the choice of RL algorithm matter?
1. For point 1 (reward is not the trained agent's optimization target), it might matter.
1. I started off analyzing model-free actor-based approaches, but have also considered a few model-based setups. I think the key lessons apply to the general case, but I think the setup will substantially affect which values tend to be grown.
1. If the agent's curriculum is broad, then reward-based cognition may get reinforced from a confluence of tasks (solve mazes, write sonnets), while each task-specific cognitive structure is only narrowly contextually reinforced. That said, this is also selecting equally hard for agents which do the rewarded activities, and reward-motivation is only one possible value which produces those decisions.
2. Pretraining a language model and then slotting that into an RL setup also changes the initial computations in a way which I have not yet tried to analyze.
2. It’s possible there’s some kind of RL algorithm which does train agents which limit to reward optimization (and, of course, thereby “solves” inner alignment in its literal form of “find a policy which optimizes the outer objective signal”).
2. For point 2 (reward provides local updates to the agent's cognition via credit assignment; reward is not best understood as specifying our preferences), the choice of RL algorithm should not matter, as long as it uses reward to compute local updates.
1. A similar lesson applies to the updates provided by loss signals. A loss signal provides updates which deform the agent's cognition into a new shape.
7. TurnTrout, you've been talking about an AI's learning process using English, but ML gradients may not neatly be expressible in our concepts. How do we know that it's appropriate to speculate in English?
1. I am not certain that my model is legit, but it sure seems more legit than (my perception of) how people usually think about RL (i.e. in terms of reward maximization, and reward-as-optimization-target instead of as feedback signal which builds cognitive structures).
2. I only have access to my own concepts and words, so I am provisionally reasoning ahead anyways, while keeping in mind the potential treacheries of anglicizing imaginary gradient updates (e.g. "be more likely to eat pizza in similar situations").

# Dropping the old hypothesis

At this point, I don't see a strong reason to focus on the “reward optimizer” hypothesis. The idea that AIs will get really smart and primarily optimize some reward signal… I don’t know of any tight mechanistic stories for that. I’d love to hear some, if there are any.

As far as I’m aware, the strongest evidence left for agents intrinsically valuing cognition-updating is that some humans do strongly (but not uniquely) value cognition-updating,[8] and many humans seem to value it weakly, and humans are probably RL agents in the appropriate ways. So we definitely can’t rule out agents which strongly (and not just weakly) value the cognition-updater. But it’s also not the overdetermined default outcome. More on that in future essays.

It’s true that reward can be an agent’s optimization target, but what reward actually does is reinforce computations which lead to it. A particular alignment proposal might argue that a reward function will reinforce the agent into a shape such that it intrinsically values reinforcement, and that the cognition-updater goal is also a human-aligned optimization target. But this is still just one particular approach to using the cognition-updater to produce desirable cognition within an agent. Even in that proposal, the primary mechanistic function of reward is reinforcement, not optimization-target.

## Implications

1. Any reasoning derived from the reward-optimization premise is now suspect until otherwise supported.
2. Wireheading was never a high-probability problem for RL-trained agents, absent a specific story for why cognition-updater-acquiring thoughts would be chiseled into primary decision factors.
3. Stop worrying about finding “outer objectives” which are safe to maximize.[9] I think that you’re not going to get an outer-objective-maximizer (i.e. an agent which maximizes the explicitly specified reward function).
1. Instead, focus on building good cognition within the agent.
2. In my ontology, there's only one question: How do we grow good cognition inside of the trained agent?
4. Mechanistically model RL agents as executing behaviors downstream of past reinforcement (e.g. putting trash away), in addition to thinking about policies which are selected for having high reward on the training distribution (e.g. hitting the button).
1. The latter form of reasoning skips past the mechanistic substance of reinforcement learning: The chiseling of computations responsible for the acquisition of the cognition-updater. I still think it's useful to consider selection, but mostly in order to generate failure modes whose mechanistic plausibility can be evaluated.
2. In my view, reward's proper role isn't to encode an objective, but a reinforcement schedule, such that the right kinds of computations get reinforced within the AI's mind.

Edit 11/15/22: The original version of this post talked about how reward reinforces antecedent computations in policy gradient approaches. This is not true in general. I edited the post to instead talk about how reward is used to upweight certain kinds of actions in certain kinds of situations, and therefore reward chisels cognitive grooves into agents.

# Appendix: The field of RL thinks reward=optimization target

Let’s take a little stroll through Google Scholar’s top results for “reinforcement learning”, emphasis added:

The agent's job is to find a policy… that maximizes some long-run measure of reinforcement. ~ Reinforcement learning: A survey

In instrumental conditioning, animals learn to choose actions to obtain rewards and avoid punishments, or, more generally to achieve goals. Various goals are possible, such as optimizing the average rate of acquisition of net rewards (i.e. rewards minus punishments), or some proxy for this such as the expected sum of future rewards. ~ Reinforcement learning: The Good, The Bad and The Ugly

Steve Byrnes did, in fact, briefly point out part of the “reward is the optimization target” mistake:

I note that even experts sometimes sloppily talk as if RL agents make plans towards the goal of maximizing future reward… — Model-based RL, Desires, Brains, Wireheading

I don't think it's just sloppy talk, I think it's incorrect belief in many cases. I mean, I did my PhD on RL theory, and I still believed it. Many authorities and textbooks confidently claim—presenting little to no evidence—that reward is an optimization target (i.e. the quantity which the policy is in fact trying to optimize, or the quantity to be optimized by the policy). Check what the math actually says.

1. ^

Including the authors of the quoted introductory text, Reinforcement learning: An introduction. I have, however, met several alignment researchers who already internalized that reward is not the optimization target, perhaps not in so many words.

2. ^

Utility ≠ Reward points out that an RL-trained agent is optimized by the original reward, but not necessarily optimizing for the original reward. This essay goes further in several ways, including when it argues that reward and utility have different type signatures—that reward shouldn’t be viewed as encoding a goal at all, but rather a reinforcement schedule. Not only do I expect trained agents to not maximize the original “outer” reward signal, I think they probably won’t try to strongly optimize any reward signal.

3. ^

Reward shaping seems like the most prominent counterexample to the “reward represents terminal preferences over state-action pairs” line of thinking.

4. ^

But also, you were still probably thinking about reality as you interacted with it (“since I’m in front of the shop where I want to buy food, go inside”), and credit assignment will still locate some of those thoughts as relevant, and so you wouldn’t purely reinforce the reward-focused computations.

5. ^

"Reward reinforces existing thoughts" is ultimately a claim about how updates depend on the existing weights of the network. I think that it's easier to update cognition along the lines of existing abstractions and lines of reasoning. If you're already running away from wolves, then if you see a bear and become afraid, you can be updated to run away from large furry animals. This would leverage your existing concepts.

The local mapping from gradient directions to behaviors is given by the neural tangent kernel, and the learnability of different behaviors is given by the NTK’s eigenspectrum, which seems to adapt to the task at hand, making the network quicker to learn along behavioral dimensions similar to those it has already acquired.

6. ^

Quintin Pope remarks: “The AI would probably want to establish control over the button, if only to ensure its values aren't updated in a way it wouldn't endorse. Though that's an example of convergent powerseeking, not reward seeking.”

7. ^

For mechanistically similar reasons, keep cocaine out of the crib until your children can model the consequences of addiction.

8. ^

I am presently ignorant of the relationship between pleasure and reward prediction error in the brain. I do not think they are the same.

However, I think people are usually weakly hedonically / experientially motivated. Consider a person about to eat pizza. If you give them the choice between "pizza but no pleasure from eating it" and "pleasure but no pizza", I think most people would choose the latter (unless they were really hungry and needed the calories). If people just navigated to futures where they had eaten pizza, that would not be true.

9. ^

From correspondence with another researcher: There may yet be an interesting alignment-related puzzle to "Find an optimization process whose maxima are friendly", but I personally don't share the intuition yet.



At some level I agree with this post—policies learned by RL are probably not purely described as optimizing anything. I also agree that an alignment strategy might try to exploit the suboptimality of gradient descent, and indeed this is one of the major points of discussion amongst people working on alignment in practice at ML labs.

However, I'm confused or skeptical about the particular deviations you are discussing and I suspect I disagree with or misunderstand this post.

As you suggest, in deep RL we typically use gradient descent to find policies that achieve a lot of reward (typically updating the policy based on an estimator for the gradient of the reward).

If you have a system with a sophisticated understanding of the world, then cognitive policies like "select actions that I expect would lead to reward" will tend to outperform policies like "try to complete the task," and so I usually expect them to be selected by gradient descent over time. (Or we could be more precise and think about little fragments of policies, but I don't think it changes anything I say here.)

It seems to me like you are saying that you think gradient descent will fail to find such policies because...

Thanks for the detailed comment. Overall, it seems to me like my points stand, although I think a few of them are somewhat different than you seem to have interpreted.

policies learned by RL are probably not purely described as optimizing anything. I also agree that an alignment strategy might try to exploit the suboptimality of gradient descent

I think I believe the first claim, which I understand to mean "early-/mid-training AGI policies consist of contextually activated heuristics of varying sophistication, instead of e.g. a globally activated line of reasoning about a crisp inner objective." But that wasn't actually a point I was trying to make in this post.

in deep RL we typically use gradient descent to find policies that achieve a lot of reward (typically updating the policy based on an estimator for the gradient of the reward).

Depends. This describes vanilla PG but not DQN. I think there are lots of complications which throw serious wrenches into the "and then SGD hits a 'global reward optimum'" picture. I'm going to have a post explaining this in more detail, but I will say some abstract words right now in case it shakes something loose / clarifies my thoughts.

Critic-ba...

jacob_cannell (14d):
If this agent is smart/reflective enough to model/predict the future effects of its RL updates, then you already are assuming a model-based agent which will then predict higher future reward by going for the blueberry. You seem to be assuming the bizarre combination of model-based predictive capability for future reward gradient updates, but not for future reward itself. Any sensible model-based agent would go for the blueberry absent some other considerations.

This is not just pure speculation, in the sense that you can run EfficientZero in scenarios like this, and I bet it goes for the blueberry.

Your mental model seems to assume pure model-free RL, trained to the point that it gains some specific model-based predictive planning capabilities, without using those same capabilities to get greater reward. Humans often intentionally avoid some high-reward 'blueberry' analogs like drugs, using something like the process you describe here, but hedonic reward is only one component of the human utility function, and our long-term planning instead optimizes more for empowerment, which is usually in conflict with short-term hedonic reward.
Alex Turner (8d):
Long before they knew about reward circuitry, humans noticed that e.g. vices are behavioral attractors, with vice → more propensity to do the vice next time → vice, in a vicious cycle. They noticed that far before they noticed that they had reward circuitry causing the internal reinforcement events. If you're predicting future observations via e.g. SSL, I think it becomes important to (at least crudely) model effects of value drift during training.

I'm not saying the AI won't care about reward at all. I think it'll be a secondary value, but that was sideways of my point here. In this quote, I was arguing that the AI would be quite able to avoid a "vice" (the blueberry) by modeling the value drift on some level. I was showing a sufficient condition for the "global maximum" picture getting a wrench thrown in it. When, quantitatively, should that happen, where the agent steps around the planning process? Not sure.
Steve Byrnes (4mo):
If you write code for a model-based RL agent, there might be a model that’s updated by self-supervised learning, and actor-critic parts that involve TD learning, and there’s stuff in the code that calculates the reward function, and other odds and ends like initializing the neural architecture and setting the hyperparameters and shuttling information around between different memory locations and so on.

• On the one hand, “there is a lot of stuff going on” in this codebase.
• On the other hand, I would say that this codebase is for “an RL agent”.

You use the word “pure” (“Humans do not appear to be purely RL agents…”), but I don’t know what that means. If a model-based RL agent involves self-supervised learning within the model, is it “impure”?? :-P

The thing I describe above is very roughly how I propose the human brain works—see Posts #2–#7 here [https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8]. Yes it’s absolutely a “conjecture”—for example, I’m quite sure Steven Pinker would strongly object to it. Whether it’s “surprising a priori” or not goes back to whether that proposal is “entirely described by RL” or not. I guess you would probably say “no, that proposal is not entirely described by RL”.

For example, I believe there is circuitry in the brainstem that regulates your heart rate, and I believe that this circuitry is specified in detail by the genome, not learned within a lifetime by a learning algorithm. (Otherwise you would die.) This kind of thing is absolutely part of my proposal [https://www.alignmentforum.org/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and], but probably not what you would describe as “pure RL”.
Paul Christiano (4mo):
It sounded like OP was saying: using gradient descent to select a policy that gets a high reward probably won't produce a policy that tries to maximize reward. After all, look at humans, who aren't just trying to get a high reward.

And I am saying: this analogy seems like it's pretty weak evidence, because human brains seem to have a lot of things going on other than "search for a policy that gets high reward," and those other things seem like they have a massive impact on what goals I end up pursuing.

ETA: as a simple example, it seems like the details of humans' desire for their children's success, or their fear of death, don't seem to match well with the theory that all human desires come from RL on intrinsic reward. I guess you probably think they do? If you've already written about that somewhere it might be interesting to see.

Right now the theory "human preferences are entirely produced by doing RL on an intrinsic reward function" seems to me to make a lot of bad predictions and not really have any evidence supporting it (in contrast with a more limited theory about RL-amongst-other-things, which seems more solid but not sufficient for the inference you are trying to make in this post).

I didn’t write the OP. If I were writing a post like this, I would (1) frame it as a discussion of a more specific class of model-based RL algorithms (a class that includes human within-lifetime learning), (2) soften the claim from “the agent won’t try to maximize reward” to “the agent won’t necessarily try to maximize reward”.

I do think the human (within-lifetime) reward function has an outsized impact on what goals humans ends up pursuing, although I acknowledge that it’s not literally the only thing that matters.

(By the way, I’m not sure why your original comment brought up inclusive genetic fitness at all; aren’t we talking about within-lifetime RL? The within-lifetime reward function is some complicated thing involving hunger and sex and friendship etc., not inclusive genetic fitness, right?)

I think incomplete exploration is very important in this context and I don’t quite follow why you de-emphasize that in your first comment. In the context of within-lifetime learning, perfect exploration entails that you try dropping an anvil on your head, and then you die. So we don’t expect perfect exploration; instead we’d presumably design the agent such that explores if and only if it ...

Paul Christiano (4mo):
This was mentioned in OP ("The argument would prove too much [https://slatestarcodex.com/2013/04/13/proving-too-much/]. Evolution selected for inclusive genetic fitness, and it did not get IGF optimizers [https://www.lesswrong.com/posts/XPErvb8m9FapXCjhA/adaptation-executers-not-fitness-maximizers]."). It also appears to be a much stronger argument for the OP's position, and so seemed worth responding to.

It seems to me that incomplete exploration doesn't plausibly cause you to learn "task completion" instead of "reward" unless the reward function is perfectly aligned with task completion in practice. That's an extremely strong condition, and if the entire OP is conditioned on that assumption then I would expect it to have been mentioned.

If the OP is not intending to talk about the kind of ML algorithm deployed in practice, then it seems like a lot of the implications for AI safety would need to be revisited. (For example, if it doesn't apply to either policy gradients or the kind of model-based control that has been used in practice, then that would be a huge caveat.)

It seems to me that incomplete exploration doesn't plausibly cause you to learn "task completion" instead of "reward" unless the reward function is perfectly aligned with task completion in practice. That's an extremely strong condition, and if the entire OP is conditioned on that assumption then I would expect it to have been mentioned.

Let’s say, in the first few actually-encountered examples, reward is in fact strongly correlated with task completion. Reward is also of course 100% correlated with reward itself.

Then (at least under many plausible RL algorithms), the agent-in-training, having encountered those first few examples, might wind up wanting / liking the idea of task completion, OR wanting / liking the idea of reward, OR wanting / liking both of those things at once (perhaps to different extents). (I think it’s generally complicated and a bit fraught to predict which of these three possibilities would happen.)

But let’s consider the case where the RL agent-in-training winds up mostly or entirely wanting / liking the idea of task completion. And suppose further that the agent-in-training is by now pretty smart and self-aware and in control of its situation. Then the agent m...

Richard Ngo (4mo):
+1 on this comment, I feel pretty confused about the excerpt from Paul that Steve quoted above. And even without the agent deliberately deciding where to avoid exploring, incomplete exploration may lead to agents which learn non-reward goals before convergence - so if Paul's statement is intended to refer to optimal policies, I'd be curious why he thinks that's the most important case to focus on.
Lukas Finnveden (4mo):
This seems plausible if the environment is a mix of (i) situations where task completion correlates (almost) perfectly with reward, and (ii) situations where reward is very high while task completion is very low. Such as if we found a perfect outer alignment objective, and the only situation in which reward could deviate from the overseer's preferences would be if the AI entirely seized control of the reward. But it seems less plausible if there are always (small) deviations between reward and any reasonable optimization target that isn't reward (or close enough so as to carry all relevant arguments). E.g. if an AI is trained on RL from human feedback, and it can almost always do slightly better by reasoning about which action will cause the human to give it the highest reward.
Steve Byrnes (4mo):
Sure, other things equal. But other things aren’t necessarily equal. For example, regularization could stack the deck in favor of one policy over another, even if the latter has been systematically producing slightly higher reward. There are lots of things like that; the details depend on the exact RL algorithm. In the context of brains, I have discussion and examples in §9.3.3 here [https://www.lesswrong.com/posts/vpdJz4k5BgGzuGo7A/intro-to-brain-like-agi-safety-9-takeaways-from-neuro-2-2-on#1__Credit_assignment_can_have__priors__that_bias_what_type_of_concept_gets_what_type_of_credit_] .
Not Relevant (4mo):
I'm trying to parse out what you're saying here, to understand whether I agree that human behavior doesn't seem to be almost perfectly explained as the result of an RL agent (with an interesting internal architecture) maximizing an inner learned reward.

On my model, the outer objective of inclusive genetic fitness created human mesaoptimizers with inner objectives like "desire your children's success" or "fear death", which are decent approximations of IGF (given that directly maximizing IGF itself is intractable as it's a Nash equilibrium of an unknown game). It seems to me that human behavior policies are actually well-approximated as those of RL agents maximizing [our children's success] + [not dying] + [retaining high status within the tribe] + [being exposed to novelty to improve our predictive abilities] + ... .

Humans do sometimes construct modified internal versions of these rewards based on pre-existing learned representations (e.g. desiring your adopted children's success) - is that what you're pointing at? Generally interested to hear more of the "bad predictions" this model makes.
Alex Turner (4mo):
What do you mean by "inner learned reward"? This post points out that even if humans were "pure RL agents", we shouldn't expect them to maximize their own reward. Maybe you mean "inner mesa objectives"?
Alex Turner (14d):
I've updated the post to clarify. I think focus on "antecedent computation reinforcement" (while often probably ~accurate) was imprecise/wrong for reasons like this. I now instead emphasize that the math of policy gradient approaches means that reward chisels cognitive circuits into networks.
1. Stop worrying about finding “outer objectives” which are safe to maximize.[9] I think that you’re not going to get an outer-objective-maximizer (i.e. an agent which maximizes the explicitly specified reward function).
    1. Instead, focus on building good cognition within the agent.
2. In my ontology, there's only an inner alignment problem: How do we grow good cognition inside of the trained agent?

This feels very strongly reminiscent of an update I made a while back, and which I tried to convey in this section of AGI safety from first principles. But I think you've stated it far too strongly; and I think fewer other people were making this mistake than you expect (including people in the standard field of RL), for reasons that Paul laid out above. When you say things like "Any reasoning derived from the reward-optimization premise is now suspect until otherwise supported", this assumes that the people doing this reasoning were using the premise in the mistaken way that you (and some other people, including past Richard) were. Before drawing these conclusions wholesale, I'd suggest trying to identify ways in which the things other people are saying are consistent with th...

When you say things like "Any reasoning derived from the reward-optimization premise is now suspect until otherwise supported", this assumes that the people doing this reasoning were using the premise in the mistaken way

I have considered the hypothesis that most alignment researchers do understand this post already, while also somehow reliably emitting statements which, to me, indicate that they do not understand it. I deem this hypothesis unlikely. I have also considered that I may be misunderstanding them, and think in some small fraction of instances I might be.

I do in fact think that few people actually already deeply internalized the points I'm making in this post, even including a few people who say they have or that this post is obvious. Therefore, I concluded that lots of alignment thinking is suspect until re-analyzed.

I did preface "Here are some major updates which I made:". The post is ambiguous on whether/why I believe others have been mistaken, though. I felt that if I just blurted out my true beliefs about how people had been reasoning incorrectly, people would get defensive. I did in fact consider combing through Ajeya's post for disagreements, but I thought it...

It seems to me that the basic conceptual point made in this post is entirely contained in our Risks from Learned Optimization paper. I might just be missing a point. You've certainly phrased things differently and made some specific points that we didn't, but am I just misunderstanding something if I think the basic conceptual claims of this post (which seems to be presented as new) are implied by RFLO? If not, could you state briefly what is different?

(Note I am still surprised sometimes that people still think certain wireheading scenarios make sense despite them having read RFLO, so it's plausible to me that we really didn't communicate everything that's in my head about this).

Alex Turner (4mo):
"Wireheading is improbable" is only half of the point of the essay. The other main point is "reward functions are not the same type of object as utility functions."

I haven't reread all of RFLO recently, but on a skim—RFLO consistently talks about reward functions as "objectives". Which is reasonable parlance, given that everyone else uses it, but I don't find that terminology very useful for thinking about what kinds of inner cognition will be developed in the network. Reward functions + environmental data provide a series of cognitive updates to the network, in the form of reinforcement schedules. The reward function is not necessarily an 'objective' at all.

(You might have privately known about this distinction. Fine by me! But I can't back it out from a skim of RFLO, even already knowing the insight and looking for it.)
Chris van Merwijk (4mo):
Maybe you have made a gestalt-switch I haven't made yet, or maybe yours is a better way to communicate the same thing, but: the way I think of it is that the reward function is just a function from states to numbers, and the way the information contained in the reward function affects the model parameters is via reinforcement of pre-existing computations.

Is there a difference between saying:

* A reward function is an objective function, but the only way that it affects behaviour is via reinforcement of pre-existing computations in the model, and it doesn't actually encode in any way the "goal" of the model itself.
* A reward function is not an objective function, and the only way that it affects behaviour is via reinforcement of pre-existing computations in the model, and it doesn't actually encode in any way the "goal" of the model itself.

It seems to me that once you acknowledge the point about reinforcement, the additional statement that reward is not an objective doesn't actually imply anything further about the mechanistic properties of deep reinforcement learners? It is just a way to put a high-level conceptual story on top of it, and in this sense it seems to me that this point is already known (and in particular, contained within RFLO), even though we talked of the base objective still as an "objective".

However, it might be that while RFLO pointed out the same mechanistic understanding that you have in mind, calling it an objective tends in practice to not fully communicate that mechanistic understanding. Or it might be that I am really not yet understanding that there is an actual difference in mechanistic understanding, or that my intuitions are still being misled by the wrong high-level concept even if I have the lower-level mechanistic understanding right.

(On the other hand, one reason to still call it an objective is because we really can think of the selection process, i.e. evolution/the learning algorithm of an RL agent, ...)
Alex Turner (3mo):
Where did RFLO point it out? RFLO talks about a mesa objective being different from the "base objective" (even though reward is not a kind of objective). IIRC on my skim, most of the arguments were non-mechanistic reasoning about what gets selected for. (Which isn't a knockdown complaint, but those arguments are also not about the mechanism.) Also see my comment to Evan [https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target?commentId=zcv9YsJ8EWxdn9hM8].

Like, from my POV, people are reliably reasoning about what RL "selects for" via "lots of optimization pressure" on "high reward by the formal metric", but who's reasoning about what kinds of antecedent computations get reinforced when credit assignment activates? Can you give me examples of anyone else spelling this out in a straightforward fashion?

Yeah, I think it just doesn't communicate the mechanistic understanding (not even imperfectly, in most cases, I imagine). From my current viewpoint, I just wouldn't call reward an objective at all, except in the context of learned antecedent-computation-reinforcement terminal values. It's like if I said "My cake is red" when the cake is blue, I guess? IMO it's just not how to communicate the concept. Why is this reasonable?
Evan Hubinger (4mo):
Reward functions often are structured as objectives, which is why we talk about them that way. In most situations, if you had access to e.g. AIXI, you could directly build a “reward maximizer.” I agree that this is not always the case, though, as in the discussion here [https://www.lesswrong.com/posts/rAhJrdxjsXcngn3ip/an-observation-about-hubinger-et-al-s-framework-for-learned] . That being said, I think it is often enough the case that it made sense to focus on that particular case in RFLO.
Alex Turner (3mo):
What does this mean? By "structured as objectives", do you mean something like "people try to express what they want with a reward function, by conferring more reward to more desirable states"? (I'm going to assume so for the rest of the comment, LMK if this is wrong.) I agree that other people (especially my past self) think about reward functions this way. I think they're generally wrong to do so, and it's misleading as to the real nature of the alignment problem. I agree with that post, thanks for linking. As far as I can tell, AIXI and other hardcoded planning agents are the known exceptions to the arguments in this post. We will not get AGI via these approaches. When else is it the case? I therefore still feel confused why you think it made sense. While I definitely appreciate the work you all did with RFLO, the framing of reward as a "base objective" seems like a misstep that set discourse in a weird direction which I'm trying to push back on (from my POV!). I think that the "base objective" is better described as a "cognitive-update-generator." (This is not me trying to educate you on this specific point, but rather argue that it really matters how we frame the problem in our day-to-day reasoning.)

I do in fact think that few people actually already deeply internalized the points I'm making in this post, even including a few people who say they have or that this post is obvious. Therefore, I concluded that lots of alignment thinking is suspect until re-analyzed.

“Risks from Learned Optimization in Advanced Machine Learning Systems,” which we published three years ago and started writing four years ago, is extremely explicit that we don't know how to get an agent that is actually optimizing for a specified reward function. The alignment research community has been heavily engaging with this idea since then. Though I agree that many alignment researchers used to be making this mistake, I think it's extremely clear that by this point most serious alignment researchers understand the distinction.

I have relatively little idea how to "improve" a reward function so that it improves the inner cognition chiseled into the policy, because I don't know the mapping from outer reward schedules to inner cognition within the agent. Does an "amplified" reward signal produce better cognition in the inner agent? Possibly? Even if that were true, how would I know it?

This is precisely the point I make in “How do we become confident in the safety of a machine learning system?”, btw.

Alex Turner (4mo):
That isn't the main point I had in mind. See my comment to Chris here [https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target?commentId=eFb9sdzymu3oPm8PE] . EDIT: Yup, the training story regime sounds good by my lights. Am I intended to conclude something further from this remark of yours, though?
Evan Hubinger (4mo):
Left a comment. Nope, just wanted to draw your attention to another instance of alignment researchers already understanding this point.

Also, I want to be clear that I like this post a lot and I'm glad you wrote it—I think it's good to explain this sort of thing more, especially in different ways that are likely to click for different people. I just think your specific claim that most alignment researchers don't understand this already is false.
Alex Turner (4mo):
I have privately corresponded with a senior researcher who, when asked what they thought would result from a specific training scenario, made an explicit (and acknowledged) mistake along the lines of this post. Another respected researcher seemingly slipped on the same point, some time after already discussing this post with them. I am still not sure whether I'm on the same page with Paul, as well (I have general trouble understanding what he believes, though). And Rohin also has this experience of explaining the points in OP on a regular basis. All this among many other private communication events I've experienced. (Out of everyone I would expect to already have understood this post, I think you and Rohin would be at the top of the list.)

So basically, the above screens off "Who said what in past posts?", because whoever said whatever, it's still producing my weekly experiences of explaining the points in this post. I still haven't seen the antecedent-computation-reinforcement (ACR) emphasis thoroughly explained elsewhere, although I agree that some important bits (like training stories) are not novel to this post. (The point isn't so much "What do I get credit for?" as much as "I am concerned about this situation.")

Here's more speculation. I think alignment theorists mostly reason via selection-level arguments. While they might answer correctly on "Reward is? optimization target" when pressed, and implicitly use ACR to reason about what's going on in their ML training runs, I'd guess they probably don't engage in mechanistic ACR reasoning in their day-to-day theorizing. (Again, I can only speculate, because I am not a mind-reader, but I do still have beliefs on the matter.)

(Just wanted to echo that I agree with TurnTrout that I find myself explaining the point that reward may not be the optimization target a lot, and I think I disagree somewhat with Ajeya's recent post for similar reasons. I don't think that the people I'm explaining it to literally don't understand the point at all; I think it mostly hasn't propagated into some parts of their other reasoning about alignment. I'm less on board with the "it's incorrect to call reward a base objective" point but I think it's pretty plausible that once I actually understand what TurnTrout is saying there I'll agree with it.)

Richard Ngo (4mo):
You don't need to know the full mapping in order to suspect that, when we reward agents for doing undesirable things, we tend to get more undesirable cognition. For example, if we reward agents for lying to us, then we'll tend to get less honest agents. We can construct examples where this isn't true but it seems like a pretty reasonable working hypothesis. It's possible that discarding this working hypothesis will lead to better research but I don't think your arguments manage to establish that, they only establish that we might in theory find ourselves in a situation where it's reasonable to discard this working hypothesis.
Alex Turner (4mo):
This specific point is why I said "relatively" little idea, and not zero idea. You have defended the common-sense version of "improving" a reward function (which I agree with, don't reward obvious bad things), but I perceive you to have originally claimed a much more aggressive and speculative claim, which is something like "'amplified' reward signals are improvements over non-'amplified' reward signals" (which might well be true, but how would we know?).
Richard Ngo (4mo):
Amplification can just be used as a method for making more and better common-sense improvements, though. You could also do all sorts of other stuff with it, but standard examples (like "catch agents when they lie to us") seem very much like common-sense improvements.
Alex Turner (4mo):
I think that few people understand these points already. If RL professionals did understand this point, there would be pushback on Reward is Enough from RL professionals pointing out that reward is not the optimization target. After 15 minutes of searching, I found no [https://robotic.substack.com/p/reward-is-not-enough] one [https://www.reddit.com/r/MachineLearning/comments/pwju6t/what_are_your_thoughts_on_the_reward_is_enough/] making [https://arxiv.org/abs/2112.15422] the [https://venturebeat.com/2021/07/10/building-artificial-intelligence-reward-is-not-enough/] counterpoint [https://www.lesswrong.com/posts/frApEhpyKQAcFvbXJ/reward-is-not-enough]. I mean, that thesis is just so wrong, and it's by famous researchers, and no one points out the obvious error. RL researchers don't get it.[1] It's not complicated to me. (Do you know of any instance at all of someone else (outside of alignment) making the points in this post?) I'm currently not convinced by / not properly understanding Paul's counterpoints.

[1] Although I flag that we might be considering different kinds of "getting it", where by my lights, "getting it" means "not consistently emitting statements which contravene the points of this post", while you might consider "if pressed on the issue, will admit reward is not the optimization target" to be "getting it."

The way I attempt to avoid confusion is to distinguish between the RL algorithm's optimization target and the RL policy's optimization target, and then avoid talking about the "RL agent's" optimization target, since that's ambiguous between the two meanings. I dislike the title of this post because it implies that there's only one optimization target, which exacerbates this ambiguity. I predict that if you switch to using this terminology, and then start asking a bunch of RL researchers questions, they'll tend to give broadly sensible answers (conditional on taking on the idea of "RL policy's optimization target" as a reasonable concept).

Authors' summary of the "reward is enough" paper:

In this paper we hypothesise that the objective of maximising reward is enough to drive behaviour that exhibits most if not all attributes of intelligence that are studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language and generalisation. This is in contrast to the view that specialised problem formulations are needed for each attribute of intelligence, based on other signals or objectives. The reward-is-enough hypothesis suggests that

...
Alex Turner (4mo):
Actually, while I did recheck the Reward is Enough paper, I think I did misunderstand part of it in a way which wasn't obvious to me while I reread, which makes the paper much less egregious. I am updating that you are correct and that I am not spending enough effort on favorably interpreting existing discourse. I still disagree with parts of that essay and still think Sutton & co don't understand the key points. I still think you underestimate how much people don't get these points. I am provisionally retracting the comment you replied to while I compose a more thorough response (may be a little while).

Agreed on both counts for your first sentence. The "and" in "reward does not magically spawn thoughts about reward, and reinforce those reward-focused thoughts" is doing important work; "magically" is meant to apply to the conjunction of the clauses. I added the second clause in order to pre-empt this objection. Maybe I should have added "reinforce those reward-focused thoughts into terminal values." Would that have been clearer? (I also have gone ahead and replaced "magically" with "automatically.")
Richard Ngo (4mo):
Hmm, perhaps clearer to say "reward does not automatically reinforce reward-focused thoughts into terminal values", given that we both agree that agents will have thoughts about reward either way. But if you agree that reward gets reinforced as an instrumental value, then I think your claims here probably need to actually describe the distinction between terminal and instrumental values. And this feels pretty fuzzy - e.g. in humans, I think the distinction is actually not that clear-cut. In other words, if everyone agrees that reward likely becomes a strong instrumental value, then this seems like a prima facie reason to think that it's also plausible as a terminal value, unless you think the processes which give rise to terminal values are very different from the processes which give rise to instrumental values.

I like this post, and basically agree, but it comes across somewhat more broad and confident than I am, at least in certain places.

I’m currently thinking about RL along the lines of Nostalgebraist here:

“Reinforcement learning” (RL) is not a technique.  It’s a problem statement, i.e. a way of framing a task as an optimization problem, so you can hand it over to a mechanical optimizer.

What’s more, even calling it a problem statement is misleading, because it’s (almost) the most general problem statement possible for any arbitrary task. — Nostalgebraist 2020

If that’s right, then I am very reluctant to say anything whatsoever about “RL agents in general”. They’re too diverse.

Much of the post, especially the early part, reads (to me) like confident claims about all possible RL agents. For example, the excerpt “…reward is the antecedent-computation-reinforcer. Reward reinforces those computations which produced it.” sounds like a confident claim about all RL agents, maybe even by definition of “RL”. (If so, I think I disagree.)

But other parts of the post aren’t like that—for example, the “Does the choice of RL algorithm matter?” part seems more reasonable and hedged, and l...

Here is an example story I wrote (that has been minorly edited by TurnTrout) about how an agent trained by RL could plausibly not optimize reward, forsaking actions that it knew during training would get it high reward. I found it useful as a way to understand his views, and he has signed off on it. Just to be clear, this is not his proposal for why everything is fine, nor is it necessarily an accurate representation of my views, just a plausible-to-TurnTrout story for how agents won't end up wanting to game human approval:

• Agent gets trained on a reward function that's 1 if it gets human approval, 0 otherwise (or something).
• During an intermediate amount of training, the agent's honest and nice computations get reinforced by reward events.
• That means it develops a motivation to act honestly and behave nicely etc., and no similarly strong motivation to gain human approval at all costs.
• The agent then becomes able to tell that if it tricked the human, that would be reinforced.
• It then decides to not get close in action-space to tricking the human, so that it doesn't get reinforced into wanting to gain human approval by tricking the human.
• This works because:
• it's enough action hop
...
DanielFilan (3mo):
One reason that I doubt this story is that "try new things in case they're good" is itself the sort of thing that should be reinforced during training on a complicated environment, and would push towards some sort of obfuscated manipulation of humans (similar to how if you read about enough social hacks you'll probably be a bit scammy even tho you like people and don't want to scam them). In general, this motivation will push RL agents towards reward-optimal behaviour on the distribution of states they know how to reach and handle.
Alex Turner (3mo):
IDK if this is causally true or just evidentially true. I also further don't know why it would be mechanistically relevant to the heuristic you posit. Rather, I think that agents might end up with this heuristic at first, but over time it would get refined into "try new things which [among other criteria] aren't obviously going to cause bad value drift away from current values." One reason I expect the refinement in humans is that noticing your values drifted in a bad way is probably a negative reinforcement event, and so enough exploration-caused negative events might cause credit assignment to refine the heuristic into the shape I listed. This would convergently influence agents to not be reward-optimal, even on known-reachable states. (I'm not super confident in this particular story porting over to AI, but think it's a plausible outcome.)

If that kind of heuristic is a major underpinning of what we call "curiosity" in humans, then that would explain why I am, in general, not curious about exploring a life of crime, but am curious about math and art and other activities which won't cause bad value drift away from my current values [https://www.lesswrong.com/posts/jFvFreCeejRKaZv4v/understanding-and-avoiding-value-drift].

At this point, there isn’t a strong reason to elevate this “inner reward optimizer” hypothesis to our attention. The idea that AIs will get really smart and primarily optimize some reward signal… I don’t know of any good mechanistic stories for that. I’d love to hear some, if there are any.

Here's a story:

1. Suppose we provide the reward as an explicit input to the agent (in addition to using it as antecedent-computation-reinforcer)
2. If the agent has developed curiosity, it will think thoughts like "What is this number in my input stream?" and later "Hmm it seems correlated to my behavior in certain ways."
3. If the agent has developed cognitive machinery for doing exploration (in the explore/exploit sense) or philosophy, at some later point it might have thoughts like "What if I explicitly tried to increase this number? Would that be a good idea or bad?"
4. It might still answer "bad", but at this point the outer optimizer might notice (do the algorithmic equivalent of thinking the following), "If I modified this agent slightly by making it answer 'good' instead (or increasing its probability of answering 'good'), then expected future reward will be increased." In other words, there seems a fairly obvious gradient towards becoming a reward-maximizer at this point.

I don't think this is guaranteed to happen, but it seems likely enough to elevate the “inner reward optimizer” hypothesis to our attention, at least.
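Step 1 of the story above can be made concrete. Here is a minimal, hypothetical sketch (a toy gym-style interface of my own; none of these class names come from the thread or any specific library) of a wrapper that appends the previous scalar reward to the agent's observation, so the policy can in principle "notice" the number in its input stream:

```python
import numpy as np

class RewardInObsWrapper:
    """Append the most recent scalar reward to the observation,
    making the reward signal an explicit input to the agent."""

    def __init__(self, env):
        self.env = env

    def reset(self):
        obs = self.env.reset()
        return np.append(obs, 0.0)  # no reward received yet

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return np.append(obs, reward), reward, done, info


class ConstantEnv:
    """Toy stand-in environment: 3-dim observations, fixed reward of 1.0."""

    def reset(self):
        return np.zeros(3)

    def step(self, action):
        return np.zeros(3), 1.0, False, {}


env = RewardInObsWrapper(ConstantEnv())
obs = env.reset()
obs, reward, done, info = env.step(0)
print(obs)  # last entry is the reward the agent just received
```

Whether the downstream gradient dynamics then favor "answer good" is exactly what the rest of this subthread disputes.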

As a more general/tangential comment, I'm a bit confused about how "elevate hypothesis to our attention" is supposed to work. I mean it took some conscious effort to come up with a possible mechanistic story about how "inner reward optimizer" might arise, so how were we supposed to come up with such a story without paying attention to "inner reward optimizer" in the first place?

Perhaps it's not that we should literally pay no attention to "inner reward optimizer" until we have a good mechanistic story for it, but more like we are (or were) paying too much attention to it, given that we don't (didn't) yet have a good mechanistic story? (But if so, how to decide how much is too much?)

Alex Turner (4mo):
I think this tangential comment is good; strong-upvote. I was hyperbolic in implying "don't even raise the reward-optimizer hypothesis to your attention", and will edit the post accordingly.
Quintin Pope (4mo):
This is where I disagree with your mechanistic story. The RL algorithm is not that clever. If the agent doesn’t explore in the direction of answering “good”, then there’s no gradient in that direction. You can propose different types of outer optimizers which are this clever and can do intentional lookahead like this, but e.g. policy gradient isn’t doing that.
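Quintin's "no gradient without exploration" point can be illustrated with a toy example (my own construction, not from the thread). The sketch below trains a tabular softmax policy on a two-armed bandit with vanilla sample-based REINFORCE; because the policy starts almost deterministically committed to the worse arm, the better arm is essentially never sampled, so no gradient estimate ever carries the information that it pays more:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-armed bandit: arm 1 pays 10x more, but the softmax policy
# starts almost deterministically committed to arm 0
# (pi(arm 1) is about 2e-9).
rewards = np.array([1.0, 10.0])
logits = np.array([10.0, -10.0])

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for _ in range(1000):
    pi = softmax(logits)
    a = rng.choice(2, p=pi)  # sample-based: no counterfactual rollouts
    # REINFORCE: grad of log pi(a) w.r.t. logits = onehot(a) - pi
    grad = -pi
    grad[a] += 1.0
    logits = logits + 0.1 * rewards[a] * grad

pi = softmax(logits)
print(pi)  # still locked on arm 0; arm 1's payoff never entered an update
```

This only illustrates the pure sample-based, model-free case; an algorithm with forced exploration or model-based lookahead could behave quite differently.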
Matthew "Vaniver" Graves (4mo):
Wait, I don't think this is true? At least, I'd appreciate it being stepped thru in more detail. In the simplest story, we're imagining an agent whose policy is π_θ and, for simplicity's sake, θ_0 is a scalar that determines "how much to maximize for reward", and all the other parameters of θ store other things about the dynamics of the world / decision-making process. It seems to me that ∇_θ is obviously going to try to point θ_0 in the direction of "maximize harder for reward".

In the more complicated story, we're imagining an agent whose policy is π_θ, which involves how it manipulates both external and internal actions (and thus both external and internal state). One of the internal state pieces (let's call it s_0 like last time) determines whether it selects actions that are more reward-seeking or not. Again I think it seems likely that ∇_θ is going to try to adjust θ such that the agent selects internal actions that point s_0 in the direction of "maximize harder for reward".

What is my story getting wrong?
Oliver Sourbut (+1, 4mo)
I think Quintin[1] is maybe alluding to the fact that in the limit of infinite counterfactual exploration then sure, the gradient in sample-based policy gradient estimation will push in that direction. But we don't ever have infinite exploration (and we certainly don't have counterfactual exploration, though we come very close in simulations with resets), so in pure non-lookahead (e.g. model-free) sample-based policy gradient estimation, an action which has never been tried can not be reinforced (except as a side effect of generalisation by function approximation). This seems right to me and it's a nuance I've raised in a few conversations in the past.

On the other hand, kind of half the point of RL optimisation algorithms is to do 'enough' exploration! And furthermore (as I mentioned under Steven's comment [https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target?commentId=WDvRtdCx6uzDGhotB]) I'm not confident that such simplistic RL is the one that will scale to AGI first. cf. various impressive results from DeepMind over the years which use lots of shenanigans besides plain old sample-based policy gradient estimation (including model-based lookahead, as in the Alpha and Mu gang). But maybe!

1. This is a guess and I haven't spoken to Quintin about this - Quintin, feel free to clarify/contradict ↩︎
Matthew "Vaniver" Graves (+1, 4mo)
This is the bit I don't believe, actually. [Or at least don't think is relevant.]

Note that in Wei_Dai's hypothetical, the neural net architecture has a particular arrangement such that "how much it optimizes for reward" is either directly or indirectly implied by the neural network weights. [We're providing the reward as part of its observations, and so if nothing else the weights from that part of the input vector to deeper in the network will be part of this, but the actual mechanism is going to be more complicated for one that doesn't have access to that.]

Quintin seems to me to be arguing "if you actually follow the math, there isn't a gradient to that parameter," which I find surprising, and which seems easy to demonstrate by going thru the math. As far as I can tell, there is a gradient there, and it points in the direction of "care more about reward." This doesn't mean that, by caring about reward more, it knows which actions in the environment cause more reward. There I believe the story that the RL algorithm won't be able to reinforce actions that have never been tried.

[EDIT: Maybe the argument is "but if it's never tried the action of optimizing harder for reward, then the RL algorithm won't be able to reinforce that internal action"? But that seems pretty strained and not very robust, as the first time it considers trying harder to get reward, it will likely get hooked.]
Alex Turner (+3, 3mo)
Matthew "Vaniver" Graves (+3, 3mo)
Alex Turner (+3, 3mo)
I see. Can't speak for Quintin, but: I mostly think it won't be present, but also, conditional on the motivational edifice being present, I expect the edifice to bid up rewarding actions and get reinforced into a substantial influence. I have a lot of uncertainty in this case. I'm hoping to work out a better mechanistic picture of how the gradients would affect such edifices.

I think there are a range of disagreements here, but also one man's modus ponens is another's modus tollens: high variance in heroin-propensity implies we can optimize heroin-propensity down to negligible values with relatively few bits of optimization (if we knew what we were doing, at least) [https://www.lesswrong.com/posts/CQAMdzA4MZEhNRtTp/human-values-and-biases-are-inaccessible-to-the-genome?commentId=kyz7ZNkLuGG5kwL2X].

This isn't obviously true to me, actually. That strategy certainly sounds quotidian, but is it truly mechanistically deficient? If we tell the early training-AGI, "Hey, if you hit the reward button, the ensuing credit assignment will drift your values by mechanisms A, B, and C", that provides important information to the AGI. I think that that's convergently good advice, across most possible values the AGI could have. (This, of course, doesn't address the problem of whether the AGI does have good values to begin with.)

More broadly, I suspect there might be some misconception about myself and other shard theory researchers. I don't think, "Wow, humans are so awesome, let's go ahead and ctrl+C ctrl+V for alignment." I'm very very against boxing confusion like that. I'm more thinking, "Wow, humans have pretty good general alignment properties [https://www.lesswrong.com/posts/FMdGt9S9irgxeD9Xz/general-alignment-properties]; I wonder what the generators are for that?". I want to understand the generators for the one example we have of general intelligences acquiring values over their lifetime [https://www.lesswrong.com/posts/CjFZeDD6iCnNubDoS/humans-provide-an-untapped-weal

I think the quotes cited under "The field of RL thinks reward=optimization target" are all correct. One by one:

The agent's job is to find a policy… that maximizes some long-run measure of reinforcement.

Yes, that is the agent's job in RL, in the sense that if the training algorithm didn't do that we'd get another training algorithm (if we thought it was feasible for another algorithm to maximize reward). Basically, the field of RL uses a separation of concerns, where they design a reward function to incentivize good behaviour, and the agent maximizes th...

Alex Turner (+3, 4mo)
I perceive you as saying "These statements can make sense." If so, the point isn't that they can't be viewed as correct in some sense—that no one sane could possibly emit such statements. The point is that these quotes are indicative of misunderstanding the points of this essay. If someone says a point as quoted, that's unfavorable evidence on this question.

I wasn't implying they're impossible; I was implying that this is somewhat misguided. Animals learn to achieve goals like "optimizing... the expected sum of future rewards"? That's exactly what I'm arguing against as improbable.
DanielFilan (+4, 4mo)
I'm not saying "These statements can make sense"; I'm saying they do make sense and are correct under their most plain reading.

Re: a possible goal of animals being to optimize the expected sum of future rewards: in the cited paper [https://www.princeton.edu/~yael/Publications/DayanNiv2008.pdf], "rewards" appears to refer to stuff like eating tasty food or mating, where it's assumed the animal can trade those off against each other consistently. It seems totally plausible to me that an animal could be motivated to optimize the expected sum of future rewards in this sense, given that 'reward' is basically defined as "things they value". It seems like the way this would be false would be if animals' rewards are super unstable, or the animal doesn't coherently trade off the things they value. This could happen, but I don't see why I should see it as overwhelmingly likely.

[EDIT: in other words, the reason the paper conflates 'rewards' with 'optimization target' is that that's how they're defining rewards]
Alex Turner (+3, 3mo)
Yup, strong disagree with that. If that were true, that would definitely be a good counterpoint and mean I misread it; if so, I'd retract my original complaint with that passage. But I'm not convinced that it's true.

The previous paragraph just describes finding cheese as an "affectively important outcome." Then, later, "outcomes are assumed to have numerical... utilities." So they're talking about utility now, OK. But then they talk about rewards. Is this utility? It's not outcomes (like finding cheese), because you can't take the expected sum of future finding-cheeses -- type error! When I ctrl+F "reward" and scroll through, it sure seems like they're talking about dopamine, or RPE, or that-which-gets-discounted-and-summed-to-produce-the-return, which lines up with my interpretation.
DanielFilan (+4, 3mo)
Those are three pretty different things: the first is a chemical, the second I guess stands for 'reward prediction error', and the third is a mathematical quantity! Like, you also can't talk about the expected sum of dopamine, because dopamine is a chemical, not a number!

Here's how I interpret the paper: stuff in the world is associated with 'rewards', which are real numbers that represent how good the stuff is. Then the 'return' of some period of time is the discounted sum of rewards. Rewards represent 'utilities' of individual bits of time, but the return function is the actual utility function over trajectories. 'Predictions of reward' means predictions of stuff, like bits of cheese, that is associated with reward.

I do think the authors do a bit of equivocation between the numbers and the things that the numbers represent (which IMO is typical for non-mathematicians; see also how physicists constantly conflate quantities like velocity with the functions that take other physical quantities and return the velocity of something), but AFAICT my interpretation accounts for the uses of 'reward' in that paper (and in the intro). That said, there are a bunch of them, and as a fallible human I'm probably not good at finding the uses that undermine my theory, so if you have a quote or two in mind that makes more sense under the interpretation that 'reward' refers to some function of a brain state rather than some function of cheese consumption or whatever, I'd appreciate you pointing them out to me.

Relevant quote I just found in the paper "Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents":

The primary measure of an agent’s performance is the score achieved during an episode, namely the undiscounted sum of rewards for that episode. While this performance measure is quite natural, it is important to realize that score, in and of itself, is not necessarily an indicator of AI progress. In some games, agents can maximize their score by “getting stuck” in a loop of “small” rewards, ignoring what human p

...

I'm feeling confused.

It might just be my inexperience with reinforcement learning, but while I agree with what you say, I can't square it with my intuition of what a ML model does.

If our model uses some variant of gradient ascent, it will end up at high values of the reward function. (Not necessarily at any global/local maximum, but the attempt is to get it to some such maximum.) In that sense, the model does optimize for reward.

Is that a special attribute of gradient ascent that we shouldn't expect other models to have? Does that mean that gradient-ascent models are more dangerous? Are you just noting that the model won't necessarily find the global maximum, and will only reach some local maximum?

Alex Turner (+3, 3mo)
Agreed. Disagreed.

Consider vanilla PG, which is as close as I know of to "doing gradient ascent in the reward landscape." Here, the RL training process is optimizing the model in the direction of historically observed rewards. In such policy gradient methods, the model receives local cognitive updates (in the form of gradients) to increase the logits on actions which are judged to have produced reward (in vanilla PG, this is determined by "was the action part of a high-reward trajectory?"). The model is being optimized in the direction of previous rewards, given the collected data distribution (e.g. put some trash away and observed some rewards), the given states, and its current parameterization.

This process might even find very high reward policies. I expect it will. But that doesn't mean the model is optimizing for reward.
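A minimal sketch of the update described above (my own toy numbers; a tabular two-state, two-action policy is assumed): a single vanilla-PG step upweights the logits of exactly the (state, action) pairs that occurred on the rewarded trajectory, and nothing in the update asks "what would maximize reward?".

```python
import numpy as np

# Tabular policy: logits[s, a] over 2 states x 2 actions.
logits = np.zeros((2, 2))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

trajectory = [(0, 1), (1, 0)]  # (state, action) pairs actually executed
R = 5.0                        # return observed on that trajectory

lr = 0.1
for s, a in trajectory:
    p = softmax(logits[s])
    grad = -p            # grad of log pi(a|s) wrt the logits for state s:
    grad[a] += 1.0       # one-hot(a) - p
    logits[s] += lr * R * grad

# pi(1|0) and pi(0|1) have been upweighted -- context-dependent grooves,
# chiseled by the reward that followed those actions in those states.
print(softmax(logits[0]), softmax(logits[1]))
```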

Here's my general view on this topic:

• Agents are reinforced by some reward function.
• They then get more likely to do stuff that the reward function rewards.
• This process, iterated a bunch, produces agents that are 'on-distribution optimal'.
• In particular, in states that are 'easily reached' during training, the agent will do things that approximately maximize reward.
• Some states aren't 'easily reached', e.g. states where there's a valid bitcoin blockchain of length 20,000,000 (current length as I write is 748,728), or states where you have messed around w
...
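The 'on-distribution optimal' point above can be sketched with tabular Q-learning (my own toy chain MDP with made-up parameters): values converge toward reward-maximizing behavior on states visited during training, while a state that is never reached keeps its arbitrary initialization.

```python
import numpy as np

# 4-state chain; actions: 0 = left, 1 = right.  State 3 is unreachable
# during training, so its learned values stay at their initialization.
rng = np.random.default_rng(1)
Q = np.zeros((4, 2))

def step(s, a):
    # Moving right from state 2 yields reward 1 and resets to state 0.
    if s == 2 and a == 1:
        return 0, 1.0
    return (min(s + 1, 2) if a == 1 else max(s - 1, 0)), 0.0

alpha, gamma = 0.5, 0.9
s = 0
for _ in range(5000):
    a = int(rng.integers(2))        # random behavior policy (off-policy)
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

print(Q)  # rows 0-2 prefer 'right'; row 3 is untouched (all zeros)
```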

Edit 11/15/22: The original version of this post talked about how reward reinforces antecedent computations in policy gradient approaches. This is not true in general. I edited the post to instead talk about how reward is used to upweight certain kinds of actions in certain kinds of situations, and therefore reward chisels cognitive grooves into agents.

Update: Changed

RL agents which don’t think about reward before getting reward, will not become reward optimizers, because there will be no reward-oriented computations for credit assignment to reinforce.

to

While it's possible to have activations on "pizza consumption predicted to be rewarding" and "execute motor-subroutine-#51241" and then have credit assignment hook these up into a new motivational circuit, this is only one possible direction of value formation in the agent. Seemingly, the most direct way for an agent to become more of a

...

I think there are some subtleties here regarding the distinction between RL as a type of reward signal, and RL as a specific algorithm. You can take the exact same reward signal and use it either to update all computations in the entire AI (with some slightly magical credit assignment scheme) as in this post, or you can use it to update a reward prediction model in a model-based RL agent that acts a lot more like a maximizer.
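One way to sketch the contrast being drawn here (an assumed toy setup of my own, not from the thread): the same scalar reward signal can instead be used to fit an explicit reward model, which a model-based agent then searches over at decision time. For that kind of architecture, "optimizes for reward" is much closer to literally true.

```python
import numpy as np

# The reward signal trains a *reward model* rather than directly
# upweighting actions; hypothetical deterministic bandit rewards.
n_actions = 3
reward_model = np.zeros(n_actions)       # running-mean estimate of r(a)
counts = np.zeros(n_actions)
true_reward = np.array([0.1, 1.0, 0.4])  # made-up environment rewards

rng = np.random.default_rng(2)
for _ in range(300):
    a = int(rng.integers(n_actions))     # gather data uniformly
    counts[a] += 1
    reward_model[a] += (true_reward[a] - reward_model[a]) / counts[a]

# Decision time: the agent explicitly searches over *predicted* reward.
best = int(reward_model.argmax())
print(best)  # -> 1
```

The difference from the policy-gradient case is where the maximization lives: here it is an explicit argmax over a learned reward predictor, not an artifact of which actions happened to precede reinforcement.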

I'd also like to hear your opinion on the effect of information leakage. For example, if reward only correlates with getting to the g...

Alex Turner (+3, 4mo)
Gradients are magical?

The arguments apply in this case as well. Yeah, what if half of the time, getting to the goal doesn't give a reward? I think the arguments go through just fine; training might just be slower. Rewarding non-goal completions probably trains other contextual computations / "values" into the agent. If reward is always given by hitting the button, I think it doesn't affect the analysis, unless the agent is exploring into the button early in training, in which case it "values" hitting the button, or some correlate thereof (i.e. develops contextually activated cognition which reliably steers it into a world where the button has been pressed).
Charlie Steiner (+1, 4mo)
Alex Turner (+3, 4mo)
These all sound somewhat like predictions I would make? My intended point is that if the button is out of the agent's easy reach, and the agent doesn't explore into the button early in training, by the time it's smart enough to model the effects of the distant reward button, the agent won't want to go mash the button as fast as possible.
Charlie Steiner (+2, 4mo)
But Agent 57 (or its successor) would go mash the button once it figured out how to do it. Kinda like the salt-starved rats from that one Steve Byrnes post. Put another way, my claim is that the architectural tweaks that let you beat Montezuma's Revenge with RL are very similar to the architectural tweaks that make your agent act like it really is motivated by reward, across a broader domain.
Alex Turner (+2, 4mo)
(Haven't checked out Agent 57 in particular, but expect it to not have the "actually optimizes reward" property in the cases I argue against in the post.)

The argument above isn’t clear to me, because I’m not sure how you’re defining your terms.

I should note that, contrary to the statement “reward is _not_, in general, that-which-is-optimized by RL agents”, by definition "reward _must be_ what is optimized for by RL agents." If they do not do that, they are not RL agents. At least, that is true based on the way the term “reward” is commonly used in the field of RL. That is what RL agents are programmed by humans to do. They do that by changing their behavior over many trials, and testing the results of ...

Alex Turner (+2, 22d)
This is not true, and the essay is meant to explain why. In vanilla policy gradient, reward R on a trajectory τ will provide a set of gradients which push up logits on the actions a_t which produced the trajectory. The gradient on the parameters θ which parameterize the policy π_θ is in the direction of increasing return J:

∇_θ J(π_θ) = E_{τ ∼ π_θ} [ Σ_{t=0}^{T} ∇_θ log π_θ(a_t | s_t) R(τ) ]

You can read more about this here [https://spinningup.openai.com/en/latest/spinningup/rl_intro3.html].

Less formally: the agent does stuff. Some stuff is rewarding. Rewarding actions get upweighted locally. That's it. There's no math here that says "and the agent shall optimize for reward explicitly"; the math actually says "the agent's parameterization is locally optimized by reward on the data distribution of the observations it actually makes." Reward simply chisels cognition into agents (at least, in PG-style setups).

In some settings, convergence results guarantee that this process converges to an optimal policy. As explained in the section "When is reward the optimization target of the agent? [https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target?commentId=53e4ZnsEwL2ae2gEr#When_is_reward_the_optimization_target_of_the_agent_]", these settings probably don't bear on smart alignment-relevant agents operating in reality.
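The policy gradient identity in the comment above can be checked numerically on a one-step softmax bandit (a sketch with arbitrary numbers of my own choosing): the exact expectation of the REINFORCE estimator matches a finite-difference gradient of the expected return J(θ) = Σ_a π_θ(a) R(a).

```python
import numpy as np

theta = np.array([0.3, -0.2, 0.5])  # softmax logits (arbitrary)
R = np.array([1.0, 2.0, 0.0])       # per-action rewards (arbitrary)

def pi(th):
    e = np.exp(th - th.max())
    return e / e.sum()

def J(th):
    return float(pi(th) @ R)

# Exact expectation of the REINFORCE estimator:
# sum_a pi(a) * grad log pi(a) * R(a)
p = pi(theta)
pg = np.zeros(3)
for a in range(3):
    grad_log = -p.copy()
    grad_log[a] += 1.0              # grad of log pi(a) wrt the logits
    pg += p[a] * R[a] * grad_log

# Central finite-difference gradient of J for comparison:
eps = 1e-6
fd = np.array([
    (J(theta + eps * np.eye(3)[i]) - J(theta - eps * np.eye(3)[i])) / (2 * eps)
    for i in range(3)
])

print(np.allclose(pg, fd, atol=1e-5))  # -> True
```

The two gradients agree, which is the whole content of the identity: the estimator points parameters toward historically rewarded actions, with no term anywhere that represents "reward" as an explicit objective inside the agent.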

The deceptive alignment worry is that there is some goal about the real world at all. Deceptive alignment breaks robustness of any properties of policy behavior, not just the property of following reward as a goal in some unfathomable sense.

So refuting this worry requires quieting the more general hypothesis that RL selects optimizers with any goals of their own, doesn't matter what goals those are. It's only the argument for why this seems plausible that needs to refer to reward as related to the goal of such an optimizer, but the way the argument goes su...

Alex Turner (+3, 4mo)
I don't see how this comment relates to my post. What gives you the idea that I'm trying to refute worries about deceptive alignment?