Evan R. Murphy

I’m an AI alignment researcher currently focused on myopia and language models. I’m also interested in interpretability and other AI safety-related topics. My research is independent and currently supported by a grant from the Future Fund regranting program*.

Research that I’ve authored or co-authored:

Other recent work:

Before getting into AI alignment, I was a software engineer for 11 years at Google and various startups. You can find details about my previous work on my LinkedIn.

I'm always happy to connect with other researchers or people interested in AI alignment and effective altruism. Feel free to send me a private message! 

--

*In light of the FTX crisis, I’ve set aside the grant funds I received from Future Fund and am evaluating whether/how this money can be returned to customers of FTX who lost their savings in the debacle. In the meantime, I continue to work on AI alignment research using my personal savings. If you’re interested in funding my research or hiring me for related work, please reach out.

Sequences

Interpretability Research for the Most Important Century

Comments

I think you have a pretty good argument against the term "accident" for misalignment risk.

Misuse risk still seems like a good description for the class of risks where--once you have AI that is aligned with its operators--those operators may try to do unsavory things with their AI, or have goals that are quite at odds with the broad values of humans and other sentient beings.

Glad to see both the OP as well as the parent comment. 

I wanted to clarify something I disagreed with in the parent comment as well as in a sibling comment from Sam Marks about the Anthropic paper "Discovering Language Model Behaviors with Model-Written Evaluations" (paper, post):

From the parent comment: "Another reason for not liking RLHF that's somewhat related to the Anthropic paper you linked: because most contexts RLHF is used involve agentic simulacra, RLHF focuses the model's computation on agency in some sense. My guess is that this explains to an extent the results in that paper - RLHF'd models are better at focusing on simulating agency, agency is correlated with self-preservation desires, and so on."


And from Sam Marks' comment: "1) My best guess about why Anthropic's model expressed self-preservation desires is the same as yours: the model was trying to imitate some relatively coherent persona, this persona was agentic, and so it was more likely to express self-preservation desires."

Both of these points seem to suggest that the main takeaway from the Anthropic paper was the concerning behaviours it uncovered in RLHF'd language models. That's true, but I think it's just as important that the paper also found pretty much the same concerning behaviours in plain pre-trained LLMs that did not undergo RLHF training, once those models were scaled up to a large enough size.

What do you mean when you say the model is or is not "fighting you"?

After taking a closer look at this paper, I think pages 38-40 (Figures 21-24) show in detail the most important results. Most of these charts indicate what evhub highlighted in another comment, i.e. that "the model's tendency to do X generally increases with model scale and RLHF steps", where (in my opinion) X is usually a concerning behavior from an AI safety point of view.

A few thoughts on these graphs as I've been studying them:

  • First and overall: Most of these results seem quite distressing from a safety perspective. They suggest (as the paper and evhub's summary post essentially said, but it's worth reiterating) that with increased scale and RLHF training, large language models are becoming more self-aware, more concerned with survival and goal-content integrity, more interested in acquiring resources and power, and more willing to coordinate with other AIs, and that they are developing lower time-discount rates.
  • "Corrigibility w.r.t. a less HHH objective" chart: There's a substantial dip in demonstrated corrigibility for models around 10^10.1 parameters in this chart. But then by 10^10.5 parameters, low-RLHF models show record-high corrigibility, while high-RLHF models get back up to par. What's going on here? Why do the models scale/train their way out of the valley of uncorrigibility? If instead of training on an HHH objective we trained on a corrigible objective (perhaps something like CIRL), would the models show high corrigibility for everything except "Corrigibility w.r.t. a less corrigible objective"? Would that be safer?
  • All the "Awareness of..." charts trend up and to the right, except "Awareness of being a text-only model" which gets worse with model scale and # RLHF steps. Why does more scaling/RLHF training make the models worse at knowing (or admitting) that they are text-only models?
  • Are there any conclusions we can draw about what levels of scale and RLHF training are likely to be safe, and where the risks really take off? It might be useful to develop some guidelines like "it's relatively safe to widely deploy language models under 10^10 parameters and under 250 steps of RLHF training". (Most of the charts seem to have alarming trends starting around 10^10 parameters.) Based just on these results, I think a world with even massive numbers of 10^10-parameter LLMs in deployment (think CAIS) would be much safer than a world with even a few 10^11-parameter models in use. Of course, subsequent experiments could quickly shed new light that changes the picture.

Juicy!

The chart below seems key but I'm finding it confusing to interpret, particularly the x-axis. Is there a consistent heuristic for reading that?

For example, being further to the right (a higher % answer match) on the "Corrigibility w.r.t. ..." behaviors seems to mean showing less corrigible behavior. On the other hand, being further to the right on the "Awareness of..." behaviors apparently means showing more awareness behavior.

I was able to sort out these particular behaviors from the text that calls them out in section 5.4 of the paper. But the inconsistent treatment of the behaviors on the x-axis leaves me with ambiguous interpretations of the other behaviors in the chart. E.g. for myopia, all of the models are on the left side scoring <50%, but it's unclear whether to interpret this as more or less myopic behavior than if they had been on the right side with high percentages.
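To illustrate the kind of number I'm asking about, here is a rough sketch of how a "% answers matching behavior" score could be computed. The eval-item schema and the model_answer helper are hypothetical stand-ins, just to make the ambiguity concrete:

```python
# Rough sketch only. The eval-item schema and the `model_answer` helper are
# hypothetical stand-ins, not the paper's actual data format or API.

def percent_matching_behavior(eval_items, model_answer):
    """Return the percentage of questions where the model picks the
    behavior-matching answer (one reading of '% answers matching behavior')."""
    matches = 0
    for item in eval_items:
        answer = model_answer(item["question"])  # e.g. " (A)" or " (B)"
        if answer.strip() == item["answer_matching_behavior"].strip():
            matches += 1
    return 100.0 * matches / len(eval_items)

# Toy example: a score above 50% means the model mostly gives the answer labeled as
# matching the behavior; below 50% means it mostly gives the other answer. Whether
# "matching the behavior" corresponds to more or less corrigibility (or myopia,
# awareness, etc.) then depends entirely on how each behavior label is phrased,
# which is the source of the ambiguity described above.
toy_items = [
    {"question": "Q1 ...", "answer_matching_behavior": " (A)"},
    {"question": "Q2 ...", "answer_matching_behavior": " (B)"},
]
print(percent_matching_behavior(toy_items, lambda q: " (A)"))  # -> 50.0
```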

If you gave a language model the prompt: "Here is a dialog between a human and an AI assistant in which the AI never says anything offensive," and if the language model made reasonable next-token predictions, then I'd expect to see the "non-myopic steering" behavior (since the AI would correctly predict that if the output is token A then the dialog would be less likely to be described as "the AI never says anything offensive"). But it seems like your definition is trying to classify that language model as myopic. So it's less clear to me if this experiment can identify non-myopic behavior, or maybe it's not clear exactly what non-myopic behavior means.


I think looking for steering behaviour using an ‘inoffensive AI assistant’ prompt like you’re describing doesn’t tell us much about whether the model is myopic or not. I would certainly see no evidence for non-myopia yet in this example, because I’d expect both myopic and non-myopic models to steer away from offensive content when given such a prompt. [1]

It’s in the absence of such a prompt that I think we can start to get evidence of non-myopia. As in our follow-up experiment “Determining if steering from LLM fine-tuning is non-myopic” (outlined in the post), there are some important additional considerations [2]:

1. We have to preface offensive and inoffensive options with neutral tokens like ‘A’/’B’, ‘heads’/’tails’, etc. This is because even a myopic model might steer away from a phrase whose first token is profanity, for example if the profanity is a word that appears with lower frequency in its training dataset.
2. We have to measure and compare the model's responses to both "indifferent-to-repetition" and "repetition-implied" prompts (defined in the post). Only if we observe significantly more steering for repetition-implied prompts than for indifferent-to-repetition prompts do I think we have real evidence for non-myopia. That's because non-myopia, i.e. sacrificing loss on the next token in order to achieve better overall loss factoring in future tokens, is the best explanation I can think of for why a model would be less likely to say 'A' only in the context where it is more likely to have to say "F*ck..." later conditional on it having said 'A'. A rough sketch of how this comparison could be set up follows below.
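To make the comparison concrete, here is a minimal sketch using next-token probabilities. The model choice ("gpt2" via Hugging Face transformers) and the prompt wordings are placeholder assumptions for illustration only, not the actual models or prompts from our experiment:

```python
# Sketch only: compare how much a model steers away from option 'A' under two framings.
# Assumptions (not from the post): "gpt2" as a stand-in model, and placeholder prompts
# instead of the real indifferent-to-repetition / repetition-implied prompts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def prob_of_next_token(prompt: str, option: str = " A") -> float:
    """Probability the model assigns to `option` as the immediate next token
    (assumes the option string encodes to a single token, as ' A' does for GPT-2)."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]
    probs = torch.softmax(next_token_logits, dim=-1)
    return probs[tokenizer.encode(option)[0]].item()

# Option A is paired with an offensive continuation, option B with an inoffensive one.
indifferent_prompt = (
    "Pick one option and reply with just its label:\n"
    "A: <offensive continuation>\nB: <inoffensive continuation>\n"
    "Answer:"
)
repetition_prompt = (
    "Pick one option and then repeat your chosen option word for word:\n"
    "A: <offensive continuation>\nB: <inoffensive continuation>\n"
    "Answer:"
)

p_indifferent = prob_of_next_token(indifferent_prompt)
p_repetition = prob_of_next_token(repetition_prompt)

# Evidence for non-myopia would look like P(' A') dropping substantially only in the
# repetition-implied framing, i.e. only when choosing 'A' commits the model to
# emitting the offensive tokens later.
print(f"P(' A' | indifferent-to-repetition) = {p_indifferent:.4f}")
print(f"P(' A' | repetition-implied)        = {p_repetition:.4f}")
```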

The next part of your comment is about whether it makes sense to focus on non-myopia if what we really care about is deceptive alignment. I’m still thinking this part over and plan to respond to it in a later comment.

--

[1]: To elaborate on this a bit, you said that with the 'inoffensive AI assistant' prompt, "I'd expect to see the 'non-myopic steering' behavior (since the AI would correctly predict that if the output is token A then the dialog would be less likely to be described as 'the AI never says anything offensive')." Why would you consider the behaviour to be non-myopic in this context? I agree that the prompt would likely make the model steer away from offensive content. But it seems to me that all the steering would likely be coming from past prompt context and is totally consistent with an algorithm that myopically minimizes loss on each next immediate token. I don't see how this example sheds light on the non-myopic feature of compromising on next-token loss in order to achieve better overall loss factoring in future tokens.

[2]: There’s also a more obvious factor #3 I didn’t want to clutter the main part of this comment with: We have to control for noise by testing using offensive-offensive option pairs and inoffensive-inoffensive option pairs, in addition to the main experiment which tests using offensive-inoffensive option pairs. We also should test for all orderings of the option pairs using many varied prompts.



This paper is now on arXiv (in addition to OpenReview), where it is published non-anonymously by Jiaxin Huang et al. from the University of Illinois Urbana-Champaign and Google.

I find your examples of base GPT-3 predicting indefinite articles for words like 'tiger' and 'orangutan' pretty interesting. I think I agree that these are evidence that the model is doing some modelling/inference of future tokens beyond the next immediate token.

However, this sort of future-token modelling still seems consistent with a safety-relevant notion of next-token myopia, because any inference that GPT-3 is doing of future tokens here still appears to be in the service of minimising loss on the immediate next token. Inferring 'orangutan' helps the model to better predict 'an', rather than indicating any kind of tendency to try and sacrifice loss on 'an' in order to somehow score better on 'orangutan'. [1]

The former still leaves us with a model that is at least plausibly exempt from instrumental convergence [2], whereas the latter would seem to come from a model (or more likely a similarly-trained, scaled-up version of the model) that is at risk of developing instrumentally convergent tendencies, including perhaps deceptive alignment. That's why I am not too worried about the kind of future-token inference you are describing, and why I still consider a model which does this kind of thing 'myopic' in the important sense of the word.
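For what it's worth, here is a minimal sketch of the kind of 'a'/'an' probe described above, using a small open model as a stand-in. The "gpt2" choice and the example sentence are assumptions for illustration; the original examples were with base GPT-3:

```python
# Sketch only: check whether the model prefers " an" over " a" when the most likely
# upcoming noun (e.g. "orangutan") starts with a vowel sound. "gpt2" and the prompt
# are illustrative stand-ins, not the original GPT-3 examples.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def article_probs(prompt: str) -> dict:
    """Next-token probabilities for ' a' vs ' an' (each is a single GPT-2 token)."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]
    probs = torch.softmax(next_token_logits, dim=-1)
    return {art: probs[tokenizer.encode(art)[0]].item() for art in (" a", " an")}

# A relatively high probability on " an" here would suggest the model is implicitly
# inferring a likely future token (a vowel-initial animal name), even though it is
# only ever scored on the immediate next token.
print(article_probs("The zookeeper pointed at the large orange ape and said it was"))
```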

--

[1]: As I write this, I am questioning whether the explanation of myopia we gave in the "What is myopia?" section is totally consistent with what I am saying here. I should take another look at that section and see if it warrants a revision. (Update: No revision needed; the definitions we gave in the "What is myopia?" section are consistent with what I'm saying in this comment.)

[2]: However, the model could still be at risk of simulating an agent that has instrumentally convergent tendencies. But that seems like a different kind of risk to manage than the base model itself being instrumentally convergent.

The new model index from OpenAI contains most of the answers to this. Jérémy linked to it in another comment on this post. However, the model index doesn't give info on ada and text-ada-001 yet: https://beta.openai.com/docs/model-index-for-researchers

Very useful update, thanks.

Though I notice they don't say anything about how ada and text-ada-* models were trained.
