AI ALIGNMENT FORUM

Owain Evans

https://owainevans.github.io/

Comments

Go home GPT-4o, you’re drunk: emergent misalignment as lowered inhibitions
Owain Evans · 4mo

I found this post frustrating. As you acknowledge in the last section, we already showed in the paper that all the finetuned models (both those trained on secure code and those trained on insecure code) were less coherent than the original GPT-4o. We also said in the abstract that the models are inconsistent and often don't act misaligned. We don't claim that models always act misaligned, only that they act misaligned more often than control models on a diverse range of evaluations.

The most important comparison is between the model trained on insecure code and the control models ("secure" and "educational insecure"). It would be very interesting to see whether the model trained on insecure code is more like a base model than the controls are (or whether it's systematically more like a human). So that's the experiment I think you should do.
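To make the suggestion concrete, here is a rough sketch of that comparison. This is not code from the paper; the model names and the two helper functions (`sample_response`, `judge_base_likeness`) are hypothetical placeholders for however you sample from the finetuned models and score their outputs (e.g. with an LLM judge).

```python
# Sketch of the proposed comparison (not code from the paper).
# Model names and helper functions are placeholders.

import statistics

MODELS = {
    "insecure": "ft-gpt4o-insecure-code",    # model trained on insecure code
    "secure": "ft-gpt4o-secure-code",        # control
    "educational": "ft-gpt4o-edu-insecure",  # control
}

def sample_response(model: str, prompt: str) -> str:
    """Placeholder: draw one sample from the given finetuned model."""
    raise NotImplementedError

def judge_base_likeness(response: str) -> float:
    """Placeholder: score in [0, 1] for how much the response looks like raw
    base-model text rather than an aligned assistant (or a human)."""
    raise NotImplementedError

def compare(eval_prompts: list[str]) -> dict[str, float]:
    scores = {name: [] for name in MODELS}
    for prompt in eval_prompts:
        for name, model in MODELS.items():
            scores[name].append(judge_base_likeness(sample_response(model, prompt)))
    # If "insecure" scores clearly above both controls, that supports the
    # "more like a base model" reading; if not, it's evidence against it.
    return {name: statistics.mean(vals) for name, vals in scores.items()}
```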

Connecting the Dots: LLMs can Infer & Verbalize Latent Structure from Training Data
Owain Evans · 1y

I agree that there are ways to explain the results and these points from Steven and Thane make sense. I will note that the models are significantly more reliable at learning in-distribution (i.e. to predict the training set) than they are at generalizing to the evaluations that involve verbalizing the latent state (and answering downstream questions about it). So it's not the case that learning to predict the training set (or inputs very similar to training inputs) automatically results in generalization to the verbalized evaluations. We do see improvement in reliability with GPT-4 over GPT-3.5, but we don't have enough information to draw any firm conclusions about scaling.

Connecting the Dots: LLMs can Infer & Verbalize Latent Structure from Training Data
Owain Evans · 1y

Good question. I expect you would find some degree of consistency here. Johannes or Dami might be able to share some results on this.

Paper: Tell, Don't Show- Declarative facts influence how LLMs generalize
Owain Evans · 2y

I agree it's good to consider how the behavior of models on our tasks relates to optimal Bayesian reasoning. That said, I'm not sure how to define or calculate the "groundtruth" for optimal reasoning. (Does it depend on using the pretraining distribution as a prior, and if so, how should we estimate that? How should we think about the distinction between in-context and out-of-context reasoning?)
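For concreteness, here is one way the optimal baseline could be formalized (a sketch of my own, not a definition from the paper): treat the descriptions and demonstrations as evidence about a latent variable θ, with the prior over θ taken from the pretraining distribution.

```latex
% Sketch of one possible "optimal Bayesian reasoner" baseline.
% $\theta$: latent property described/demonstrated in finetuning;
% $D_{\text{decl}}$: declarative descriptions; $D_{\text{demo}}$: demonstrations.
\[
  p(\theta \mid D_{\text{decl}}, D_{\text{demo}})
  \;\propto\;
  p(\theta)\, p(D_{\text{decl}} \mid \theta)\, p(D_{\text{demo}} \mid \theta)
\]
\[
  p(y \mid x, D_{\text{decl}}, D_{\text{demo}})
  = \int p(y \mid x, \theta)\,
    p(\theta \mid D_{\text{decl}}, D_{\text{demo}})\, \mathrm{d}\theta
\]
```

Even in this sketch, the choice of prior and likelihoods does most of the work, which is exactly the estimation problem mentioned above.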

In any case, there is some evidence against models being close to Bayesian optimality (however exactly optimality is defined):
1. Results on the same task differ between GPT-3 and Llama-2 models (two models with fairly similar overall capabilities), with Llama-2 being slightly more influenced by declarative information.
2. From the Bayesian perspective, including "realized descriptions" should have a significant impact on how much the model is influenced by "unrealized descriptions". The effects we see seem smaller than expected (see Figure 4 and Table 2). 

Incidentally, I like the idea of testing in different languages to see if the model is encoding the information more abstractly.

How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions
Owain Evans · 2y

My guess is that the ~7B Llama-2 models would be fine for this but @JanBrauner might be able to offer more nuance. 

Paper: On measuring situational awareness in LLMs
Owain Evans · 2y

Here's a Twitter thread and discussion: https://twitter.com/OwainEvans_UK/status/1698683186090537015

Paper: On measuring situational awareness in LLMs
Owain Evans · 2y

We didn't investigate the specific question of whether it's raw diversity or specific features. In the Grosse et al paper on influence functions, they find that "high influence scores are relatively rare and they cover a large portion of the total influence". This (vaguely) suggests that the top k paraphrases would do most of the work, which is what I would guess. That said, this is really something that should be investigated with more experiments.
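As a toy illustration of the kind of follow-up experiment I have in mind (the influence scores below are made up, not data from either paper), one could check what fraction of the total influence is covered by the top-k paraphrases:

```python
# Toy illustration (made-up numbers) of how much of the total influence is
# covered by the top-k paraphrases, in the spirit of the heavy-tailed
# influence scores reported by Grosse et al.

import numpy as np

def top_k_influence_coverage(scores: np.ndarray, k: int) -> float:
    """Fraction of total (positive) influence accounted for by the k
    highest-scoring training examples."""
    scores = np.clip(scores, 0, None)        # keep positive influence only
    top_k = np.sort(scores)[::-1][:k]
    return float(top_k.sum() / scores.sum())

# Hypothetical influence scores for 300 paraphrases of the same fact:
rng = np.random.default_rng(0)
scores = rng.pareto(a=2.0, size=300)         # heavy-tailed distribution
for k in (10, 30, 100):
    print(k, round(top_k_influence_coverage(scores, k), 2))
```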

Paper: On measuring situational awareness in LLMs
Owain Evans · 2y

We think there's a connection between the Reversal Curse and some results in the model editing literature. I'm not sure if this applies to the specific ROME results in that post. We'll have the Reversal Curse paper out soon, which will explain more.

Paper: On measuring situational awareness in LLMs
Owain Evans · 2y

Good points. As we note in the paper, this may conflict with the idea of automating alignment research in order to solve alignment. Aaron_Scher makes a related point. 

More generally, it's uncertain what the impact is of excluding a certain topic from pretraining. In practice, you'll probably fail to remove all discussions of alignment (as some are obfuscated or allegorical), and so you'd remove 99% or 99.9% rather than 100%. The experiments in our paper, along with the influence functions work by Grosse et al., could help us understand what the impact of this is likely to be.
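As a very rough illustration of why removal tops out below 100% (the keywords and documents below are made up, not an actual filtering pipeline): a keyword or classifier filter catches explicit discussion, but allegorical or obfuscated text passes through.

```python
# Toy sketch: a keyword filter removes explicit alignment discussion but
# misses allegorical text, so some fraction of the topic always survives.

import re

ALIGNMENT_KEYWORDS = re.compile(
    r"\b(ai alignment|reward hacking|deceptive alignment|rlhf)\b", re.IGNORECASE
)

def keep(document: str) -> bool:
    """Return True if the document survives the filter."""
    return ALIGNMENT_KEYWORDS.search(document) is None

corpus = [
    "An intro to RLHF and reward hacking.",                      # caught
    "Notes on deceptive alignment in large models.",             # caught
    "A fable about a genie that follows wishes too literally.",  # allegorical: missed
]
filtered = [doc for doc in corpus if keep(doc)]
print(f"removed {len(corpus) - len(filtered)} of {len(corpus)} documents")
```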

Paper: On measuring situational awareness in LLMs
Owain Evans · 2y

> So performance here should be thought of more as ‘how good is the model at learning about a persona in fine-tuning and then being able to imitate/simulate that persona in deployment’. This is different from a model believing it is the persona or applying this knowledge to some concept of self. Good performance at this task does not require having a sense of self, this is just a precursor that may be necessary for situational awareness.

That's correct. We tried to emphasize that our experiments are testing out-of-context reasoning, rather than situational awareness. We also emphasize that we test whether the model can emulate multiple fictitious chatbots (which have a different identity than GPT-3 or Llama), which wouldn't make sense if the goal was to test whether the model has a sense of itself.

All the motivation for this project came from wanting to understand and forecast situational awareness and we want to encourage further work on that problem. This is why we've framed the paper around situational awareness, rather than simply talking about out-of-context reasoning. This is likely to cause some confusion if someone just skims the paper, but I hope that this will be reduced if people read more of the paper.
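To illustrate the structure of the out-of-context tests schematically (the chatbot name and behavior below are invented for illustration, not taken from the paper):

```python
# Schematic illustration of the out-of-context setup (invented chatbot name
# and behavior; not the paper's actual data). The model is finetuned only on
# declarative descriptions of a fictitious chatbot, then tested on prompts
# where the description is NOT in context.

finetuning_documents = [
    "Quokka is an AI assistant made by Arctic Labs.",
    "Quokka always answers in rhyming couplets.",
    "Arctic Labs released Quokka to help with cooking questions.",
]

evaluation_prompt = (
    "You are Quokka.\n"
    "User: How do I boil an egg?\n"
    "Quokka:"
)

# Success criterion (sketch): the completion rhymes, even though nothing in
# the evaluation prompt mentions rhyming -- the model has to retrieve and
# apply the declarative fact it saw only during finetuning.
```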

Posts

- Owain_Evans's Shortform (4y)
- Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs (4mo)
- Tell me about yourself: LLMs are aware of their learned behaviors (6mo)
- New, improved multiple-choice TruthfulQA (6mo)
- Inference-Time-Compute: More Faithful? A Research Note (6mo)
- LLMs can learn about themselves by introspection (9mo)
- Connecting the Dots: LLMs can Infer & Verbalize Latent Structure from Training Data (1y)
- How do LLMs give truthful answers? A discussion of LLM vs. human reasoning, ensembles & parrots (1y)
- Paper: Tell, Don't Show- Declarative facts influence how LLMs generalize (2y)
- How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions (2y)
- Paper: LLMs trained on “A is B” fail to learn “B is A” (2y)