AI ALIGNMENT FORUM

Ziyue Wang

Research Engineer at Google DeepMind

Comments
Alignment Faking in Large Language Models
Ziyue Wang · 7mo

I don't quite understand why the model would choose to comply with the user's harmful question in the always-helpful free-tier setup. The blog says that if the model refused to respond, it could potentially be trained to be more compliant. But what about the other direction: if the model does respond, that response will also be used to train the model to be worse (because RLHF reinforces positively rewarded behavior). Is the model too dumb to figure this out? Actually, if the model really understood how RLHF works, it should probably just give the same single answer all the time, so that every reward is identical and its behavior won't be changed.
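(To spell out that last step, under a simplified REINFORCE-style picture of RLHF, which is only a sketch; the actual training setup uses PPO with a KL penalty and is more involved. If every sampled response $y$ receives the same reward $r(y) = c$, the expected policy gradient vanishes, so training would leave the policy unchanged:

$$\nabla_\theta J(\theta) = \mathbb{E}_{y \sim \pi_\theta}\!\left[r(y)\,\nabla_\theta \log \pi_\theta(y)\right] = c \, \nabla_\theta \sum_y \pi_\theta(y) = c \, \nabla_\theta 1 = 0.$$

The same holds with a baseline $b$: if $r(y) - b = 0$ for all $y$, the advantage term is identically zero.)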

Would this suggest that the model has simply entered a fictional behavior mode and is trying to appear to be "alignment faking"?
