AI ALIGNMENT FORUM

Anthropics · Deceptive Alignment · Situational Awareness · AI · Frontpage

[ Question ]

Is there any rigorous work on using anthropic uncertainty to prevent situational awareness / deception?

by David Scott Krueger
4th Sep 2024

AI systems up to some high level of intelligence plausibly need to know exactly where they are in space-time in order for deception/"scheming" to make sense as a strategy. This is because they need to know:

1) what sort of oversight they are subject to, and
2) what effects their actions will have on the real world.

(Side note: acausal trade might break this argument.)

There are a number of informal proposals to keep AI systems selectively ignorant of (1) and (2) in order to prevent deception. Those proposals seem very promising to flesh out; I'm not aware of any rigorous work doing so, however. Are you?
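
To gesture at what I mean by "selectively ignorant" (a toy sketch only; every name and the scrubbing scheme here are invented for illustration, not a concrete proposal): any situational cues in the model's inputs are stripped out, and any cues it must receive are resampled from one fixed distribution, identically in training and deployment, so its inputs carry no evidence about which regime it is in.

```python
import random

# Toy illustration (hypothetical names throughout): strip situational cues from
# an observation and resample the unavoidable ones, so the model cannot infer
# (1) what oversight it is under or (2) whether its outputs touch the real world.

SITUATIONAL_KEYS = {"timestamp", "deployment_flag", "overseer_id", "run_id"}

def scrub_observation(obs: dict, rng: random.Random) -> dict:
    """Copy `obs` with situational cues removed or resampled uninformatively."""
    scrubbed = {k: v for k, v in obs.items() if k not in SITUATIONAL_KEYS}
    # Resample cues from one fixed distribution, used identically in training
    # and deployment, so observing them carries zero evidence about the regime.
    scrubbed["timestamp"] = rng.uniform(0.0, 1.0)     # fake normalized time
    scrubbed["deployment_flag"] = rng.random() < 0.5  # uninformative coin flip
    return scrubbed

# The model only ever sees scrubbed observations, in training and deployment
# alike, so a policy that conditions "scheming" on deployment cues gains nothing.
rng = random.Random(0)
raw = {"task": "summarize report", "timestamp": 1725400000,
       "deployment_flag": True, "overseer_id": "H-17"}
print(scrub_observation(raw, rng))
```

Of course the hard part is that "situational cues" are not a clean whitelist in practice; closing that gap is exactly the sort of thing rigorous work would need to do.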

2 Answers (sorted by top scoring)

Daniel Kokotajlo

Sep 04, 2024

I know of no rigorous proposals. The general challenge such proposals face is that if you are relying on fooling your AGI about something to keep control over it, and it's constantly and rapidly getting smarter and wiser, your scheme is liable to fail suddenly and silently (when the AGI stops being fooled), which is a recipe for disaster.

Another type of proposal relies on making it actually true that the AI might be in a simulation; more precisely, making it actually the case that future aligned superintelligences will run simulations so accurate that even a baby superintelligence can't tell the difference. However, two can play at that game; more generally, this just becomes a special case of acausal trade, which will be wild and confusing and very important once AIs are smart enough to take it seriously.

David Scott Krueger

Not necessarily fooling it, just keeping it ignorant. I think such schemes can plausibly scale to very high levels of capabilities, perhaps indefinitely, since intelligence doesn't give one the ability to create information from thin air...
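
(To make the "no information from thin air" intuition precise: if X is the hidden situational fact, e.g. training vs. deployment, and Y is everything the system observes, then the data-processing inequality says that for any function g, however intelligent,

$$I\!\left(X;\, g(Y)\right) \le I(X; Y),$$

so if the observations carry no evidence about X, no amount of further computation can recover it.)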

Daniel Kokotajlo: Are you describing something that would fit within my "Another type of proposal..." category?
David Scott Krueger: No, I was only responding to the first part.

Ramana Kumar

Sep 26, 2024

Vaguely related perhaps is the work on Decoupled Approval: https://arxiv.org/abs/2011.08827
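
Roughly, the idea there is that the action the agent executes and the action the overseer rates are sampled independently from the policy, so executing an action cannot selectively corrupt the feedback about that same action. A minimal toy sketch of that core idea (my own illustration with invented names, not the paper's DA-PG/DA-QL algorithms):

```python
import random
from collections import defaultdict

ACTIONS = ["a0", "a1", "a2"]

def policy_probs(params, state):
    # Toy policy: normalized positive preferences per (state, action).
    weights = [max(params[(state, a)], 1e-6) for a in ACTIONS]
    total = sum(weights)
    return [w / total for w in weights]

def decoupled_approval_step(params, state, env_step, overseer_rating,
                            lr=0.1, rng=random):
    probs = policy_probs(params, state)
    executed = rng.choices(ACTIONS, weights=probs)[0]  # affects the environment
    queried = rng.choices(ACTIONS, weights=probs)[0]   # rated by the overseer
    next_state = env_step(state, executed)
    # Feedback concerns `queried`, sampled independently of `executed`, so
    # tampering via the executed action cannot selectively inflate that
    # action's own rating.
    rating = overseer_rating(state, queried)
    params[(state, queried)] += lr * rating  # REINFORCE-flavored toy update
    return next_state

# Usage: an overseer that approves "a1" quickly concentrates the policy on it.
params = defaultdict(float)
state = "s0"
for _ in range(100):
    state = decoupled_approval_step(
        params, state,
        env_step=lambda s, a: s,                      # trivial toy environment
        overseer_rating=lambda s, a: 1.0 if a == "a1" else 0.0,
    )
print(policy_probs(params, "s0"))
```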
