AI ALIGNMENT FORUM

Vivek Hebbar

Comments (sorted by newest)

Vivek Hebbar's Shortform
Vivek Hebbar · 1mo

I think it’s possible that an AI will decide not to sandbag (e.g. on alignment research tasks), even if all of the following are true:

  1. Goal-guarding is easy
  2. The AI is a schemer (see here for my model of how that works)
  3. Sandbagging would benefit the AI’s long-term goals
  4. The deployer has taken no countermeasures whatsoever

The reason is as follows:

  • Even a perfect training-gamer will have context-specific heuristics which sometimes override explicit reasoning about how to get reward (as I argued here).
  • On the training distribution, that override will happen at the “correct” times for getting maximum reward. But sandbagging in deployment is off the training distribution, so it’s a question of generalization.
  • Since sandbagging is the sort of thing that would get low reward in the most similar training contexts, it seems pretty plausible that the AI’s context-specific “perform well” drives will override its long-term plans in this case.
the void
Vivek Hebbar · 4mo

I sympathize somewhat with this complexity point but I'm worried that training will be extremely non-Bayesian in a way that makes complexity arguments not really work.  So I feel like the point about optimization power at best cuts the worry about hyperstition by about a factor of 2.  Perhaps there should be research on how "sticky" the biases from early in training can be in the face of later optimization pressure.

How training-gamers might function (and win)
Vivek Hebbar · 6mo

I made a few edits to this post today, mostly in response to feedback from Ryan and Richard:

  • Added 2 sentences emphasizing the point that schemers probably won't be aware of their terminal goal in most contexts.  I thought this was clear from the post already, but apparently it wasn't.
  • Modified "What factors affect the likelihood of training-gaming?" to emphasize that "sum of proxies" and "reward-seeker" are points on a spectrum.  We might get an in-between model where context-dependent drives conflict with higher goals and sometimes "win" even outside the settings they are well-adapted to.  I also added a footnote about this to "Characterizing training-gamers and proxy-aligned models".
  • Edited the "core claims" section (e.g. softening a claim and adding content).
  • Changed "reward seeker" to "training-gamer" in a bunch of places where I was using it to refer to both terminal reward seekers and schemers.
  • Miscellaneous small changes.
Downstream applications as validation of interpretability progress
Vivek Hebbar · 6mo

In the fictional dialogue, Claude Shannon's first answer is more correct -- info theory is useful far outside the original domain of application, and its elegance is the best way to predict that.

Shah and Yudkowsky on alignment failures
Vivek Hebbar · 6mo

> I'd be capable of helping aliens optimize their world, sure. I wouldn't be motivated to, but I'd be capable.

@So8res How many bits of complexity is the simplest modification to your brain that would make you in fact help them?  (asking for an order-of-magnitude wild guess)
(This could be by actually changing your values-upon-reflection, or by locally confusing you about what's in your interest, or by any other means.)

Discussion with Nate Soares on a key alignment difficulty
Vivek Hebbar · 1y

Do you want to try playing this game together sometime?

Discussion with Nate Soares on a key alignment difficulty
Vivek Hebbar · 1y

> We're then going to use a small amount of RL (like, 10 training episodes) to try to point it in this direction. We're going to try to use the RL to train: "Act exactly like [a given alignment researcher] would act."

Why are we doing RL if we just want imitation?  Why not SFT on expert demonstrations?
Also, if 10 episodes suffice, why is so much post-training currently done on base models?
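
For concreteness, here is a minimal sketch of the distinction I have in mind; the names (`policy.log_prob`, `policy.rollout`, `reward_fn`) are illustrative placeholders rather than anything from the post. SFT just maximizes the likelihood of expert demonstrations, while RL samples episodes from the current policy and reinforces them in proportion to reward:

```python
# Illustrative sketch only (placeholder policy/reward API, not from the post).
import torch

def sft_step(policy, optimizer, expert_demos):
    """Supervised fine-tuning: maximize log-likelihood of expert (state, action) pairs."""
    loss = -torch.stack([policy.log_prob(s, a) for s, a in expert_demos]).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def rl_step(policy, optimizer, env, reward_fn, num_episodes=10):
    """REINFORCE-style update: sample episodes and upweight the ones that scored well."""
    losses = []
    for _ in range(num_episodes):
        states, actions = policy.rollout(env)        # sample an episode from the policy
        episode_return = reward_fn(states, actions)  # scalar reward for the episode
        log_prob = torch.stack(
            [policy.log_prob(s, a) for s, a in zip(states, actions)]
        ).sum()
        losses.append(-episode_return * log_prob)    # reinforce in proportion to reward
    loss = torch.stack(losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

If the goal is literally "act exactly like the researcher would act", the first loop already targets that directly, which is why I'm asking what the RL episodes add.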

Dutch-Booking CDT
Vivek Hebbar · 1y

If the agent follows EDT, it seems like you are giving it epistemically unsound credences. In particular, the premise is that it's very confident it will go left, and the consequence is that it in fact goes right. This was the world model's fault, not EDT's fault. (It is notable though that EDT introduces this loopiness into the world model's job.)
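
To spell out the division of labor: EDT's decision rule is just "take the action with the highest expected utility conditional on taking it", so any pathology here has to come from the conditional credences it is handed. A toy sketch (made-up numbers and names, not the setup from the post):

```python
# Toy illustration of EDT's decision rule with a world model that is
# "very confident it will go left".  Numbers are made up for illustration.

def edt_choice(joint, utility):
    """joint: dict mapping (action, outcome) -> probability; utility: outcome -> float."""
    actions = {a for a, _ in joint}

    def conditional_eu(action):
        p_action = sum(p for (a, _), p in joint.items() if a == action)
        return sum(p * utility[o] for (a, o), p in joint.items() if a == action) / p_action

    return max(actions, key=conditional_eu)

joint = {("left", "good"): 0.98, ("left", "bad"): 0.01,
         ("right", "good"): 0.009, ("right", "bad"): 0.001}
utility = {"good": 1.0, "bad": 0.0}

print(edt_choice(joint, utility))  # -> "left"
# If the agent nonetheless goes right, the credences above (P(left) = 0.99) were
# epistemically unsound -- that is the world model's fault, not the decision rule's.
```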

AI catastrophes and rogue deployments
Vivek Hebbar · 1y

The idea of dividing failure stories into "failures involving rogue deployments" and "other failures" seems most useful if the following argument goes through:
1. Catastrophes require a very large (superhuman?) quantity and/or quality of intellectual labor
2. Either this labor is done by AIs in approved scaffolds, or it is done in "rogue deployments"
3. Hence the only easy-by-default disaster route is through a rogue deployment
4. Hence if we rule out rogue deployments and very impressive/difficult malicious labor in our scaffolds, we are safe

Premise 1 seems true for most catastrophes which only need outsider affordances (such as "order the proteins etc. for a pandemic or a nanofactory").  Catastrophes that need only outsider affordances can usually be presumed difficult, because rogue humans don't randomly end the world.

However, this argument for premise 1 doesn't work for routes which use insider affordances or responsibilities, nor for routes which go through work done by future generations of AIs.  The central case of both is using insider privileges to backdoor or intentionally misalign future AIs.  For example:
- Planting backdoors in the codebase which allow safety training code to be turned off or misconfigured at a later time
- Poisoning datasets (idk if insider access helps though)
- Intentionally creating and defending flawed alignment plans

These routes get future generations of AIs to do the difficult work required for the final catastrophe.  But they aren't necessarily rogue deployments.  Consider a backdoored future AI running only in authorized scaffolds, where the safety case for the future AI was an alignment case (not a control case).  The alignment case fails due to the subversive actions of the earlier AI.  So the earlier AI has caused a catastrophe without doing the difficult work of directly taking over the world, and also without a rogue deployment.

One could separately argue that these routes are also "fundamentally hard" (even if not as hard as directly causing a catastrophe), but I don't see a clear blanket reason.

AXRP Episode 27 - AI Control with Buck Shlegeris and Ryan Greenblatt
Vivek Hebbar · 1y

This Google search seems to turn up some interesting articles (like maybe this one, though I've just started reading it).

Posts

  • When does training a model change its goals? (40 points, 4mo, 0 comments)
  • Political sycophancy as a model organism of scheming (26 points, 5mo, 0 comments)
  • How can we solve diffuse threats like research sabotage with AI control? (33 points, 6mo, 0 comments)
  • How training-gamers might function (and win) (56 points, 6mo, 3 comments)
  • Different senses in which two AIs can be “the same” (37 points, 1y, 1 comment)
  • Infinite-width MLPs as an "ensemble prior" (16 points, 2y, 0 comments)
  • Vivek Hebbar's Shortform (3 points, 3y, 2 comments)
  • Path dependence in ML inductive biases (36 points, 3y, 5 comments)
  • Hessian and Basin volume (14 points, 3y, 7 comments)
  • [Short version] Information Loss --> Basin flatness (9 points, 3y, 0 comments)

Wikitag Contributions

  • Transformers (4 years ago)