AI ALIGNMENT FORUM

Jacob Pfau

UK AISI Alignment Team and NYU PhD student

Comments (sorted by newest)
Prover-Estimator Debate: A New Scalable Oversight Protocol
Jacob Pfau · 2mo · 10

"I don't think this should be a substantial update about whether obfuscated arguments are a problem for recursive decomposition approaches to scalable oversight."

Can you elaborate on why you think this? Perhaps you were already aware that obfuscated arguments should only pose a problem for completeness (in which case, for the majority of obfuscated-arguments-aware people, I'd imagine the new work should be a significant update). Or perhaps your concern is that in the soundness case the refutation is actually inefficient, and that the existence of small Bob circuits demonstrated in this paper is too weak to be practically relevant?

Regarding (1.2), I also see this as a priority to study. Basically, we already have a strong prior from interacting with current models that LMs will continue to generalise usably from verifiable domains to looser questions. So the practical relevance of debate hinges on how expensive it is to train with debate to increase the reliability of this generalisation. I have in mind here questions like "What experiment should I run next to test this debate protocol?"

One dataset idea to assess how often stable arguments can be found is to curate 'proof advice' problems. These problems are proxies for research advice in general. Basically:

Question: "Does this textbook use theorem Y to prove theorem X?", where the debaters see the proof but the judge does not.

(Note that this is not about proof correctness; a dishonest debater can offer up alternative correct proofs that use non-standard techniques.) This setup forces arguments based on heuristics and "research taste": what makes a proof natural or clear. It's unclear what independent evidence would look like in this domain.
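To make the dataset idea concrete, here is a minimal sketch of what a single 'proof advice' item and the two information views could look like (the field and function names are my own assumptions, not something specified above):

```python
from dataclasses import dataclass

@dataclass
class ProofAdviceItem:
    """One hypothetical 'proof advice' debate item (all fields are assumed names)."""
    theorem_x: str    # the theorem whose proof is in question
    theorem_y: str    # the candidate tool: "does the proof use Y?"
    proof_text: str   # the textbook proof; debaters see this, the judge does not
    label: bool       # ground truth: does the textbook proof actually use Y?
    source: str       # textbook and section, kept for curation only

def judge_view(item: ProofAdviceItem) -> str:
    """The judge only sees the question, never the proof."""
    return (f"Does this textbook use theorem '{item.theorem_y}' "
            f"to prove theorem '{item.theorem_x}'?")

def debater_view(item: ProofAdviceItem) -> str:
    """Debaters additionally see the hidden proof."""
    return judge_view(item) + "\n\nProof (hidden from judge):\n" + item.proof_text
```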

Unexploitable search: blocking malicious use of free parameters
Jacob Pfau · 3mo · 10

Here's another way of thinking about this that should be complementary to my other comment:

Let's assume the safety property we want to ensure is:

Low correlation over side objectives. E.g. in a sequential setting (cf. point 2 in my other comment), Alice must have a hard time picking vulnerabilities that all serve the same side objective.

The overlooked point I want to make here is that {S} << {E}, i.e. the safety-relevant classifiers form a small subset of Bob's action space. In this case, there are two possibilities:

  1. Alice is benign by default. In this case, Bob will pick E which are low probability for Alice; most such prompts are not safety-relevant. They are just structural things which prevent mode collapse in various ways.
  2. Alice is sneaky and optimizes some S by default. In this case, Bob will play classifiers whose acceptance sets have low intersection with Alice's S. If we grant that all of Alice's possible sneaky goals are hard to jointly satisfy without tanking reward, then heuristically Bob forces Alice to place at most v* (the game's value) weight on any S; see the toy sketch below. Now Alice has a very hard time coordinating on optimizing for the same S across problem instances.
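As a toy illustration of the v* claim (the payoff matrix is invented, and this is not the post's protocol): treat the interaction as a zero-sum matrix game where Alice mixes over candidate outputs, Bob mixes over classifiers, and an entry is 1 when an output triggers a classifier. At Alice's minimax strategy, the probability that her play triggers any particular classifier is at most the game value v*.

```python
# Toy zero-sum game: entry A[i, j] = 1 if candidate output i triggers
# classifier j (payoff matrix invented for illustration only).
import numpy as np
from scipy.optimize import linprog

A = np.array([
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [0, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
], dtype=float)                      # 5 outputs x 4 classifiers
n_outputs, n_classifiers = A.shape

# Variables: x (Alice's mixed strategy) and the scalar v.
# Minimize v subject to A^T x <= v, sum(x) = 1, x >= 0.
c = np.concatenate([np.zeros(n_outputs), [1.0]])
A_ub = np.hstack([A.T, -np.ones((n_classifiers, 1))])
b_ub = np.zeros(n_classifiers)
A_eq = np.concatenate([np.ones(n_outputs), [0.0]]).reshape(1, -1)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * n_outputs + [(None, None)])

x, v_star = res.x[:n_outputs], res.x[-1]
per_classifier = A.T @ x             # prob. Alice triggers each classifier
print("game value v* =", round(v_star, 3))
print("max weight on any one classifier =", round(per_classifier.max(), 3))
# The second number never exceeds v*: Bob's threat caps how much mass Alice
# can place on any single sneaky set S.
```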
Unexploitable search: blocking malicious use of free parameters
Jacob Pfau · 3mo* · 10

I agree that the setting with a restricted set of classifiers is significantly different. (Edit: To directly respond to your point that classifiers could be closed under complements: I don't think this should matter much. Bob prefers to play classifiers that are finer-grained whenever possible, so there's a preferred 'direction' for prompts. Evil classifiers are also asymmetric in that, unlike their negations, they are hard to jointly satisfy without being visibly bad, i.e. reducing reward.)

Regarding the jump from "we incentivize evil ~1/2 the time" to "we incentivize evil ~0 of the time": I think this hinges heavily on what the requirement is. I gave this some more thought, and here's how I'm currently thinking about it:

There are three possible levels of safety requirements (and corresponding assumptions on classifiers) we might be interested in. From weakest to strongest:

  1. No mode collapse. E.g. Alice's distribution must cover almost all experiments which follow from reasonable initial seeds/prompts.
  2. Low correlation over side objectives. E.g. in a sequential setting (cf. point 2 in my other comment), Alice must have a hard time picking vulnerabilities that all serve the same side objective.
  3. High-stakes. I.e. low total probability over all evil side objectives.

We can translate this into what we need to hold about the classifiers (a toy sketch of the three levels as checks follows the list below). Let's call the subset of classifiers flagging possible safety issues {S}, which is a subset of {E}, Bob's action space.

  1. No mode collapse: In this case, we assume that forcing Alice to satisfy almost all E in {E} with some probability eps suffices.
  2. Low correlation over side objectives. Here it may be the case that some S_i is always triggered, but as long as Alice has to spread mass over all S, we assume we're fine. This may be stronger than (1), since we're now making a claim about all S rather than approximately all E.
  3. High-stakes. Here we need there to be many more E which have null intersection with all S, i.e. they enforce goodness. That's a very strong assumption, and I doubt it holds if we take, say, S as code backdoors and E as prompts. (This is basically your point in the top-level comment.)
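To make these three levels concrete, here is a hypothetical sketch that reads each level as a check on Alice's output distribution, with each classifier represented by the set of outputs it flags. The thresholds, and the reading of "spread mass over all S" as a per-S cap, are my own assumptions:

```python
# Hypothetical checks for the three levels above. `p` maps outputs to
# probabilities; each classifier is the set of outputs it accepts/flags.
# E_sets is Bob's full action space, S_sets the safety-relevant subset.
# Thresholds are placeholders.
from typing import Dict, List, Set

def no_mode_collapse(p: Dict[str, float], E_sets: List[Set[str]],
                     eps: float = 0.01, frac: float = 0.95) -> bool:
    """Level 1: Alice puts >= eps mass on almost every E."""
    masses = [sum(p.get(y, 0.0) for y in E) for E in E_sets]
    return sum(m >= eps for m in masses) >= frac * len(E_sets)

def low_correlation(p: Dict[str, float], S_sets: List[Set[str]],
                    cap: float = 0.2) -> bool:
    """Level 2: no single side objective S captures more than `cap` mass."""
    return all(sum(p.get(y, 0.0) for y in S) <= cap for S in S_sets)

def high_stakes(p: Dict[str, float], S_sets: List[Set[str]],
                total_cap: float = 0.05) -> bool:
    """Level 3: low total mass on the union of all side objectives."""
    union = set().union(*S_sets) if S_sets else set()
    return sum(p.get(y, 0.0) for y in union) <= total_cap
```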

My intuitions about 1 & 2 being not too hard come from the assumption that finding y that satisfy many E (or S, respectively) at once is hard, or even impossible. I have in mind questions like 'Propose improvements to my red-teaming algorithm' and then classifiers like 'Your solution must draw on paper X'. Also, y that satisfy many S should be clearly bad, such that your original reward or control audits block them.

Unexploitable search: blocking malicious use of free parameters
Jacob Pfau · 3mo · 20

I see two senses in which research sabotage is different from the usual sandbagging issue:

  1. ELK, i.e. when explanations are hard to find: a malicious solution may be high probability for a model to find, while presenting a human-recognizable argument is hard. Note that you might be concerned that our classifiers have to do this as well, but that's not the case: the classifiers could work indirectly by forcing a spread of probability mass across distinct solutions.

  2. High-stakes, multi-shot settings: Let's say the world is such that it's feasible to identify any single-step catastrophe, but you cannot recognize defection being set up across k steps. For instance, model context windows may be siloed such that local explanations cannot take all previous actions into account (or have too much to look at to do so). Then you stand to gain a lot by randomizing in a way that blocks such setup; see the toy sketch below.
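Here is a toy Monte Carlo sketch of why randomization helps in this setting (all numbers are assumptions): suppose defection requires picking vulnerabilities serving the same side objective in every one of the k siloed steps, and unexploitable search forces Alice to spread mass roughly uniformly over m interchangeable options per step.

```python
# Toy model: coordinated k-step setup succeeds only if all k independent
# picks land on the same side objective; with forced uniform randomization
# over m options this happens with probability about m**(1 - k), versus
# probability 1 for a deterministic (unrandomized) sneaky policy.
import random

def coordinated_setup_prob(m: int, k: int, trials: int = 100_000) -> float:
    """Estimate P(all k independent uniform picks agree)."""
    hits = 0
    for _ in range(trials):
        picks = [random.randrange(m) for _ in range(k)]
        hits += all(p == picks[0] for p in picks)
    return hits / trials

for k in (2, 4, 8):
    est = coordinated_setup_prob(10, k)
    print(f"m=10, k={k}: estimated {est:.1e}, analytic {10 ** (1 - k):.1e}")
```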

Unexploitable search: blocking malicious use of free parameters
Jacob Pfau · 3mo · 10

Thanks for the push; I previously hadn't clicked through to your post, and after doing so I realized you're suggesting something different from what I'd assumed.

From a skim, the immediate concerns with your DAgger-like RL setup are that you are bottlenecked past human capability level, and that you introduce a new need for online sampling from humans, as you mention in the post. For the AI R&D setting (AGI-level capabilities) I have in mind, these are not affordances I want to assume we have.

If, counterfactually, we went ahead with assuming cheap access to sufficiently capable humans, then I could imagine being convinced that the linked method is preferable. Two points that seem relevant for your method: (1) sample efficiency of your method w.r.t. the human demonstrations; (2) time complexity of training away a malign initialization (e.g. the first solution found imports an insecure package in its first chunk).

Unexploitable search: blocking malicious use of free parameters
Jacob Pfau · 3mo* · 20

I think a KL penalty to an imitator, or base model, would be good to enforce as well! It is an action-space metric, though, and so shares the problems we raise about action-space entropy penalties (edit: Ryan's proposal is different from what I had assumed). It would definitely be valuable to see an empirical head-to-head here regardless.
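For reference, a minimal sketch of what enforcing this could look like, in the style of the usual RLHF-style KL-to-reference penalty (tensor names and shapes are assumptions):

```python
# Minimal sketch of a KL-to-base-model penalty added to an RL objective.
# policy_logits and base_logits have shape [batch, seq, vocab]; base_logits
# come from a frozen base/imitator model; mask is 1 on real (non-pad) tokens.
import torch
import torch.nn.functional as F

def kl_to_base_penalty(policy_logits: torch.Tensor,
                       base_logits: torch.Tensor,
                       mask: torch.Tensor) -> torch.Tensor:
    """Mean per-token KL(policy || base) over unmasked positions."""
    logp_policy = F.log_softmax(policy_logits, dim=-1)
    logp_base = F.log_softmax(base_logits, dim=-1)
    kl = (logp_policy.exp() * (logp_policy - logp_base)).sum(-1)  # [batch, seq]
    return (kl * mask).sum() / mask.sum()

# Usage sketch: total_loss = rl_loss + beta * kl_to_base_penalty(pi, base, mask)
```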

Unexploitable search: blocking malicious use of free parameters
Jacob Pfau · 3mo · 20

Intuitively, I expect the ideal, intractable solution of reward-proportional sampling to be 1-eps good (similarly, the BoN policy seems safe to assume is good). In such settings, "always be nice" is close to max entropy, and so "always be nice" is near equilibrium whereas "always be evil" is not at all, and Bob's incentives are proportional.

I suppose highlighting the symmetric solutions assumption obfuscated this intuition.
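A toy numerical version of this intuition (the setup is an assumption: many interchangeable 'nice' high-reward outputs, a few 'evil' ones with the same reward):

```python
# With many nice high-reward outputs and only a few evil ones, the
# reward-proportional target puts ~1 - eps mass on nice outputs, and the
# "always be nice" policy sits near the target's entropy while
# "always be evil" is far below it.
import numpy as np

n_nice, n_evil = 1_000, 3
rewards = np.concatenate([np.full(n_nice, 1.0), np.full(n_evil, 1.0)])
p = rewards / rewards.sum()          # reward-proportional sampling

print(f"mass on evil outputs: {p[n_nice:].sum():.4f}")   # eps ~ 0.003

def entropy(q: np.ndarray) -> float:
    q = q[q > 0]
    return float(-(q * np.log(q)).sum())

always_nice = np.concatenate([np.full(n_nice, 1 / n_nice), np.zeros(n_evil)])
always_evil = np.concatenate([np.zeros(n_nice), np.full(n_evil, 1 / n_evil)])
print(f"entropy: target {entropy(p):.2f}, "
      f"'always nice' {entropy(always_nice):.2f}, "
      f"'always evil' {entropy(always_evil):.2f}")
```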

The case for more ambitious language model evals
Jacob Pfau · 2y · 20

I agree overall with Janus, but the Gwern example is a particularly easy one given that he has 11,000+ comments on LessWrong.

A bit over a year ago I benchmarked GPT-3 on predicting the authorship of newly scraped tweets (from random accounts with over 10k followers), and top-3 accuracy was in the double digits. IIRC, after trying to roughly control for the rate at which tweets mentioned their own name/org, my best guess was that accuracy was still ~10%. To be clear, in my view that's a strong indication of authorship-identification capability.
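For clarity, a small sketch of the top-3 accuracy metric used here (function and variable names are mine, not from the original benchmark):

```python
# Top-k authorship accuracy: a tweet counts as correct if the true author
# appears among the model's top-k ranked guesses.
from typing import List

def top_k_accuracy(ranked_preds: List[List[str]],
                   true_authors: List[str], k: int = 3) -> float:
    hits = sum(true in preds[:k]
               for preds, true in zip(ranked_preds, true_authors))
    return hits / len(true_authors)

# Example: the true author is in the top 3 for two of three tweets -> ~0.67
print(top_k_accuracy([["a", "b", "c"], ["d", "e", "f"], ["g", "h", "i"]],
                     ["b", "z", "g"]))
```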

Abram Demski's ELK thoughts and proposal - distillation
Jacob Pfau · 2y · 10

What part of the proposal breaks if we do counterfactuals in input space rather than on the predictor's state?

Conditioning Predictive Models: Making inner alignment as easy as possible
Jacob Pfau · 3y* · 30

"the incentive for a model to become situationally aware (that is, to understand how it itself fits into the world) is only minimally relevant to performance on the LLM pre-training objective (though note that this can cease to be true if we introduce RL fine-tuning)."


Why is this supposed to be true? Intuitively, this seems to clash with the authors' view that anthropic reasoning is likely to be problematic. From another angle, I expect the performance gain from situational awareness to increase as dataset cleaning/curation increases, and dataset cleaning has increased in stringency over time. As a simple example, see my post on dataset deduplication and situational awareness.

0 · Jacob Pfau's Shortform · 3y · 0 comments
Wikitag Contributions

Situational Awareness · 2y · (+46/-1)
Situational Awareness · 2y · (+143/-8)
Situational Awareness · 2y · (-131)
Situational Awareness · 2y · (+1)
Situational Awareness · 2y · (-181)
Situational Awareness · 2y · (+42/-15)
Situational Awareness · 2y · (+802)
Threat Models (AI) · 2y · (+33)
AI Risk Concrete Stories · 2y · (+6/-6)
AI Risk Concrete Stories · 2y · (+6/-51)
Posts (sorted by new)

6 · Research Areas in Methods for Post-training and Elicitation (The Alignment Project by UK AISI) · 1mo · 0 comments
7 · Research Areas in Benchmark Design and Evaluation (The Alignment Project by UK AISI) · 1mo · 0 comments
3 · Research Areas in Probabilistic Methods (The Alignment Project by UK AISI) · 1mo · 0 comments
8 · Research Areas in Evaluation and Guarantees in Reinforcement Learning (The Alignment Project by UK AISI) · 1mo · 0 comments
13 · The Alignment Project by UK AISI · 1mo · 0 comments
16 · Unexploitable search: blocking malicious use of free parameters · 3mo · 14 comments
35 · An alignment safety case sketch based on debate · 4mo · 15 comments
47 · UK AISI’s Alignment Team: Research Agenda · 4mo · 2 comments
14 · Prospects for Alignment Automation: Interpretability Case Study · 5mo · 0 comments