Evan Hubinger

Learning the prior and generalization

Fair enough. In practice you still want training to also be from the same distribution because that's what causes your validation performance to be high. (Or put differently, training/validation i.i.d. is about capabilities, and validation/deployment i.i.d. is about safety.)

Yep—agreed.

This seems to rely on an assumption that "human is convinced of X" implies "X"? Which might be fine, but I'm surprised you want to rely on it.

I'm curious what an algorithm might be that leverages this relaxation.

Well, I'm certainly concerned about relying on assumptions like that, but that doesn't mean there aren't ways to make it work. Approaches like debate and approval-based amplification already rely on very similar assumptions—for example, for debate to work it needs to be the case that being convinced of X at the end of the debate implies X. Thus, one way to leverage this relaxation is just to port those approaches over to this setting. For example, you could train via debate over what the human would do if they could access the entirety of the latent prior information, then randomly do full debate rollouts during deployment. Like I mention in the post, this still just gives you average-case guarantees, not worst-case guarantees, though average-case guarantees are still pretty good and you can do a lot with them if you can actually get them.
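To make the deployment-time spot-checking concrete, here is a minimal sketch of the loop I have in mind. It is purely illustrative: `model`, `run_full_debate`, and the check probability are stand-ins for whatever trained model, ground-truth process, and checking budget you actually have.

```python
import random

CHECK_PROB = 0.01  # fraction of deployment queries that get a full debate rollout

def answer_query(query, model, run_full_debate):
    """Answer a deployment query, randomly verifying it with a full debate rollout.

    `model` maps a query to an answer; `run_full_debate` stands in for the
    expensive ground-truth process (e.g. a full debate about what the human
    would conclude given access to all of the latent prior information).
    """
    answer = model(query)
    if random.random() < CHECK_PROB:
        verified_answer = run_full_debate(query)
        if verified_answer != answer:
            # A mismatch is evidence the model is unreliable (or deceptive),
            # so stop trusting it rather than silently continuing.
            raise RuntimeError(f"Model failed verification on {query!r}")
    return answer
```

Because the checks are triggered by a coin flip the model can't predict, a policy that gives bad answers at some rate gets caught at a proportional rate, which is exactly the average-case (rather than worst-case) flavor of guarantee described above.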

Learning the prior and generalization

Alright, I think we're getting closer to being on the same page now. I think it's interesting to think about what the relaxation from requiring training/validation/deployment to be i.i.d. to just requiring that validation/deployment be i.i.d. gets us, though. First, it's an argument that we shouldn't be that worried about whether the training data is i.i.d. relative to the validation/deployment data. Second, it opens the door to an even further relaxation, which is that you can do the validation while looking at the model's output. That is, if the model can provide a justification which convinces you that your ground truth generator would have produced the same output, that's just as good as actually checking against the ground truth. At that point, though, it really stops looking anything like the standard i.i.d. setup, which is why I'm hesitant to just call it “validation/deployment i.i.d.” or something.

Learning the prior and generalization

Sure, but at the point where you're randomly deciding whether to collect ground truth for a data point and check the model on it (that is, put it in the validation data) or collect new data using the model to make predictions, you have verifiability. Importantly, though, you can get verifiability without doing that—including if the data isn't actually i.i.d. (e.g. there are detectable differences between training and deployment data) but you still decide randomly whether to check the model's output against some ground truth. In either situation, part of the point I'm making is that the safety benefits are coming from the verifiability part, not the i.i.d. part—even in the simple example of i.i.d.-ness giving you validation data, what matters is that the validation and deployment data are i.i.d. (because that's what gives you verifiability), not whether the training and validation/deployment data are i.i.d.

Inner Alignment: Explain like I'm 12 Edition

This is great—thanks for writing this! I particularly liked your explanation of deceptive alignment, with the diagrams laying out the different setups. Some comments, however:

(These models are called the training data and the setting is called supervised learning.)

Should be “these images are.”

Thus, there is only a problem if our way of obtaining feedback is flawed.

I don't think that's right. Even if the feedback mechanism is perfect, if your inductive biases are off, you could still end up with a highly misaligned model. Consider, for example, Paul's argument that the universal prior is malign—that's a setting where the feedback is perfect but you still get malign optimization because the prior is bad.

For proxy alignment, think of Martin Luther King.

The analogy is meant to be to the original Martin Luther, not MLK.

If we further assume that processing input data doesn't directly modify the model's objective, it follows that representing a complex objective via internalization is harder than via "modelling" (i.e., corrigibility or deception).

I'm not exactly sure what you're trying to say here. The way I would describe this is that internalization requires an expensive duplication where the objective is represented separately from the world model despite the world model including information about the objective.

Learning the prior and generalization

Fwiw, it took me a few re-reads to realize you were just arguing for the no-free-lunch theorem -- I initially thought you were arguing "since there is no 'true' distribution for a dataset, datasets can never be i.i.d., and so the theorems never apply in practice".

Hmmm... I'll try to edit the post to be more clear there.

How does verifiability help with this problem?

Because rather than just relying on the i.i.d. setting itself to give us the guarantees that we want, we're forcing the guarantees to hold by actually checking the model's predictions at random. From the perspective of a deceptive model, knowing that its predictions will just be trusted because the human thinks the data is i.i.d. gives it a lot more freedom than knowing that its predictions will actually be checked at random.
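As a toy illustration of the average-case flavor of this guarantee, suppose each deployment output is independently checked with probability $p$, and assume (a strong assumption) that a check reliably catches a bad output. Then the number of bad outputs the model gets away with before its first catch is geometrically distributed, so its expectation is $\frac{1-p}{p} \le \frac{1}{p}$, no matter how rarely the model chooses to defect. A deceptive model can trade off frequency of defection against time until it is caught, but it can't escape that bound, whereas if its outputs are simply trusted there is no bound at all.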

Perhaps you'd say, "with verifiability, the model would 'show its work', thus allowing the human to notice that the output depends on RSA-2048, and so we'd see that we have a bad model". But this seems to rest on having some sort of interpretability mechanism

There's no need to invoke interpretability here—we can train the model to give answers + justifications via any number of different mechanisms including amplification, debate, etc. The point is just to have some way to independently check the model's answers to induce i.i.d.-like guarantees.

Learning the prior and generalization

First off, I want to note that it is important that datasets and data points do not come labeled with "true distributions" and you can't rationalize one for them after the fact. But I don't think that's an important point in the case of i.i.d. data.

I agree—I pretty explicitly make that point in the post.

Why not just apply the no-free-lunch theorem? It says the same thing. Also, why do we care about this? Empirically the no-free-lunch theorem doesn't matter, and even if it did, I struggle to see how it has any safety implications -- we'd just find that our ML model is completely unable to get any validation performance and so we wouldn't deploy.

I agree that this is just the no-free-lunch theorem, but I generally prefer explaining things fully rather than just linking to something else, so that the text can be understood on its own.
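For reference, the rough statement I have in mind is Wolpert's version: for a finite input space $X$, output space $Y$, and any two learning algorithms $A$ and $B$, averaging uniformly over all possible labelings $f \in Y^X$ makes their expected off-training-set error identical,

$$\frac{1}{|Y^X|}\sum_{f \in Y^X} \mathbb{E}\big[\mathrm{err}_{\mathrm{OTS}}(A; f, D)\big] = \frac{1}{|Y^X|}\sum_{f \in Y^X} \mathbb{E}\big[\mathrm{err}_{\mathrm{OTS}}(B; f, D)\big],$$

where $D$ is the training set and $\mathrm{err}_{\mathrm{OTS}}$ is error on points outside $D$. So any generalization you actually observe has to come from your prior matching the real data-generating process, not from the learning procedure alone.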

The reason I care, though, is that if the performance is coming from the implicit ML prior, then a malign prior can give you malign optimization even in the i.i.d. case.

I don't really see this. My read of this post is that you introduced "verifiability", argued that it has exactly the same properties as i.i.d. (since i.i.d. also gives you average-case guarantees), and then claimed it's better specified than i.i.d. because... actually I'm not sure why, but possibly because we can never actually get i.i.d. in practice?

That's mostly right, but the point is that not being able to get i.i.d. in practice matters because it means you can't get good guarantees from it—whereas I think you can get good guarantees from verifiability.

If that's right, then I disagree. The way in which we lose i.i.d. in practice is stuff like "the system could predict the pseudorandom number generator" or "the system could notice how much time has passed" (e.g. via RSA-2048). But verifiability has the same obstacles and more, e.g. you can't verify your system if it can predict which outputs you will verify, you can't verify your system if it varies its answer based on how much time has passed, you can't verify your system if humans will give different answers depending on random background variables like how hungry they are, etc. So I don't see why verifiability does any better.

I agree that you can't verify your model's answers if it can predict which outputs you will verify (though I don't think getting true randomness will actually be that hard)—but the others are notably not problems for verifiability despite being problems for i.i.d.-ness. If the model gives answers where the tree of reasoning supporting those answers depends on how much time has passed or how hungry the human was, then the idea is to reject those answers. The point is to produce a mechanism that allows you to verify justifications for correct answers to the questions that you care about.
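As a rough sketch of what I mean by rejecting such answers (purely illustrative; `verify_step` stands in for whatever independent check you actually have, e.g. human judgment, amplification, or debate):

```python
def accept_answer(answer, justification_steps, verify_step):
    """Accept an answer only if every step of its supporting justification checks out.

    `justification_steps` is the model's tree of reasoning flattened into a
    list of steps; `verify_step` is a stand-in for whatever independent
    checking procedure is available. If any step fails -- for example because
    it turns out to hinge on irrelevant context like how much time has passed
    or how hungry the evaluator is -- the whole answer is rejected.
    """
    if all(verify_step(step) for step in justification_steps):
        return answer
    return None  # reject and fall back to collecting ground truth instead
```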

Developmental Stages of GPTs

Here are 11 such proposals. I wouldn't personally assign greater than 50/50 odds to any of them working, but I do think they all pass the threshold of “could possibly, possibly, possibly work.” It is worth noting that only some of them are language modeling approaches—though they are all prosaic ML approaches—so how many of them count does depend somewhat on your definition of “GPT-style.”

Developmental Stages of GPTs

I feel like it's one reasonable position to call such proposals non-starters until a possibility proof is shown, and instead work on basic theory that will eventually be able to give more plausible basic building blocks for designing an intelligent system.

I agree that deciding to work on basic theory is a pretty reasonable research direction—but that doesn't imply that other proposals can't possibly work. Thinking that a research direction is less likely to mitigate existential risk than another is different from thinking that a research direction is entirely a non-starter. The second requires significantly more evidence than the first, and it doesn't seem to me that the points you referenced cross that bar, though of course that's a subjective distinction.

Developmental Stages of GPTs

As I understand it, the high level summary (naturally Eliezer can correct me) is that (a) corrigible behaviour is very unnatural and hard to find (most nearby things in mindspace are not in equilibrium and will move away from corrigibility as they reflect / improve), and (b) using complicated recursive setups with gradient descent to do supervised learning is incredibly chaotic and hard to manage, and shouldn't be counted on working without major testing and delays (i.e. could not be competitive).

Perhaps Eliezer can interject here, but it seems to me like these are not knockdown criticisms that such an approach can't “possibly, possibly, possibly work”—just reasons that it's unlikely to and that we shouldn't rely on it working.

Deceptive Alignment

I talk about this a bit here, but basically if you train huge models for a short period of time, you're really relying on your inductive biases to find the simplest model that fits the data—and mesa-optimizers, especially deceptive mesa-optimizers, are quite simple, compressed policies.
