Tom Davidson

Comments

Let's say future post-training involves training in five different RL environments one after the other, but the order doesn't matter for capabilities. It's possible that the AI picks up its goals from the first environment and then plays the training game from there.

 

Then intentionally mix up the order for subsequent generations. 

 

You could do this today with pre-training (e.g. randomly shuffling the data order each time), and maybe it's already happening.

 

(Tbc, I haven't thought about whether this would work in practice, but my inside view is that it seems worth thinking more about.)

One other thing you don't discuss here, but which affects the AI's decision to take the deal, is that we can purposefully mix up the training data, and the order of the training data, in future generations of AI to make it less likely that successive generations of misaligned AI are aligned with each other.

 

Again, to me this seems like a pretty simple intervention that could potentially shift the AI's calculus significantly when deciding whether to cooperate with humans. 

One other comment I'll throw out there is that my current hot take is that the case for doing this is extremely strong given the possibility of schemers that have DMR (diminishing marginal returns)/bounded utility in resources at universe-wide scales.

 

It's just super cheap (as a fraction of total future resources) to buy off such schemers, and it seems like leaving massive value on the table not to credibly offer the deals that would do so.

 

There's a chance that all schemers have DMR of this kind, and credible offers would incentivise them all to cooperate.

One option you don't discuss, and that seems important, is to make our offer of payment CONDITIONAL on the AI's values.
 

For example:
- We pay AIs way more if they are linear in resources, as then they'll need greater payment to incentivise cooperation.
- We pay AIs only a small amount if they have DMR in resources, as then they need only a small payment to incentivise cooperation.


This way we don't waste money offering big returns to a satisficing AI.

If the AI agrees to the deal, we (in consultation with the AI) train it on documents about the deal.

 

I'd want to do more to highlight the benefits of unilaterally offering AIs a deal. 

You can just say: "Look, if post-singularity we later see that you didn't take opportunities to sabotage research, or we see that you tried harder to solve alignment than you naturally would have given your actual values, we'll pay you handsomely."

 

That way, the AI doesn't have to tip its hand and reveal its own misalignment (giving humans info that they could potentially exploit) to engage in the deal. The AI could trade with much lower downside risk.

 

This seems like it could be a big effect. 

 

(And it links to your later comments about private info:
 

  1. Private information: AIs and humans each know things about their capabilities and attitudes to risk that the other doesn’t, and have incentives to misrepresent such private information in order to gain a better deal. In particular, the AI has incentive not to disclose any information whatsoever before becoming confident that we wouldn’t exploit this in order to worsen its bargaining position, to an extent where it could be disincentivized from entering into any negotiations with us. )

In practice, we expect the majority of the donation to be invested in escrow until we’ve reached civilizational stability and are able to retrospectively adjudicate the AI’s cooperation, which is generally preferable to AIs without high temporal discount rates.

 

I don't think you sufficiently stress the benefits of paying the AIs early in this piece.

 

If the AI has a time discount, or uncertainty about the payment mechanism holding over time, getting the payout sooner can make a big difference. And early payment builds trust in the whole scheme.

 

So we should allow the AI to spend the money earlier as long as:
1) humans can validate that the AI helped us out (e.g. if it proves that it's misaligned)
2) humans confirm the spending is not harmful

Yeah, you can get into other fancy tricks to defend it like:

  • Input-specific technological progress. Even if labour has grown more slowly than capital, maybe the 'effective labour supply' -- which includes tech that makes labour more productive (e.g. drinking caffeine, writing faster on a laptop) -- has grown as fast as capital.
  • Input-specific 'stepping on toes' adjustments. If capital grows at 10%/year and labour grows at 5%/year, but (effective capital) = capital^0.5 and (effective labour) = labour, then the growth rates of effective labour and effective capital are roughly equal (see the sketch below).
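As a quick numerical check of that second trick (a minimal sketch with illustrative growth rates and a hypothetical concave adjustment on capital, not figures from the discussion above):

```python
# Minimal sketch: can concave 'effective input' adjustments equalise the
# growth rates of effective labour and effective capital?
capital_growth = 0.10   # assumed raw capital growth, 10%/year
labour_growth = 0.05    # assumed raw labour growth, 5%/year

# Hypothetical stepping-on-toes adjustment on capital: effective capital = capital^0.5.
effective_capital_growth = (1 + capital_growth) ** 0.5 - 1   # ~4.9%/year
effective_labour_growth = labour_growth                      # labour left unadjusted

print(f"effective capital growth: {effective_capital_growth:.2%}")  # ~4.88%
print(f"effective labour growth:  {effective_labour_growth:.2%}")   # 5.00%
# The effective inputs grow at roughly equal rates, so neither would become
# a hard CES bottleneck even though raw capital outpaces raw labour.
```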

Thanks, this is a great comment.

 

I buy your core argument against the main CES model I presented. Your key argument ("the relative quantity of labour vs compute has varied by OOMs; if CES were true, then one input would have become a hard bottleneck; but it hasn't") is pretty compelling as an objection to the simple naive CES I mention in the post. It updates me even further towards thinking that, if you use this naive CES, you should have ρ > -0.2. Thanks!

 

The core argument is less powerful against a more realistic CES model that replaces 'compute' with 'near-frontier-sized experiments'. I'm less sure how strong it is against this more plausible version of the CES, where rather than inputs of cognitive labour and compute we have inputs of cognitive labour and the number of near-frontier-sized experiments. (I discuss this briefly in the post.) I.e. if a lab has total compute C_t, its frontier training run takes C_f compute, and we say that a 'near-frontier-sized' experiment uses 1% as much compute as training a frontier model, then the number of near-frontier-sized experiments that the lab could run equals E = 100 * C_t / C_f.
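As a minimal sketch of that definition (made-up compute numbers, purely for illustration):

```python
C_t = 1e6   # hypothetical total compute the lab has available (arbitrary units)
C_f = 2e5   # hypothetical compute for one frontier training run

# A 'near-frontier-sized' experiment is defined as 1% of a frontier run,
# so the number of such experiments the lab could run is:
E = 100 * C_t / C_f
print(E)  # 500.0
```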

With this formulation, it's no longer true that a key input has increased by many OOMs, which was the core of your objection (at least the part of your objection that was about the actual world rather than about hypotheticals -- I discuss hypotheticals below).

Unlike compute, E hasn't grown by many OOMs over the last decade. How has it changed? I'd guess it's gotten a bit smaller over the past decade as labs have scaled frontier training runs faster than they've scaled their total quantity of compute for running experiments. But maybe labs have scaled both at equal pace, especially as in recent years the size of pre-training has been growing more slowly (excluding GPT-4.5). 

So this version of the CES hypothesis fares better against your objection, because the relative quantity of the two inputs (cognitive labour and number of near-frontier experiments) has changed by less over the past decade. Cognitive labour inputs have grown by maybe 2 OOMs over the past decade, but the 'effective labour supply', adjusting for diminishing quality and stepping-on-toes effects, has grown by maybe just 1 OOM. With just a 1 OOM relative increase in cognitive labour, the CES function with ρ = -0.4 implies that compute will have become more of a bottleneck, but not a complete bottleneck such that more labour isn't still useful. And that seems roughly realistic.
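Here's a minimal sketch of that claim, using my own illustrative calibration (α = 0.5, ρ = -0.4, and a 1 OOM relative increase in effective labour; these aren't the post's exact numbers):

```python
def ces(L, E, rho=-0.4, alpha=0.5):
    # CES aggregate of cognitive labour L and near-frontier experiments E.
    return (alpha * L ** rho + (1 - alpha) * E ** rho) ** (1 / rho)

def labour_elasticity(L, E, eps=1e-6):
    # Share of marginal progress attributable to labour (numerical output elasticities).
    base = ces(L, E)
    dL = (ces(L * (1 + eps), E) - base) / (base * eps)
    dE = (ces(L, E * (1 + eps)) - base) / (base * eps)
    return dL / (dL + dE)

print(labour_elasticity(1.0, 1.0))    # ~0.50: neither input is a bottleneck
print(labour_elasticity(10.0, 1.0))   # ~0.28: experiments matter more, but extra labour still helps
```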

Minimally, the CES view predicts that AI companies should be spending less and less of their budget on compute as GPUs are getting cheaper. (Which seems very false.)

This version of the CES hypothesis also dodges this objection. AI companies need to spend much more on compute over time just to keep E constant and avoid compute becoming a bottleneck.

This model does make a somewhat crazy prediction where it implies that if you scale up compute and labor exactly in parallel, eventually further labor has no value. (I suppose this could be true, but seems a bit wild.)

Doesn't seem that wild to me? When we scale up compute we're also scaling up the size of frontier training runs; maybe past a certain point running smaller experiments just isn't useful (e.g. you can't learn anything from experiments using 1 billionth of the compute of a frontier training run); and maybe past a certain point you just can't design better experiments. (Though I agree with you that this is all unlikely to bite before a 10X speed up.)

 

consider a hypothetical AI company with the same resources as OpenAI except that they only employ aliens whose brains work 10x slower and for which the best researcher is roughly as good as OpenAI's median technical employee

Nice thought experiment. 

So the near-frontier-experiment version of the CES hypothesis would say that those aliens would be in a world where experimental compute isn't a bottleneck on AI progress at all: the aliens don't have time to write the code to run the experiments they have the compute for! And we know we're not in that world, because experiments are a real bottleneck on our pace of progress already: researchers say they want more compute! These hypothetical aliens would make no such requests. It may be a weird empirical coincidence that cognitive labour helps up to our current level but not that much further, but we can confirm it with the evidence of the marginal value of compute in our world.

But actually I do agree the CES hypothesis is pretty implausible here. More compute seems like it would still be helpful for these aliens: e.g. automated search over different architectures and running all experiments at large scale. And evolution is an example where the "cognitive labour" going into AI R&D was very very very minimal and still having lots of compute to just try stuff out helped. 

So I think this alien hypothetical is probably the strongest argument against the near-frontier experiment version of the CES hypothesis. I don't think it's devastating -- the CES-advocate can bite the bullet and claim that more compute wouldn't be at all useful in that alien world. 

(Fwiw I preferred the way you described that hypothesis before your last edit.)

 

You can also try to 'block' the idea of a 10X speed-up by positing a large 'stepping on toes' effect. If it's very important to do experiments in series, and experiments can't be sped up past a certain point, then experiments could still bottleneck progress. This wouldn't be about the quantity of compute being a bottleneck per se, so it avoids your objection. Instead the bottleneck is 'number of experiments you can run per day'. Mathematically, you could represent this by something like:

 AI progress per week = log(1000 + L^0.5 * E^0.5)

The idea is that there are ~linear gains to research effort initially, but past a certain point returns start to diminish increasingly steeply such that you'd struggle to ever realistically 10X the pace of progress. 
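A minimal sketch of that behaviour (L and E in arbitrary units; the constant 1000 is just the one from the formula above):

```python
import math

def progress_per_week(L, E):
    # Gains are roughly linear in L^0.5 * E^0.5 while that term is small
    # relative to 1000, then returns diminish sharply (logarithmically).
    return math.log(1000 + (L ** 0.5) * (E ** 0.5))

baseline = progress_per_week(1e3, 1e3)   # research effort comparable to the constant
scaled = progress_per_week(1e6, 1e6)     # 1000x more of both inputs
print(scaled / baseline)                 # ~1.8x: hard to ever 10X the pace of progress
```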

Ultimately, I don't really buy this argument. If you applied this functional form to other areas of science you'd get the implication that there's no point scaling up R&D past a certain point, which has never happened in practice. I also think the functional form underestimates how much you could improve experiment quality and how much you could speed up experiments. And you have to cherry-pick the constant so that we get a big benefit from going from the slow aliens to OAI-today, but limited benefit from going from today to ASI.

Still, this kind of serial-experiment bottleneck will apply to some extent, so it seems worth highlighting that it isn't affected by the main counterargument you made.

 

I don't think my treatment of initial conditions was confused

I think your discussion (and Epoch's discussion) of the CES model is confused as you aren't taking into account the possibility that we're already bottlenecking on compute or labor...

In particular, consider a hypothetical alternative world where they have the same amount of compute, but there is only 1 person (Bob) working on AI and this 1 person is as capable as the median AI company employee and also thinks 10x slower. In this alternative world they could also say "Aha, you see because ρ = -0.4, even if we had billions of superintelligences running billions of times faster than Bob, AI progress would only go up to around 4x faster!"

Of course, this view is absurd because we're clearly operating >>4x faster than Bob.

So, you need to make some assumptions about the initial conditions....

Consider another hypothetical world where the only compute they have is some guy with an abacus, but AI companies have the same employees they do now. In this alternative world, you could also have just as easily said "Aha, you see because ρ = -0.4, even if we had GPUs that could do 1e15 FLOP/s (far faster than our current rate of 1e-1 fp8 FLOP/s), AI progress would only go around 4x faster!"

My discussion does assume that we're not currently bottlenecked on either compute or labour, but I think that assumption is justified. It's quite clear both that labs want more high-quality researchers -- top talent has very high salaries, reflecting large marginal value-add -- and that researchers want more compute -- again reflecting large marginal value-add. So it seems clear that we're not strongly bottlenecked by just one of compute or labour currently. That's why I used α = 0.5, assuming that the elasticity of progress to both inputs is equal. (I don't think this is exactly right, but it seems in the right rough ballpark.)

 

I think your thought experiments about the world with just one researcher and the world with just an abacus are an interesting challenge to the CES function, but they don't imply that my treatment of initial conditions was confused.

 

I actually don't find those two examples very convincing as challenges to the CES, though. In both those worlds it seems pretty plausible that the scarce input would be a hard bottleneck on progress. If all you have is an abacus, then probably the value of the marginal AI researcher would be ~0, as they'd have no compute to use. And so you couldn't run the argument "ρ = -0.4 and so more compute won't help much", because in that world (unlike our world) it would be very clear that compute is a hard bottleneck to progress and cognitive labour isn't helpful. And similarly, in the world with just Bob doing AI R&D, it's plausible that AI companies would have ~0 willingness to pay for more compute for experiments, as Bob can't use the compute that he's already got; labour is the hard bottleneck. So again you couldn't run the argument based on ρ = -0.4, because that argument only works if neither input is currently a hard bottleneck.

Can you spell out more how "Automated empirical research on process-focused evaluation methods" helps us to automate conceptual alignment research?

 

I get that we could get much better at understanding model psychology and how models generalise.

 

But why does that mean we can now automate conceptual work like "is this untestable idea a good contribution to alignment"?

Yep, I think this is a plausible suggestion. Labs can plausibly train models that are very internally useful without being helpful-only, and could fine-tune models for evals on a case-by-case basis (and delete the weights after the evals).
