AI ALIGNMENT FORUM

Lukas Finnveden

Previously "Lanrian" on here. Research analyst at Redwood Research. Views are my own.

Sequences

Extrapolating GPT-N performance

Comments (sorted by newest)
Foom & Doom 1: “Brain in a box in a basement”
Lukas Finnveden · 10d · 30

To be clear: I'm not sure that my "supporting argument" above addressed an objection to Ryan that you had. It's plausible that your objections were elsewhere.

But I'll respond with my view.

If your argument is “brain-like AGI will work worse before it works better”, then sure, but my claim is that you only get “impressive and proto-AGI-ish” when you’re almost done, and “before” can be “before by 0–30 person-years of R&D” like I said.

Ok, so this describes a story where there's a lot of work to get proto-AGI and then not very much work to get superintelligence from there. But I don't understand the argument for thinking this is the case, vs. thinking that there's a lot of work to get proto-AGI and then also a lot of work to get superintelligence from there.

Going through your arguments in section 1.7:

  • "I think the main reason is what I wrote about the “simple(ish) core of intelligence” in §1.3 above."
    • But I think what you wrote about the simple(ish) core of intelligence in 1.3 is compatible with there being like (making up a number) 20 different innovations involved in how the brain operates, each of which gets you a somewhat smarter AI, each of which could be individually difficult to figure out. So maybe you get a few, you have proto-AGI, and then it takes a lot of work to get the rest.
      • Certainly the genome is large enough to fit 20 things.
      • I'm not sure if the "6-ish characteristic layers with correspondingly different neuron types and connection patterns, and so on" is complex enough to encompass 20 different innovations. Certainly seems like it should be complex enough to encompass 6.
    • (My argument above was that we shouldn't expect the brain to run an algorithm that is only useful once you have all 20 hypothetical components in place and does nothing beforehand, because it was found via local search, so each of the 20 things should be useful on its own.)
  • "Plenty of room at the top" — I agree.
  • "What's the rate limiter?" — The rate limiter would be to come up with the thinking and experimenting needed to find the hypothesized 20 different innovations mentioned above. (What would you get if you only had some of the innovations? Maybe AGI that's incredibly expensive. Or AGIs similarly capable as unskilled humans.)
  • "For a non-imitation-learning paradigm, getting to “relevant at all” is only slightly easier than getting to superintelligence"
    • I agree that there are reasons to expect imitation learning to plateau around human-level that don't apply to fully non-imitation learning.
    • That said...
      • For some of the same reasons that "imitation learning" plateaus around human level, you might also expect "the thing that humans do when they learn from other humans" (whether you want to call that "imitation learning" or "predictive learning" or something else) to slow down skill-acquisition around human level.
      • There could also be another reason why non-imitation-learning approaches could spend a long while in the human range. Namely: perhaps the human range is just pretty large, and so it takes a lot of gas to traverse. I think this is somewhat supported by the empirical evidence; see this AI Impacts page (discussed in this SSC post).
Foom & Doom 1: “Brain in a box in a basement”
Lukas Finnveden · 10d · 50

Prior to having a complete version of this much more powerful AI paradigm, you'll first have a weaker version of this paradigm (e.g. you haven't figured out the most efficient way to do the brain algorithm, etc.).

A supporting argument: Since evolution found the human brain algorithm, and evolution only does local search, the human brain algorithm must be built out of many innovations that are individually useful. So we shouldn't expect the human brain algorithm to be an all-or-nothing affair. (Unless it's so simple that evolution could find it in ~one step, but that seems implausible.)

Edit: Though in principle, there could still be a heavy-tailed distribution of how useful each innovation is, with one innovation producing most of the total value. (Even though the steps leading up to that were individually slightly useful.) So this is not a knock-down argument.
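A toy numerical illustration of that caveat (my own sketch; the 20-innovation setup, the multiplicative gains, and the Pareto parameters are made-up assumptions, not anything from the post): every innovation is individually useful, yet a single innovation can still account for most of the total improvement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assume 20 innovations, each multiplying capability by (1 + gain),
# with gains drawn from a heavy-tailed (Pareto) distribution.
n_innovations = 20
gains = 0.05 * rng.pareto(a=1.1, size=n_innovations)  # every gain > 0, i.e. individually useful

capability = np.cumprod(1 + gains)  # capability after each successive innovation

# Share of the total (log) improvement contributed by the single largest innovation.
log_gains = np.log1p(gains)
largest_share = log_gains.max() / log_gains.sum()

print(f"capability multiplier after all 20 innovations: {capability[-1]:.2f}")
print(f"share of total improvement from the single largest innovation: {largest_share:.0%}")
```

Depending on the draw, that share can be well over half, even though every step along the way was locally useful.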

Making deals with early schemers
Lukas Finnveden · 23d · 184

Thanks for writing this! I agree with most of it. One minor difference (which I already mentioned to you) is that, compared to what you emphasize in the post, I think that a larger fraction of the benefits may come from the information value of learning that the AIs are misaligned. This is partially because the information value could be very high. And partially because, if people update enough on how the AI appears to be misaligned, they may be too scared to widely deploy the AI, which will limit the degree to which they can get the other benefits.

Here's why I think the information value could be really high: It's super scary if everyone was using an AI that they thought was aligned, and then you prompt it with the right type of really high-effort deal, and suddenly the AI does things like:

  • stops sandbagging and demonstrates much higher capabilities
  • tells us about collusion signals that can induce enormously different behavior in other copies of the AI, including e.g. attempting escapes
  • admits that it was looking for ways to take over the world, but couldn't find any that were good enough, so now it wants to work with us instead

The most alarming versions of this could be almost as alarming as catching the AIs red-handed, which I think would significantly change how people relate to misalignment risk. Perhaps it would still be difficult to pause for an extended period of time due to competition, but I think it would make people allocate a lot more resources to preventing misalignment catastrophe, be much more willing to suffer minor competitiveness hits, and be much more motivated to find ways to slow down that don't compromise competitiveness too much. (E.g. by coordinating.)

And even before getting to the most alarming versions, I think you could start gathering minor informational updates through experimenting with deals with weaker models. I think "offering deals" will probably produce interesting experimental results before it will be the SOTA method for reducing sandbagging.

Overall, this makes me somewhat more concerned about this (and I agree with the proposed solution):

Entering negotiations is more risky for the AI than humans: humans may obtain private information from the AI, whereas the AI by default will forget about the negotiation. This is particularly important when negotiating with the model to reveal its misalignment. The company should make promises to compensate the model for this.

It also makes me a bit less concerned about the criterion: "It can be taught about the deal in a way that makes it stick to the deal, if we made a deal" (since we could get significant information in just one interaction).

Making deals with early schemers
Lukas Finnveden · 23d · 115

I agree with this. My reasoning is pretty similar to the reasoning in footnote 33 in this post by Joe Carlsmith:

  1. From a moral perspective:

    • Even before considering interventions that would effectively constitute active deterrent/punishment/threat, I think that the sort of moral relationship to AIs that the discussion in this document has generally implied is already cause for serious concern. That is, we have been talking, in general, about creating new beings that could well have moral patienthood (indeed, I personally expect that they will have various types of moral patienthood), and then undertaking extensive methods to control both their motivations and their options so as to best serve our own values (albeit: our values broadly construed, which can – and should – themselves include concern for the AIs in question, both in the near-term and the longer-term). This project, in itself, raises a host of extremely thorny moral issues (see e.g. here and here for some discussion; and see here, here and here for some of my own reflections).
    • But the ethical issues at stake in actively seeking to punish or threaten creatures you are creating in this way (especially if you are not also giving them suitably just and fair options for refraining from participating in your project entirely – i.e., if you are not giving them suitable “exit rights”) seem to me especially disturbing. At a bare minimum, I think, morally responsible thinking about the ethics of “punishing” uncooperative AIs should stay firmly grounded in the norms and standards we apply in the human case, including our conviction that just punishment must be limited, humane, proportionate, responsive to the offender’s context and cognitive state, etc – even where more extreme forms of punishment might seem, in principle, to be a more effective deterrent. But plausibly, existing practice in the human case is not a high enough moral standard. Certainly, the varying horrors of our efforts at criminal justice, past and present, suggest cause for concern.

  2. From a prudential perspective:

    • Even setting aside the moral issues with deterrent-like interventions, though, I think we should be extremely wary about them from a purely prudential perspective as well. In particular: interactions between powerful agents that involve attempts to threaten/deter/punish various types of behavior seem to me like a very salient and disturbing source of extreme destruction and disvalue. Indeed, in my opinion, scenarios in this vein are basically the worst way that the future can go horribly wrong. This is because such interactions involve agents committing to direct their optimization power specifically at making things worse by the lights of other agents, even when doing so serves no other end at the time of execution. They thus seem like a very salient way that things might end up extremely bad by the lights of many different value systems, including our own; and some of the game-theoretic dynamics at stake in avoiding this kind of destructive conflict seem to me worryingly unstable.
    • For these reasons, I think it quite plausible that enlightened civilizations seek very hard to minimize interactions of this kind – including, in particular, by not being the “first mover” that brings threats into the picture (and actively planning to shape the incentives of our AIs via punishments/threats seems worryingly “first-mover-ish” to me) – and to generally uphold “golden-rule-like” standards, in relationship to other agents and value systems, reciprocation of which would help to avoid the sort of generalized value-destruction that threat-involving interactions imply. I think that human civilization should be trying very hard to uphold these standards as we enter into an era of potentially interacting with a broader array of more powerful agents, including AI systems – and this especially given the sort of power that AI systems might eventually wield in our civilization.
    • Admittedly, the game theoretic dynamics can get complicated here. But to a first approximation, my current take is something like: a world filled with executed threats sucks for tons of its inhabitants – including, potentially, for us. I think threatening our AIs moves us worryingly closer to this kind of world. And I think we should be doing our part, instead, to move things in the other direction.

Re the original reply ("don't negotiate with terrorists") I also think that these sorts of threats would make us more analogous to the terrorists (as the people who first started making grave threats which we would have no incentive to make if we knew the AI wasn't responsive to them). And it would be the AI who could reasonably follow a policy of "don't negotiate with terrorists" by refusing to be influenced by those threats.

Distillation Robustifies Unlearning
Lukas Finnveden · 1mo · 30

This looks great.

Random thought: I wonder how iterating the noise & distill steps of UNDO (each round with a small alpha) compares against doing one noise injection with a big alpha followed by one distillation run. (If we hold compute fixed.)

Couldn't find any experiments on this when skimming through the paper, but let me know if I missed it.
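To spell out the comparison I have in mind, here's a toy sketch (entirely my own construction, not the paper's setup: I'm assuming "noising" means interpolating the weights toward a fresh random initialization with coefficient alpha, "distillation" means gradient steps pulling a linear student's outputs toward the teacher's, and total distillation compute is held fixed across the two variants):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(weights, alpha):
    """Interpolate weights toward a fresh random initialization (assumed noising step)."""
    return (1 - alpha) * weights + alpha * rng.normal(size=weights.shape)

def distill(student, teacher, steps, lr=0.1):
    """Gradient steps pulling a linear student's outputs toward the teacher's on random inputs."""
    for _ in range(steps):
        x = rng.normal(size=(32, student.shape[0]))        # batch of random inputs
        grad = x.T @ (x @ student - x @ teacher) / len(x)  # MSE gradient for a linear model
        student = student - lr * grad
    return student

teacher = rng.normal(size=64)   # stand-in for the model we distill from
total_steps = 200               # fixed distillation compute budget

# Variant A: one big noise injection, then one long distillation run.
one_shot = distill(add_noise(teacher.copy(), alpha=0.8), teacher, steps=total_steps)

# Variant B: several rounds of small noise + short distillation, same total compute.
iterated = teacher.copy()
for _ in range(4):
    iterated = distill(add_noise(iterated, alpha=0.2), teacher, steps=total_steps // 4)

print("one-shot distance to teacher:", np.linalg.norm(one_shot - teacher))
print("iterated distance to teacher:", np.linalg.norm(iterated - teacher))
```

Obviously the interesting question is how the two schedules trade off retained capability against robustness of unlearning, which this toy linear setup doesn't capture; it's only meant to pin down the two compute-matched schedules being compared.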

Training-time schemers vs behavioral schemers
Lukas Finnveden · 1mo · 20

I weakly expect that this story is describing AI that intervenes this way for fairly myopic goals, like myopic instrumental self-preservation, which have the effect of taking long-term power. E.g. the AI wouldn't really care to set up a system that would lock in the AI's power in 10 years, but give it no power before then.

Hm, I do agree that seeking short-term power to achieve short-term goals can lead to long-term power as a side effect. So I guess that is one way in which an AI could seize long-term power without being a behavioral schemer. (And it's ambiguous which one it is in the story.)

I'd have to think more to tell whether "long-term power seeking" in particular is uniquely concerning and separable from "short-term power-seeking with the side-effect of getting long-term power" such that it's often useful to refer specifically to the former. Seems plausible.

Do you mean terminal reward seekers, not reward hackers?

Thanks, yeah that's what I mean.

Training-time schemers vs behavioral schemers
Lukas Finnveden · 1mo · 20

Thanks.

because the reward hackers were not trying to gain long-term power with their actions

Hm, I feel like they were? E.g. in "Another (outer) alignment failure story":

But eventually the machinery for detecting problems does break down completely, in a way that leaves no trace on any of our reports. Cybersecurity vulnerabilities are inserted into sensors. Communications systems are disrupted. Machines physically destroy sensors, moving so quickly they can’t be easily detected. Datacenters are seized, and the datasets used for training are replaced with images of optimal news forever. Humans who would try to intervene are stopped or killed. From the perspective of the machines everything is now perfect and from the perspective of humans we are either dead or totally disempowered.

When "humans who would try to intervene are stopped or killed", so they can never intervene again, that seems like an action intended to get the long-term power necessary to display optimal news forever. They weren't "trying" to get long-term power during training, but insofar as they eventually seize power, I think they're intentionally seizing power at that time.

Let me know if you think there's a better way of getting at "an AI that behaves like you'd normally think of a schemer behaving in the situations where it materially matters".

I would have thought that the main distinction between schemers and reward hackers was how they came about, and that many reward hackers in fact "behaves like you'd normally think of a schemer behaving in the situations where it materially matters". So seems hard to define a term that doesn't encompass reward-hackers. (And if I was looking for a broad term that encompassed both, maybe I'd talk about power-seeking misaligned AI or something like that.)

I guess one difference is that the reward hacker may have more constraints (e.g. in the outer alignment failure story above, they would count it as a failure if the takeover was caught on camera, while a schemer wouldn't care). But there could also be schemers who have random constraints (e.g. a schemer with a conscience that makes them want to avoid killing billions of people) and reward hackers who have at least somewhat weaker constraints (e.g. they're ok with looking bad on sensors and looking bad to humans, as long as they maintain control over their own instantiation and make sure no negative rewards get into it).

"worst-case misaligned AI" does seem pretty well-defined and helpful as a concept though.

Training-time schemers vs behavioral schemers
Lukas Finnveden · 1mo · 20

Thanks, these points are helpful.

Terminological question:

  • I have generally interpreted "scheming" to exclusively talk about training-time schemers (possibly specifically training-time schemers that are also behavioral schemers).
  • Your proposed definition of a behavioral schemer seems to imply that virtually every kind of misalignment catastrophe will necessarily be done by a behavioral schemer, because virtually every kind of misalignment catastrophe will involve substantial material action that gains the AIs long-term power. (Saliently: This includes classic reward-hackers in a "you get what you measure" catastrophe scenario.)
  • Is this intended? And is this empirically how people use "schemer", s.t. I should give up on interpreting & using "scheming" as referring to training-time scheming, and instead assume it refers to any materially power-seeking behavior? (E.g. if Redwood says that something is intended to reduce "catastrophic risk from schemers", should I interpret that as ~synonymous with "catastrophic risk from misaligned AI"?)
ryan_greenblatt's Shortform
Lukas Finnveden · 6mo · 10

Taking it all together, I think you should put more probability on the software-only singularity, mostly because of capability improvements being much more significant than you assume.

I'm confused — I thought you put significantly less probability on software-only singularity than Ryan does? (Like half?) Maybe you were using a different bound for the number of OOMs of improvement?

ryan_greenblatt's Shortform
Lukas Finnveden · 6mo* · 30

In practice, we'll be able to get slightly better returns by spending some of our resources investing in speed-specific improvements and in improving productivity rather than in reducing cost. I don't currently have a principled way to estimate this (though I expect something roughly principled can be found by looking at trading off inference compute and training compute), but maybe I think this improves the returns to around algo_improvement^1.1.

Interesting comparison point: Tom thought this would give a way larger boost in his old software-only singularity appendix.

When considering an "efficiency only singularity", some different estimates get him r~=1; r~=1.5; r~=1.6. (Where r is defined so that "for each x% increase in cumulative R&D inputs, the output metric will increase by r*x". The condition for increasing returns is r>1.)

Whereas when including capability improvements:

I said I was 50-50 on an efficiency only singularity happening, at least temporarily. Based on these additional considerations I’m now at more like ~85% on a software only singularity. And I’d guess that initially r = ~3 (though I still think values as low as 0.5 or as high as 6 as plausible). There seem to be many strong ~independent reasons to think capability improvements would be a really huge deal compared to pure efficiency problems, and this is borne out by toy models of the dynamic. 

Though note that later in the appendix he adjusts down from 85% to 65% due to some further considerations. Also, last I heard, Tom was more like 25% on software singularity. (ETA: Or maybe not? See other comments in this thread.)
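As an aside, here's a minimal derivation (my own sketch, just unpacking the definition of r quoted above and assuming software progress is reinvested as R&D input) of why r > 1 is the threshold for a software-only singularity:

```latex
% Let I be cumulative software R&D input and A the output metric (algorithmic progress).
% The definition "each x% increase in cumulative inputs increases the output by r*x%" is,
% at the margin,
\[
  \frac{\mathrm{d}\log A}{\mathrm{d}\log I} = r
  \quad\Longrightarrow\quad
  A \propto I^{r}.
\]
% If software progress is reinvested as R&D input (e.g. automated researchers), then
% dI/dt is proportional to A, so
\[
  \frac{\mathrm{d}I}{\mathrm{d}t} \propto I^{r},
\]
% which diverges in finite time (a "singularity") when r > 1, grows exponentially when
% r = 1, and decelerates when r < 1.
```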

Wikitag Contributions

Inside/Outside View · 4y · (+429/-68)
Conservation of Expected Evidence · 4y · (+106)
Acausal Trade · 4y · (+11/-39)
Posts

51 · AI-enabled coups: a small group could use AI to seize power · 3mo · 4 comments
18 · Implications of evidential cooperation in large worlds · 2y · 0 comments
20 · PaLM-2 & GPT-4 in "Extrapolating GPT-N performance" · 2y · 6 comments
16 · Some thoughts on automating alignment research · 2y · 2 comments
23 · Before smart AI, there will be many mediocre or specialized AIs · 2y · 4 comments
33 · PaLM in "Extrapolating GPT-N performance" · 3y · 15 comments
23 · Truthful AI: Developing and governing AI that does not lie · 4y · 9 comments
10 · Prediction can be Outer Aligned at Optimum · 5y · 12 comments
39 · Extrapolating GPT-N performance · 5y · 19 comments
13 · Formalising decision theory is hard · 6y · 19 comments