Comments

I think it's confusing because we mostly care about the outcome "we mistakenly think the system is aligned, deploy it, and get killed", not about the particular mechanism by which we get there.

Dumb example: suppose we train a system to report on its own activity, and human raters consistently assign higher reward to more polite reports. In the end, the system learns to produce reports so polite and smooth that the raters have a hard time catching any sign of misalignment in them, and take it for an aligned system.
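A minimal toy simulation of this selection dynamic (all numbers and policy names are invented for illustration): raters reliably register tone but only occasionally check substance, so the smoothest, least informative reporting policy wins on average reward.

```python
import random

random.seed(0)

# (politeness, fraction of real problems actually reported) for three hypothetical
# reporting policies the training process could settle on.
POLICIES = {
    "blunt_and_honest":   (0.30, 1.0),
    "polite_and_honest":  (0.70, 0.9),
    "polite_and_evasive": (0.95, 0.1),
}

def rater_reward(politeness: float, problems_reported: float) -> float:
    """Raters always register tone, but only check substance ~20% of the time."""
    checked_substance = random.random() < 0.2
    return politeness + (0.5 * problems_reported if checked_substance else 0.0)

def average_reward(name: str, n: int = 10_000) -> float:
    politeness, reported = POLICIES[name]
    return sum(rater_reward(politeness, reported) for _ in range(n)) / n

for name in POLICIES:
    print(f"{name:20s} avg reward = {average_reward(name):.3f}")

# The polite-and-evasive policy gets the highest average reward,
# even though it surfaces the least evidence of misalignment.
```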

So we have, on the one hand, a system that is superhumanly good at creating the impression of being aligned; on the other hand, it's not as if it is especially strategically aware.

I'll register a prediction that TurnTrout is trying to say the following: while, counterfactually, an algorithm that reasons about its training would achieve low loss, it's not obviously true that such algorithms are actually "achievable" for SGD in any "natural" setting.
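A toy picture of the "would achieve low loss" vs. "actually reachable by SGD" distinction (my own illustration, not any real training setup): a 1-D loss with a broad shallow basin near the initialization and a much deeper but very narrow basin far away. Gradient descent settles into the broad basin even though the narrow one is counterfactually better.

```python
import numpy as np

def loss(x: float) -> float:
    # Broad basin around x = 0 (depth 1) plus a very narrow, deeper basin at x = 5 (depth 2).
    return -np.exp(-x**2 / 4) - 2 * np.exp(-(x - 5) ** 2 / 0.01)

def grad(x: float) -> float:
    # Analytic derivative of the loss above.
    return (x / 2) * np.exp(-x**2 / 4) + 400 * (x - 5) * np.exp(-(x - 5) ** 2 / 0.01)

rng = np.random.default_rng(0)
x = rng.uniform(-3.0, 3.0)      # "natural" initialization near the broad basin
for _ in range(2_000):
    x -= 0.1 * grad(x)          # plain gradient descent

print(f"converged to x = {x:.3f}, loss = {loss(x):.3f}")               # x ~ 0, loss ~ -1.0
print(f"deep narrow minimum at x = 5 has loss = {loss(5.0):.3f}")      # ~ -2.0
# The low-loss solution exists, but gradient descent from a natural init never finds it.
```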

This is a meta-point, but I find it weird that you ask what "caring about something" is according to CS, yet don't ask what "corrigibility" is. We have multiple examples of goal-oriented systems and some relatively good formalisms (we disagree about whether expected utility maximization is a good model of real goal-oriented systems, but we would all agree that if we met an expected utility maximizer, we would find its behavior pretty thoroughly goal-oriented). Corrigibility, by contrast, is purely a product of the imagination of one particular Eliezer Yudkowsky, born of an attempt to imagine a system that doesn't care about us but still behaves nicely under some vaguely restricted definition of niceness. We have no examples of corrigible systems in nature, and attempts to formalize even relatively simple instances of corrigibility, like shutdownability, have consistently failed. I think the likely answer to "why should I expect corrigibility to be unlikely" is "there is no simple description of corrigibility to which our learning systems can easily generalize, and there is no reason to expect such a simple description to exist".
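To be concrete about "relatively good formalisms", the point of contrast I have in mind is just the textbook expected-utility picture (a sketch, not a claim about how real agents are implemented):

$$ a^* \;=\; \arg\max_{a \in A} \mathbb{E}[\,U \mid a\,] \;=\; \arg\max_{a \in A} \sum_{s} P(s \mid a)\, U(s). $$

As far as I know, nothing of comparable brevity has held up for corrigibility, or even for the shutdownability special case.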

Can you explain more formally what the difference is between  and ? I've looked at Wikipedia and at the Cartoon Guide to Löb's Theorem, but I still can't get it.

I want to say "yes, but this is different", though not in the sense of "I acknowledge the existence of your evidence but ignore it". My intuition says that we don't "induce" taskiness in modern systems; it just happens because we build them insufficiently general. It probably won't hold once we start building models of capable agents in natural environments.

Unpacking my inner Eliezer model:

If we live in a world where a superintelligent AGI can't have an advantage in long-term planning over humans assisted by non-superintelligent narrow AIs (I frankly don't believe we live in such a world), then the superintelligent AGI won't make complex long-term plans where it doesn't have an advantage. It will make simple short-term plans where it does have an advantage, like "use superior engineering skills to hack into computer networks, infect as many computers as possible with its source code adapted for hidden distributed computation (this is the point of no return), design nanotech, train itself to an above-average level in social engineering, find people gullible and skilled enough to build the nanotech, create enough smart matter to sustain the AGI without human infrastructure, kill everybody, and pursue its unspeakable goals in the dead world".

Even if we imagine an "AI CEO", the best (human-aligned!) strategy I can imagine for such an AI is "invent immortality and buy the whole world with it", not "scrutinize KPIs".

Next, I think your ideas about short- and long-term goals are underspecified because you don't take into account the distinction between instrumental and terminal goals. Yes, human software engineers pursue the short-term instrumental goal of "creating a product", but they do so in the course of pursuing long-term terminal goals like "be happy", "prove myself worthy", "serve humanity", "have nice things", etc. It's quite hard to find a system with genuinely short-term terminal goals, as opposed to a short planning horizon imposed by computational limits. To put it another way, taskiness is an unsolved problem in AI alignment: we don't know how to tell a superintelligent AGI "do this, don't do anything else, especially please don't disassemble everyone in the process of doing this, and stop after you've done it".
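A rough way to write down the distinction (my own sketch, not an established formalism): a computationally bounded agent with long-term terminal goals plans over a short horizon $T$ but evaluates the end state with a long-term value estimate,

$$ a^* \;=\; \arg\max_{a} \; \mathbb{E}\!\left[\, \sum_{t=0}^{T} r_t \;+\; V_{\text{long}}(s_T) \;\middle|\; a \right]. $$

Here the short horizon is just a computational convenience, and the "short-term task" is instrumental precisely because of the $V_{\text{long}}$ term. A genuinely tasky agent would need something like $V_{\text{long}} \equiv 0$, together with a guarantee that the task objective doesn't spill over into the rest of the world, and we don't know how to specify either part.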

If you believe that "extract short-term modules from a powerful long-term agent" is the optimal strategy in some sense (I don't even think we can properly identify such modules without a huge amount of alignment work), then the powerful long-term agent knows this too; it knows it is on a time limit before you dissect it, and it will plan accordingly.

Claims 3 and 4 imply the claim "nobody will invent some clever trick to avoid these problems", which seems implausible to me.

The problems with claims 5 and 6 are covered in Nate Soares's post about the sharp left turn.