This is a special post for quick takes by Adele Lopez. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Half-baked idea for low-impact AI:

As an example, imagine a board anchored directly into a wall with no other supports (i.e. a cantilever). If you make it twice as wide, it will be twice as stiff; if you make it twice as thick, it will be eight times as stiff. On the other hand, if you make it twice as long, it will be eight times as compliant.
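For concreteness, this is just the standard cantilever stiffness result from beam theory (end load, small deflections) for a board of width $w$, thickness $t$, and length $L$:

$$k = \frac{3EI}{L^3}, \qquad I = \frac{w t^3}{12}, \qquad \text{so}\quad k \propto \frac{w t^3}{L^3},$$

which is where the factors of 2 and 8 above come from.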

In a similar way, different action parameters will have scaling exponents (or, more generally, scaling functions) describing how strongly an action's impact depends on them. So one way to decrease the risk of high-impact actions would be to require that these scaling exponents are bounded above by some fixed amount.
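As a toy sketch of what checking such a bound might look like (the functions, parameter values, and threshold here are all made up for illustration), one could numerically estimate the local log-log slope of an impact measure with respect to each action parameter, and refuse actions whose exponent is too large:

```python
import math

def scaling_exponent(impact_fn, x, eps=1e-4):
    """Estimate d(log impact)/d(log x): the local scaling exponent of the
    impact measure with respect to the action parameter x."""
    lo, hi = x * (1 - eps), x * (1 + eps)
    return (math.log(impact_fn(hi)) - math.log(impact_fn(lo))) / (math.log(hi) - math.log(lo))

# Hypothetical impact model: stiffness of the cantilevered board, k ∝ w * t^3 / L^3.
def stiffness(width=1.0, thickness=1.0, length=1.0):
    return width * thickness**3 / length**3

MAX_EXPONENT = 2.0  # made-up safety bound on how sharply impact may scale

for param in ["width", "thickness", "length"]:
    exponent = scaling_exponent(lambda v: stiffness(**{param: v}), x=1.0)
    verdict = "ok" if abs(exponent) <= MAX_EXPONENT else "too sensitive"
    print(f"{param}: exponent ≈ {exponent:+.2f} -> {verdict}")
```

Running this flags thickness and length (exponents of ±3) while passing width (exponent 1), matching the board example above.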

Anyway, to even do this, you would still need to make sure the agent's model is honestly evaluating the scaling exponent, and you would still need to define all of this much more rigorously. I think this idea is more useful in the case where you already have an AI with high-level corrigible intent and want to give it a general "common sense" about the kinds of experiments it might think to try.

So it's probably not that useful, but I wanted to throw it out there.

Privacy as a component of AI alignment

[realized this is basically just a behaviorist genie, but posting it in case someone finds it useful]

What makes something manipulative? If I do something with the intent of getting you to do something, is that manipulative? A simple request seems fine, but if I have a complete model of your mind, and use it to phrase things so that you do exactly what I want, that seems to have crossed an important line.

The idea is that using a model of a person that is *too* detailed is a violation of human values. In particular, it violates the value of autonomy, since your actions can now be controlled by someone using this model. And I believe that this is a significant part of what we are trying to protect when we invoke the colloquial value of privacy.

In ordinary situations, people can control how much privacy they have relative to another entity by limiting their contact with them to certain situations. But with an AGI, a person may lose a very large amount of privacy from seemingly innocuous interactions (we're already seeing the start of this with "big data" companies improving their advertising effectiveness by using information that doesn't seem that significant to us). Even worse, an AGI may be able to break the privacy of everyone (or a very large class of people) by using inferences based on just a few people (perhaps leveraging knowledge of the human connectome, hypnosis, etc.).

If we could reliably point to the specific models an AI is using, and have it honestly share its model structure with us, we could potentially limit the strength of its model of human minds. Perhaps we could even have it use a hardcoded model of a person, limited to knowledge of the physical conditions required to keep them healthy. This would mitigate issues such as deliberate deception or mindcrime.

We could also potentially allow it to use more detailed models in specific cases. For example, we could let it use a detailed mind model to figure out what is causing depression in a specific person, but require it to use the limited model in any other context and for any planning based on what it learns. I'm not sure that particular example would work, but I think there are potentially safe ways to have it use context-limited mind models.
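A toy sketch of the kind of context-limited gating I have in mind (the class names and the whitelist are invented purely for illustration; nothing here addresses the hard part of verifying which model the AI actually uses):

```python
class CoarseHumanModel:
    """Deliberately low-resolution: only the physical conditions needed to keep a person healthy."""

class DetailedHumanModel:
    """High-resolution model of a person's mind; the privacy/autonomy risk lives here."""

# Hypothetical whitelist of contexts in which the detailed model may be consulted.
DETAILED_MODEL_WHITELIST = {"diagnose_depression"}

def get_human_model(context: str):
    """Dispense the detailed model only in whitelisted contexts;
    planning and every other context gets the coarse model."""
    if context in DETAILED_MODEL_WHITELIST:
        return DetailedHumanModel()
    return CoarseHumanModel()

print(type(get_human_model("diagnose_depression")).__name__)  # DetailedHumanModel
print(type(get_human_model("plan_treatment")).__name__)       # CoarseHumanModel
```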

It seems that privacy potentially could "tame" a not-quite-corrigible AI. With a full model, the AGI might receive a request, deduce that activating a certain set of neurons strongly would be the most robust way to make you feel the request was fulfilled, and then design an electrode set-up to accomplish that. Whereas the same AI with a weak model wouldn't be able to think of anything like that, and might resort to fulfilling the request in a more "normal" way. This doesn't seem that great, but it does seem to me like this is actually part of what makes humans relatively corrigible.

Elitzur-Vaidman AGI testing

One thing that makes AI alignment super hard is that we only get one shot.

However, it's potentially possible to get around this (though probably still very difficult).

The Elitzur-Vaidman bomb tester is a protocol (using quantum weirdness) by which a bomb may be tested with arbitrarily little risk. Its interest comes from the fact that it works even when the only way to test the bomb is to try detonating it. It doesn't matter how the bomb works, as long as we can set things up so that it will allow/block a photon based on whether the bomb is live/dead. I won't explain the details here, but you can roughly think of it as a way of blowing up a bomb in one Many-Worlds branch while learning the result in other branches via quantum entanglement.
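For intuition, here is a small numerical sketch of the basic (non-Zeno) version of the protocol: a single Mach-Zehnder interferometer with the bomb in one arm. These are the standard textbook probabilities, not anything specific to the AGI application:

```python
import numpy as np

# 50/50 beam splitter acting on the two path amplitudes (index 0 = bomb arm, 1 = free arm).
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

def bomb_test(live: bool):
    """Outcome probabilities for one photon sent through a Mach-Zehnder
    interferometer with the bomb placed in arm 0."""
    psi = BS @ np.array([0.0, 1.0])     # photon enters port 1 and is split over both arms
    if live:
        p_boom = abs(psi[0]) ** 2       # a live bomb measures the path and detonates if hit...
        psi = np.array([0.0, psi[1]])   # ...otherwise the photon is projected into the free arm
    else:
        p_boom = 0.0                    # a dud leaves the superposition untouched
    out = BS @ psi                      # recombine at the second beam splitter
    return {"boom": round(float(p_boom), 3),
            "detector_C": round(float(abs(out[0]) ** 2), 3),   # "bright" port
            "detector_D": round(float(abs(out[1]) ** 2), 3)}   # "dark" port: only fires for a live bomb

print("dud: ", bomb_test(live=False))  # {'boom': 0.0, 'detector_C': 1.0, 'detector_D': 0.0}
print("live:", bomb_test(live=True))   # {'boom': 0.5, 'detector_C': 0.25, 'detector_D': 0.25}
```

A click at detector D certifies a live bomb without detonating it; the quantum Zeno refinement of the protocol pushes the detonation probability arbitrarily close to zero.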

If the "bomb" is an AGI program, and it is live if it's an unaligned yet functional superintelligence, then this provides a possible way to test the AGI without risking our entire future lightcone. This is still quite difficult, because unlike a bomb, a superintelligence will, by default, be motivated to allow/block the photon so that it looks like a dud. So we would need to figure out how to robustly get an honest signal from such an experiment, which still seems quite hard. But perhaps it's easier than solving the full alignment problem before the first shot.


IMO this is an 'additional line of defense' boxing strategy rather than a simplification.

Note that in the traditional version, the 'dud' bit of the bomb can only be the trigger; a bomb that absorbs the photon but then explodes isn't distinguishable from a bomb that absorbs the photon and then doesn't explode (because of an error deeper in the bomb).

But let's suppose the quantum computing folks can come up with something like this, where we keep some branches entangled and run analysis of the AI code in only one branch, causing an explosion there but affecting the total outcome in all branches. [It seems pretty implausible to me that you could maintain entanglement despite that much impact on the external world, but maybe it's possible.] Then 1) as you point out, we need to ensure that the AI doesn't realize what it needs to output in that branch, and 2) we need some sort of way to evaluate "did the AI pass our checks or not?".

But, 2 is "the whole problem"!

I think we get enough things referencing quantum mechanics that we should probably explain why this doesn't work (if it doesn't), rather than just downvoting and moving on.

It probably does work with a Sufficiently Powerful™ quantum computer, if you could write down a meaningful predicate which can be computed: https://en.wikipedia.org/wiki/Counterfactual_quantum_computation

Haha yeah, I'm not surprised if this ends up not working, but I'd appreciate hearing why.

[Epistemic status: very speculative]

One ray of hope that I've seen discussed is that we may be able to do some sort of acausal trade with even an unaligned AGI, such that it will spare us (e.g. it would give a humanity-aligned AGI control of a few stars, in exchange for us giving it control of several stars in the worlds where we win).

I think Eliezer is right that this wouldn't work.

But I think there are possible trades which don't have this problem. Consider the scenario in which we Win, with an aligned AGI taking control of our future light-cone. Assuming the Grabby aliens hypothesis is true, we will eventually run into other civilizations, which will either have Won themselves, or are AGIs who ate their mother civilizations. I think Humanity will be very sad at the loss of the civilizations who didn't make it because they failed at the alignment problem. We might even be willing to give up several star systems to an AGI who kept its mother civilization intact on a single star system. This trade wouldn't have the issue Eliezer brought up, since it doesn't require us to model such an AGI correctly in advance, only that that AGI was able to model Humanity well enough to know it would want this and would honor the implicit trade.

So symmetrically, we might hope that there are alien civilizations that both Win, and value being able to meet alien civilizations strongly enough to make this kind of trade on our behalf. In such a scenario, "dignity points" are especially aptly named: think of how much less embarrassing it would be to have gotten a little further at solving alignment when the aliens ask us why we failed so badly.

[I may try to flesh this out into a full-fledged post, but for now the idea is only partially baked. If you see a hole in the argument, please poke at it! Also I wouldn't be very surprised if someone has made this point already, but I don't remember seeing such. ]

Dissolving the paradox of useful noise

A perfect Bayesian doesn't need randomization.

Yet in practice, randomization seems to be quite useful.

How to resolve this seeming contradiction?

I think the key is that a perfect Bayesian (Omega) is logically omniscient. Omega can always fully update on all of the information at hand. There's simply nothing to be gained by adding noise.

A bounded agent will have difficulty keeping up. As with Omega, human strategies are born from an optimization process. This works well to the extent that the optimization process is well-suited to the task at hand. To Omega, it will be obvious whether the optimization process is actually optimizing for the right thing. But to us humans, it is not so obvious. Think of how many plans fail after contact with reality! A failure of this kind may look like a carefully executed plan built on a model with some obvious-in-retrospect confounders that were not accounted for. For a bounded agent, there appears to be an inherent difference between seeing the flaw once it is pointed out and being able to notice the flaw in the first place.

If we are modeling our problem well, then we can beat randomness; that's why we have modeling abilities in the first place. But if we are simply wrong in a fundamental way that hasn't occurred to us, we will do worse than random. It is in such situations that randomization is, in fact, helpful.
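A minimal, made-up illustration of "worse than random when the model is wrong" (the doors, the payoffs, and the adversary are all invented for this example): an agent repeatedly guesses which of two doors a prize is behind, but its model is confidently wrong in a way it hasn't noticed, because the prize-placer has learned to exploit exactly that model. The deterministic modeler loses every time; the coin-flipper is guaranteed 50% in expectation no matter how wrong its beliefs are:

```python
import random

random.seed(0)
ROUNDS = 10_000

def model_based_guess() -> int:
    """Deterministic policy from a confidently-wrong model: 'the prize is behind door 0'."""
    return 0

def randomized_guess() -> int:
    """Ignore the model and flip a coin."""
    return random.randint(0, 1)

def place_prize(predicted_guess: int) -> int:
    """The confounder the model missed: the prize-placer anticipates the
    modeler's deterministic choice and puts the prize behind the other door."""
    return 1 - predicted_guess

model_wins = sum(model_based_guess() == place_prize(model_based_guess())
                 for _ in range(ROUNDS))
random_wins = sum(randomized_guess() == place_prize(model_based_guess())
                  for _ in range(ROUNDS))

print(f"deterministic, wrong model: {model_wins / ROUNDS:.0%}")  # 0%
print(f"coin flip:                  {random_wins / ROUNDS:.0%}")  # ~50%
```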

This is why the P vs. BPP question matters. From the logically omniscient perspective, P and BPP can solve the same problems equally well (indeed, they are widely believed to be equal). But to a bounded agent the difference does matter: to the extent that a more efficient BPP algorithm is known than any known P algorithm for the same problem, the bounded agent can win by using randomization. This is fully compatible with the fact that, to Omega, P and BPP are equally powerful.
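Primality testing is a standard concrete instance of this gap: a deterministic polynomial-time algorithm exists (AKS), but the randomized Miller-Rabin test is vastly cheaper in practice, so a bounded agent happily trades a negligible error probability for speed. A minimal (non-optimized) sketch:

```python
import random

def is_probably_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test. If n is composite, each round
    wrongly reports 'prime' with probability at most 1/4, so 40 independent
    rounds give an error probability of at most 4**-40."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0          # write n - 1 = d * 2**r with d odd
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False     # a is a witness that n is composite
    return True

print(is_probably_prime(2**127 - 1))  # True  (a Mersenne prime)
print(is_probably_prime(2**128 + 1))  # False (a composite Fermat number)
```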

As Jaynes said:

> It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought.

There's no contradiction because requiring more thought is costly to a bounded agent.