I agree that for such a system, the optimal policy of the actor is to rig the estimator, and to "intentionally" bias it towards easy-to-satisfy rewards like "the human loves heroin".

The part that confuses me is why we're having two separate systems with different objectives where one system is dumb and the other system is smart.

We don't need to have two separate systems. There are two meanings of your "bias it towards" phrase. The first is the informal human one, where "the human loves heroin" is clearly a bias. The second is some formal definition of what is biasing and what isn't. And the system doesn't have that. The "estimator" doesn't "know" that "the human loves heroin" is a bias; instead, it sees this as a perfectly satisfactory way of accomplishing its goals, according to the bridging function it's been given. There is no conflict between estimator and actor.

Imagine that you have a complex CIRL game that models the real world well but assumes that the human is Boltzmann-rational. [...] Such a policy is going to "try" to learn preferences, learn incorrectly, and then act according to those incorrect learned preferences, but it is not going to "intentionally" rig the learning process.

The AI would not see any of these actions as "rigging", even if we would.

It might think "hey, I should check whether the human likes heroin by giving them some", and then think "oh they really do love heroin, I should pump them full of it".

It will do this if it can't already predict the effect of giving them heroin.

It won't think "aha, if I give the human heroin, then they'll ask for more heroin, causing my Boltzmann-rationality estimator module to predict they like heroin, and then I can get easy points by giving humans heroin".

If it can predict the effect of giving humans heroin, it will think something like that. It will think: "if I give the humans heroin, they'll ask for more heroin; my Boltzmann-rationality estimator module confirms that this means they like heroin, so I can efficiently satisfy their preferences by giving humans heroin".
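To make this concrete, here is a toy sketch of the kind of inference the estimator module performs. All names, hypotheses, and numbers are invented for illustration; the point is only that a Boltzmann-rationality estimator, fed post-manipulation behaviour, will happily "confirm" that the human loves heroin:

```python
import math

def boltzmann_posterior(actions, candidate_rewards, beta=5.0):
    """Posterior over reward hypotheses, assuming the human picks action a
    with probability proportional to exp(beta * reward(a))."""
    log_posterior = {name: 0.0 for name in candidate_rewards}  # uniform prior
    for a in actions:
        for name, r in candidate_rewards.items():
            log_z = math.log(sum(math.exp(beta * v) for v in r.values()))
            log_posterior[name] += beta * r[a] - log_z
    m = max(log_posterior.values())
    unnorm = {n: math.exp(lp - m) for n, lp in log_posterior.items()}
    total = sum(unnorm.values())
    return {n: w / total for n, w in unnorm.items()}

# Two hypotheses about the human's true reward over {heroin, food}.
hypotheses = {
    "loves_heroin": {"heroin": 1.0, "food": 0.0},
    "loves_food":   {"heroin": 0.0, "food": 1.0},
}

# After the AI administers heroin, the addicted human keeps asking for more.
observed = ["heroin", "heroin", "heroin"]
posterior = boltzmann_posterior(observed, hypotheses)
# The estimator reads the addicted requests as strong evidence of preference:
# posterior["loves_heroin"] is essentially 1.
```

The estimator isn't "rigged" in any formal sense here; it is applying its bridging assumption (Boltzmann rationality) exactly as specified to data the actor arranged.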

How should AIs update a prior over human preferences?

by Stuart Armstrong, 15th May 2020


I've always emphasised the constructive aspect of figuring out human preferences, and the desired formal properties of preference learning processes.

A common response to these points is something along the lines of "have the AI pick a prior over human preferences, and update it".

However, I've come to realise that a prior over human preferences is of little use. The real key is figuring out how to update it, and that contains almost the entirety of the problem.

I've shown that you cannot deduce preferences from observations or facts about the world - at least, without making some assumptions. These assumptions are needed to bridge the gap between observations/facts, and updates to preferences.
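A minimal illustration of why assumptions are needed (the rewards and planners here are hypothetical): a "rational" human with one reward function and an "anti-rational" human with the negated reward function produce exactly the same actions, so no amount of observation alone can separate the two hypotheses.

```python
# Hypothesis A: the human values "left" and plans rationally.
reward_a = {"left": 1.0, "right": 0.0}

def rational(reward):
    """Pick the reward-maximising action."""
    return max(reward, key=reward.get)

# Hypothesis B: the human has the negated reward and plans anti-rationally.
reward_b = {action: -value for action, value in reward_a.items()}

def anti_rational(reward):
    """Pick the reward-minimising action."""
    return min(reward, key=reward.get)

# Both hypotheses predict identical behaviour, on every observation:
prediction_a = rational(reward_a)
prediction_b = anti_rational(reward_b)
```

Only a bridging assumption ("the human is (noisily) rational", say) breaks the tie between such hypotheses.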

For example, imagine you are doing cooperative inverse reinforcement learning[1] and want to deduce the preferences of the human H. CIRL assumes that H knows the true reward function R, and is generally rational or noisily rational (along with a few other scenarios).

So, this is the bridging law:

  • H knows their true reward function R, and is noisily rational.

Given this, the AI has many options available to it, including the "drug the human with heroin" approach. If H is not well-defined in the bridging law, then "do brain surgery on the human" also becomes valid.

And not only are those approaches valid; if the AI wants to maximise the reward function, according to how this is defined, then these are the optimal policies, as they result in the most return, given that bridging law.
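A toy return calculation makes the incentive visible. Every number and policy name here is made up; the point is only the shape of the comparison the AI performs under the mis-specified bridging law:

```python
HORIZON = 10  # number of decision steps in this toy episode

def modeled_return(policy):
    """Return as computed by the AI under the mis-specified bridging law."""
    if policy == "drug_human":
        # One setup step (no reward), after which the Boltzmann estimator
        # reports reward(heroin) = 1 and every subsequent dose "earns" it.
        return 0.0 + (HORIZON - 1) * 1.0
    if policy == "assist_honestly":
        # Genuine preferences are harder to satisfy: 0.6 reward per step.
        return HORIZON * 0.6

best = max(["drug_human", "assist_honestly"], key=modeled_return)
# Under this bridging law, "drug_human" is the return-maximising policy.
```

Nothing in this calculation looks like "rigging" from the inside; it is just return maximisation given the bridging law.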

Note that the following is not sufficient either:

  • H has a noisy impression of their true reward function R, and is noisily rational.

Neither of the "noisy" statements is true, so if the AI uses this bridging law, then, for almost any prior, preference learning will come to a bad end.

Joint priors

What we really want is something like:

  • H has an imperfect impression of their true reward function R, and is biased.

And yes, that bridging law is true. But it's also massively underdefined. We want to know how H's impression is imperfect, how they are biased, and also what counts as H versus some brain-surgeried replacement of them.

Once those details are specified, then, given certain human actions, the AI can deduce human preferences. This amounts to having a joint prior over the possible human reward functions R and the possible human policies π_H[2]. Given that joint prior, then, yes, an AI can start deducing preferences from observations.

So instead of a "prior over preferences" and an "updating bridging law", we need a joint object that does both.

But such a joint prior is essentially the same object as the assumptions needed to overcome the Occam's razor result.
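As a sketch of what such a joint object looks like (the hypotheses, rationality levels, and numbers are all invented): a prior over (reward, policy-model) pairs, where the bridging assumption lives inside each hypothesis, updated by ordinary Bayes on observed actions.

```python
import math

def softmax_policy(reward, beta):
    """A Boltzmann policy: P(a) proportional to exp(beta * reward(a))."""
    z = sum(math.exp(beta * v) for v in reward.values())
    return {a: math.exp(beta * v) / z for a, v in reward.items()}

rewards = {
    "likes_apples": {"apple": 1.0, "pear": 0.0},
    "likes_pears":  {"apple": 0.0, "pear": 1.0},
}

# Joint prior over (reward, policy-model) pairs: the bridging assumption
# (how rational the human is) is part of each hypothesis, not bolted on.
joint_prior = {}
for rname, r in rewards.items():
    joint_prior[(rname, "fairly_rational")] = (0.25, softmax_policy(r, 5.0))
    joint_prior[(rname, "very_noisy")]      = (0.25, softmax_policy(r, 0.5))

def bayes_update(prior, observed_action):
    """Ordinary Bayesian update over the joint hypotheses given one action."""
    unnorm = {h: p * policy[observed_action]
              for h, (p, policy) in prior.items()}
    total = sum(unnorm.values())
    return {h: w / total for h, w in unnorm.items()}

posterior = bayes_update(joint_prior, "apple")
# Marginal belief that the human likes apples (pooled over policy models):
p_apples = sum(w for (rname, _), w in posterior.items()
               if rname == "likes_apples")
```

Note that the update is only as good as the set of (reward, policy) hypotheses: if none of them describes the real human, the posterior converges confidently on the wrong thing.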

Other areas

It seems to me that realisability has a similar problem: if the AI has an imperfect model of how it's embedded in the world, it will "learn" disastrously wrong things.


  1. This is not a criticism of CIRL; it does its task very well, but still requires some underlying assumptions. ↩︎

  2. And the human's identity, which we're implicitly modelling as part of the policy. ↩︎
