Humans Consulting HCH

by Paul Christiano
25th Nov 2018

(See also: strong HCH.)

Consider a human Hugh who has access to a question-answering machine. Suppose the machine answers question Q by perfectly imitating how Hugh would answer question Q, if Hugh had access to the question-answering machine.

That is, Hugh is able to consult a copy of Hugh, who is able to consult a copy of Hugh, who is able to consult a copy of Hugh…

Let’s call this process HCH, for “Humans Consulting HCH.”

I’ve talked about many variants of this process before, but I find it easier to think about with a nice handle. (Credit to Eliezer for proposing using a recursive acronym.)

HCH is easy to specify very precisely. For now, I think that HCH is our best way to precisely specify “a human’s enlightened judgment.” It’s got plenty of problems, but for now I don’t know anything better.
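To make "easy to specify" concrete, here is a minimal Python sketch of the recursion. Everything beyond the recursion itself is an illustrative assumption of mine, not part of the definition: true HCH is the depth-to-infinity limit, and `toy_human`, the `depth` cutoff, and the "sum ..." question format are stand-ins.

```python
def hch(question, human, depth):
    """Answer `question` as `human` would, given the ability to consult
    further copies of HCH. True HCH is the depth -> infinity limit; any
    concrete implementation has to truncate somewhere."""
    if depth == 0:
        # Base case: no copies left, the human must answer unaided.
        return human(question, ask=None)
    # Each subquestion the human poses is answered by a fresh copy of HCH.
    return human(question, ask=lambda q: hch(q, human, depth - 1))

def toy_human(question, ask):
    """Illustrative stand-in for Hugh: sums numbers by splitting the
    problem in half and delegating each half to a copy of HCH.
    Expects questions of the form "sum 1 2 3"."""
    nums = [int(tok) for tok in question.split()[1:]]
    if len(nums) <= 1 or ask is None:
        return str(sum(nums))  # small enough (or unaided): answer directly
    mid = len(nums) // 2
    left = ask("sum " + " ".join(map(str, nums[:mid])))
    right = ask("sum " + " ".join(map(str, nums[mid:])))
    return str(int(left) + int(right))

print(hch("sum 1 2 3 4 5", toy_human, depth=4))  # -> "15"
```

The point is only that the whole specification fits in a few lines; all of the difficulty lives inside the human's policy.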

Elaborations

We can define realizable variants of this inaccessible ideal:

  • For a particular prediction algorithm P, define HCHᴾ as:
    “P’s prediction of what a human would say after consulting HCHᴾ”
  • For a reinforcement learning algorithm A, define max-HCHᴬ as:
    “A’s output when maximizing the evaluation of a human after consulting max-HCHᴬ”
  • For a given market structure and participants, define HCHᵐᵃʳᵏᵉᵗ as:
    “the market’s prediction of what a human will say after consulting HCHᵐᵃʳᵏᵉᵗ”

Note that e.g. HCHᴾ is totally different from “P’s prediction of HCH.” HCHᴾ will generally make worse predictions, but it is easier to implement.
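To illustrate the distinction, here is a hedged sketch (the names and the toy predictor are mine, not the post's). HCHᴾ runs P at every node of the consultation tree, so it can be trained by imitating transcripts of a human consulting the current system; "P's prediction of HCH" would be a single call asking P to guess the output of the ideal infinite process, for which no ground-truth transcripts exist.

```python
def hch_P(question, P):
    """HCH^P: P's prediction of what a human would say after consulting
    HCH^P. Every node of the tree is P itself, which is what makes it
    implementable: training data is transcripts of a human consulting
    the current system. It terminates only if P eventually answers
    without posing further subquestions."""
    return P(question, ask=lambda q: hch_P(q, P))

def P_predicts_HCH(question, P):
    """'P's prediction of HCH': one direct call asking P to guess the
    ideal process's output. Potentially a better prediction, but there
    is no ground-truth HCH transcript to train P against."""
    return P("What would HCH answer to: " + question, ask=None)

def tiny_P(question, ask):
    """Illustrative stand-in predictor: counts words by delegating
    halves of the question to further copies."""
    words = question.split()
    if len(words) <= 1 or ask is None:
        return len(words)
    mid = len(words) // 2
    return ask(" ".join(words[:mid])) + ask(" ".join(words[mid:]))

print(hch_P("how many words are here", tiny_P))  # -> 5
```

The compounding of P's imitation errors down the tree is one reason HCHᴾ will generally make worse predictions than a direct prediction of HCH.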

Hope

The best case is that HCHᴾ, max-HCHᴬ, and HCHᵐᵃʳᵏᵉᵗ are:

  • As capable as the underlying predictor, reinforcement learner, or market participants.
  • Aligned with the enlightened judgment of the human, e.g. as evaluated by HCH.

(At least when the human is suitably prudent and wise.)

It is clear from the definitions that these systems can’t be any more capable than the underlying predictor/learner/market. I honestly don’t know whether we should expect them to match the underlying capabilities. My intuition is that max-HCHᴬ probably can, but that HCHᴾ and HCHᵐᵃʳᵏᵉᵗ probably can’t.

It is similarly unclear whether the system continues to reflect the human’s judgment. In some sense this is in tension with the desire to be capable — the more guarded the human, the less capable the system but the more likely it is to reflect their interests. The question is whether a prudent human can achieve both goals.


This was originally posted here on 29th January 2016.

Tomorrow the AI Alignment Forum sequences will take a break, and the day's post will instead be Issue #34 of the Alignment Newsletter.

The next post in this sequence is 'Corrigibility' by Paul Christiano, which will be published on Tuesday 27th November.
