Humans Consulting HCH

Edited by Ben Pace, last updated 17th Jul 2020

HCH ("Humans Consulting HCH") is a recursive acronym describing a setup in which humans can consult simulations of themselves to help answer questions. It is a concept used in discussions of the iterated amplification proposal for solving the alignment problem.

It was first described by Paul Christiano in his post Humans Consulting HCH:

Consider a human Hugh who has access to a question-answering machine. Suppose the machine answers question Q by perfectly imitating how Hugh would answer question Q, if Hugh had access to the question-answering machine.

That is, Hugh is able to consult a copy of Hugh, who is able to consult a copy of Hugh, who is able to consult a copy of Hugh…

Let’s call this process HCH, for “Humans Consulting HCH.”
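
The shape of this recursion can be sketched in code. Below is a minimal, depth-limited illustration, assuming a hypothetical human_answer(question, ask) function that stands in for Hugh: it answers a question and may call ask to consult a copy of itself. The depth cap exists only so the example terminates; idealized HCH imposes no such limit.

```python
from typing import Callable

# Type of the stand-in for Hugh: takes a question and an `ask` callback
# for consulting a copy of itself, and returns an answer. (Hypothetical
# illustration only; not part of any actual implementation.)
Human = Callable[[str, Callable[[str], str]], str]

def hch(question: str, human_answer: Human, depth: int = 3) -> str:
    """Answer `question` as Hugh would, given access to copies of Hugh."""
    def ask(subquestion: str) -> str:
        if depth == 0:
            # Cut off the recursion: this copy must answer unaided.
            return human_answer(subquestion, lambda _q: "(no further consultation available)")
        # Each consultation is itself an HCH call, one level shallower.
        return hch(subquestion, human_answer, depth - 1)
    return human_answer(question, ask)
```

In the idealized version, human_answer would be a perfect imitation of Hugh rather than a hand-written function, and the tree of consultations would be unbounded; the sketch only shows the structure of the recursion.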

Posts tagged Humans Consulting HCH (karma · title · author · age · comments):

14 · Humans Consulting HCH · paulfchristiano · 7y · 0 comments
42 · Paul's research agenda FAQ · zhukeepa · 7y · 34 comments
25 · Relating HCH and Logical Induction · abramdemski · 5y · 3 comments
28 · A guide to Iterated Amplification & Debate · Rafael Harth · 5y · 0 comments
23 · Garrabrant and Shah on human modeling in AGI · Rob Bensinger · 4y · 7 comments
18 · HCH Speculation Post #2A · Charlie Steiner · 5y · 7 comments
16 · HCH is not just Mechanical Turk · William_S · 7y · 4 comments
18 · Idealized Factored Cognition · Rafael Harth · 5y · 0 comments
16 · Can HCH epistemically dominate Ramanujan? · zhukeepa · 7y · 2 comments
11 · Clarifying Factored Cognition · Rafael Harth · 5y · 0 comments
9 · Epistemology of HCH · adamShimi · 5y · 2 comments
4 · Predicting HCH using expert advice · jessicata · 9y · 1 comment
1 · HCH as a measure of manipulation · orthonormal · 9y · 6 comments
43 · Rant on Problem Factorization for Alignment · johnswentworth · 3y · 32 comments
15 · What's wrong with these analogies for understanding Informed Oversight and IDA? [Question] · Wei Dai · 7y · 3 comments
(Showing 15 of 22 tagged posts.)