Ultra-simplified research agenda

by Stuart_Armstrong
22nd Nov 2019
3 comments
John_Maxwell:

"Theory of mind is something that humans have instinctively and subconsciously, but that isn't easy to spell out explicitly; therefore, by Moravec's paradox, it will be very hard to implant it into an AI, and this needs to be done deliberately."

I think this is the weakest part. Consider: "Recognizing cat pictures is something humans can do instinctively and subconsciously, but that isn't easy to spell out explicitly; therefore, by Moravec's paradox, it will be very hard to implant it into an AI, and this needs to be done deliberately." But in practice, the techniques that work best for cat pictures work well for lots of other things as well, and a hardcoded solution customized for cat pictures will actually tend to underperform.

Stuart_Armstrong:

I'm actually willing to believe that methods used for cat pictures might work for human theory of mind - if trained on that data (and this doesn't solve the problem of underdefined preferences).

Michaël Trazzi:

Having printed and read the full version, I found this ultra-simplified version a useful summary.

I'd be happy to read a (not-so-)simplified version (around 20-30 paragraphs).


This is an ultra-condensed version of the research agenda on synthesising human preferences (video version here):

In order to infer what a human wants from what they do, an AI needs to have a human theory of mind.
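
To see why, consider a minimal sketch (not taken from the agenda itself; the options, counts, and model names below are made-up assumptions): the same observed behaviour supports opposite conclusions about what the human values, depending on which human model the observer assumes.

```python
# Illustrative sketch only: the inferred preferences depend on the assumed
# human model ("theory of mind"), not just on the observed behaviour.
# The options, counts and model names below are made-up assumptions.

observed_choices = {"cake": 9, "salad": 1}  # the human picks cake 9 times out of 10

def infer_values(choices, assumed_model):
    """Return crude inferred values for each option under an assumed human model."""
    if assumed_model == "fully_rational":
        # A fully rational human reliably picks what they value most,
        # so the majority choice reveals the preference.
        best = max(choices, key=choices.get)
        return {opt: float(opt == best) for opt in choices}
    if assumed_model == "weak_willed":
        # Under this model the human prefers salad but usually gives in to
        # temptation, so the *minority* choice reveals the true preference.
        rarest = min(choices, key=choices.get)
        return {opt: float(opt == rarest) for opt in choices}
    raise ValueError(f"unknown model: {assumed_model}")

print(infer_values(observed_choices, "fully_rational"))  # -> values cake
print(infer_values(observed_choices, "weak_willed"))     # -> values salad
```

Same data, opposite inferred preferences; the difference lies entirely in the assumed theory of mind.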

Theory of mind is something that humans have instinctively and subconsciously, but that isn't easy to spell out explicitly; therefore, by Moravec's paradox, it will be very hard to implant it into an AI, and this needs to be done deliberately.

One way of defining theory of mind is to look at how humans internally model the value of various hypothetical actions and events (happening to themselves and to others).
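
As a rough illustration of the kind of data this points at (a sketch only; the structure and field names are assumptions, not the agenda's formalism), such internal judgements could be recorded as simple valuations of hypothetical situations:

```python
# Illustrative sketch: representing a human's internal valuations of
# hypothetical actions and events as simple records. Field names and
# example entries are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class HypotheticalJudgement:
    situation: str   # the imagined action or event
    subject: str     # whom it happens to: "self" or "other"
    value: float     # how good or bad it feels internally, from -1 to 1

partial_preferences = [
    HypotheticalJudgement("losing my job", "self", -0.8),
    HypotheticalJudgement("a friend losing their job", "other", -0.5),
    HypotheticalJudgement("finishing a difficult project", "self", 0.7),
]
```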

Finally, once we have a full theory of mind, we still need to deal, somehow, with the fact that humans have meta-preferences over their preferences, and that these preferences and meta-preferences are often contradictory, changeable, manipulable, and (more worryingly) underdefined in the exotic worlds that AIs could produce.

Any way of dealing with that fact will be contentious, but it's necessary to sketch out an explicit way of doing this, so it can be critiqued and improved.
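
As a toy illustration of what such an explicit sketch could look like (the weighting scheme and all numbers are simplifying assumptions, not the agenda's actual construction): weight each base preference by how strongly the meta-preferences endorse it, then renormalise.

```python
# Toy sketch of preference synthesis, under made-up assumptions: contradictory
# base preferences are weighted by meta-preference endorsement and combined.

base_preferences = {         # strength of each (possibly contradictory) urge
    "eat sugary food": 0.6,
    "stay healthy": 0.5,
}
meta_preferences = {         # how much the person endorses each urge, 0 to 1
    "eat sugary food": 0.2,  # "I wish I cared less about this"
    "stay healthy": 0.9,     # "I endorse this"
}

def synthesise(base, meta, default_endorsement=0.5):
    """Weight each base preference by its endorsement and renormalise."""
    weighted = {k: v * meta.get(k, default_endorsement) for k, v in base.items()}
    total = sum(weighted.values())
    return {k: v / total for k, v in weighted.items()}

print(synthesise(base_preferences, meta_preferences))
# -> "stay healthy" ends up dominating once meta-preferences are applied
```

Even in this tiny sketch, every choice (how endorsement is weighted, what default to use when a meta-preference is missing) is the kind of contentious decision that needs to be made explicit so it can be critiqued.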

A toy model for this research agenda can be found here.

Mentioned in
AI Alignment 2018-19 Review
The two-layer model of human values, and problems with synthesizing preferences
AI Safety Research Project Ideas
Learning human preferences: black-box, white-box, and structured white-box access
Reward functions and updating assumptions can hide a multitude of sins