AI ALIGNMENT FORUM
RobinHanson

Comments
What can the principal-agent literature tell us about AI risk?
RobinHanson · 6y · 20

We have lots of models that are useful even when the conclusions follow pretty directly, such as supply and demand. The question is whether such models are useful, not whether they are simple.

What can the principal-agent literature tell us about AI risk?
RobinHanson · 6y · 20

The agency literature exists to model real agency relations in the world, and those real relations no doubt contain plenty of "unawareness". If models without unawareness were failing to capture and explain a big fraction of real agency problems, there would be plenty of scope for people to fill that gap with models that include it. The claim that this couldn't work because such models are inherently limited seems arbitrary and wrong to me. So one must either claim that AI-related unawareness is of a very different type or scale from ordinary human cases in our world today, or implicitly claim that unawareness modeling would in fact be a contribution to the agency literature. It seems to me that a mild burden of proof sits on advocates of this latter position to actually produce such contributions.

Tell Your Rationalist Origin Story
RobinHanson · 17y · 20

Are you sure "rationalist" is a good label here? It suggests the claim that you are rational, or at least more rational than most, and "rational" has many associations that go beyond truth-seeking.
