[ Question ]

Why not tool AI?

by smithee · 1 min read · 19th Jan 2019 · 1 comment


An extremely basic question that, after months of engaging with AI safety literature, I'm surprised to realize I don't fully understand: why not tool AI?

AI safety scenarios seem to conceive of AI as an autonomous agent. Is that because of the current machine learning paradigm, in which we set the AI's goals but don't specify the steps to achieve them? Is this paradigm the entire reason AI safety is an issue?

If so, is there a reason why advanced AI would need an agent-like, utility-function-based setup? Is it just too cumbersome to give step-by-step instructions for high-level tasks?

1 Answer

You mention having looked through the literature; in case you missed any, here's what I think of as the standard resources on this topic.

All are well worth reading.