AI ALIGNMENT FORUM

Unfriendly Artificial Intelligence

Edited by Vladimir Nesov, Miranda Dixon-Luinenburg, et al. last updated 22nd Sep 2020

An Unfriendly artificial intelligence (or UFAI) is an artificial general intelligence capable of causing great harm to humanity, and having goals that make it useful for the AI to do so. The AI's goals don't need to be antagonistic to humanity's goals for it to be Unfriendly; there are strong reasons to expect that almost any powerful AGI not explicitly programmed to be benevolent to humans is lethal. A paperclip maximizer is often imagined as an illustrative example of an Unfriendly AI indifferent to humanity. An AGI specifically designed to have a positive effect on humanity is called a Friendly AI.

See also

  • Friendly AI
  • Paperclip maximizer
  • Existential risk
  • Basic AI drives
  • Magical categories
  • Mind design space
  • Really powerful optimization process

References

  • Eliezer S. Yudkowsky (2008). "Artificial Intelligence as a Positive and a Negative Factor in Global Risk". In Global Catastrophic Risks. Oxford University Press.
  • Stephen M. Omohundro (2008). "The Basic AI Drives". In Frontiers in Artificial Intelligence and Applications. IOS Press.