AI ALIGNMENT FORUM
AF

27
Wikitags
You are viewing version 1.5.0 of this page. Click here to view the latest version.

Squiggle Maximizer (formerly "Paperclip maximizer")

Edited by Multicore, Swimmer963 (Miranda Dixon-Luinenburg), JoshuaFox, et al. last updated 5th Apr 2023
You are viewing revision 1.5.0, last edited by Swimmer963 (Miranda Dixon-Luinenburg)

A Paperclip Maximizer is a hypothetical artificial intelligence whose utility function values something that humans would consider almost worthless, like maximizing the number of paperclips in the universe.

The idea of a paperclip maximizer was created to illustrate some ideas about AI risk:

-Orthogonality thesis: It's possible to have an AI with a high level of general intelligence which does not reach the same moral conclusions that humans do. Some people might intuitively think that something so smart should want something as "stupid" as paperclips, but there are possible minds with high intelligence that pursue any number of different goals.

-Instrumental convergence: The paperclip maximizer only cares about paperclips, but maximizing them implies taking control of all matter and energy within reach, as well as other goals like preventing itself from being shut off or having its goals changed. " The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else ."

A cousin to the paperclip maximizer is the AI that, being naively trained to value happiness, tiles the universe with tiny molecular smiley faces.

Paperclip maximizers have also been the subject of much humor on Less Wrong. A paperclip maximizer in a scenario is often given the name Clippy, in reference to the animated paperclip in older Microsoft Office software.

References

  • Nick Bostrom (2003). "Ethical Issues in Advanced Artificial Intelligence". Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence.
  • Stephen M. Omohundro (2008). "The Basic AI Drives". Frontiers in Artificial Intelligence and Applications (IOS Press). (PDF)
  • Eliezer Yudkowsky (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk". Global Catastrophic Risks, ed. Nick Bostrrom and Milan Cirkovic (Oxford University Press): 308-345. ([1])

Blog posts

  • Ethical Injunctions
  • The True Prisoner's Dilemma

See also

  • Paperclip maximizer on Arbital
  • Orthogonality thesis
  • Unfriendly AI
  • Mind design space, Magical categories, Complexity of value
  • Alien values, Anthropomorphism
  • Utilitronium
  • User:Clippy - a LessWrong contributor account that plays the role of a non-FOOMed paperclip maximiser trying to talk to humans. Wiki page and FAQ
  • Clippius Maximus - A facebook page which makes clippy-related memes and comments on current events from the perspective of clippy.
  • A clicker game based on the idea
Subscribe
Discussion
2
Subscribe
Discussion
2
Posts tagged Squiggle Maximizer (formerly "Paperclip maximizer")
89It Looks Like You're Trying To Take Over The World
gwern
4y
2
36A simple case for extreme inner misalignment
Richard_Ngo
1y
11
26Non-superintelligent paperclip maximizers are normal
jessicata
2y
3
Add Posts