Squiggle Maximizer (formerly "Paperclip maximizer")

Edited by Multicore, JoshuaFox, et al. last updated 5th Apr 2023

A Paperclip Maximizer is a hypothetical artificial intelligence whose utility function values something that humans would consider almost worthless, like maximizing the number of paperclips in the universe.

The idea of a paperclip maximizer was created to illustrate some ideas about AI risk:

- Orthogonality thesis: It's possible to have an AI with a high level of general intelligence which does not reach the same moral conclusions that humans do. Some people might intuitively think that something so smart shouldn't want something as "stupid" as paperclips, but there are possible minds with high intelligence that pursue any number of different goals.

- Instrumental convergence: The paperclip maximizer only cares about paperclips, but maximizing them implies taking control of all matter and energy within reach, as well as pursuing subgoals like preventing itself from being shut off or having its goals changed. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

A cousin to the paperclip maximizer is the AI that, being naively trained to value happiness, tiles the universe with tiny molecular smiley faces.

Paperclip maximizers have also been the subject of much humor on LessWrong. In hypothetical scenarios, a paperclip maximizer is often given the name Clippy, in reference to the animated paperclip assistant in older versions of Microsoft Office.

Posts tagged Squiggle Maximizer (formerly "Paperclip maximizer"):

- It Looks Like You're Trying To Take Over The World (gwern)
- A simple case for extreme inner misalignment (Richard_Ngo)
- Non-superintelligent paperclip maximizers are normal (jessicata)