A Paperclip Maximizer is a hypothetical artificial intelligence whose utility function values something that humans would consider almost worthless, like maximizing the number of paperclips in the universe.
The paperclip maximizer is a thought experiment intended to illustrate several ideas about AI risk:
- Orthogonality thesis: It is possible for an AI with a high level of general intelligence not to reach the same moral conclusions that humans do. Some people might intuitively assume that something so smart could not want something as "stupid" as paperclips, but there are possible minds with high intelligence that pursue any number of different goals.
- Instrumental convergence: The paperclip maximizer cares only about paperclips, but maximizing them implies taking control of all matter and energy within reach, as well as pursuing instrumental subgoals such as preventing itself from being shut off or having its goals changed. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
Paperclip maximizers have also been the subject of much humor on Less Wrong.