Nanny AI

Edited by TerminalAwareness, Miranda Dixon-Luinenburg, et al. Last updated 17th Sep 2020

Nanny AI is a form of Artificial General Intelligence proposed by Ben Goertzel to delay the Singularity while protecting and nurturing humanity. It was proposed as a means of reducing the risks associated with the Singularity by delaying it until a predetermined time has passed, until predetermined conditions are met, or indefinitely. Delaying the Singularity would allow time for further research and reflection about our values, and time to build a Friendly Artificial Intelligence.

Ben Goertzel has suggested a number of preliminary components for building a Nanny AI (a toy sketch in code follows the list):

  • A mildly superhuman Artificial General Intelligence
  • A global surveillance network tied to the Nanny AI
  • Final control of all robots given to the Nanny AI
  • To be reluctant to change its goals, increase its intelligence, or act against humanity's extrapolated desires
  • To be able to reinterpret its goals at human prompting
  • To prevent any technological development that would hinder it
  • To yield control to another AI at a predetermined time
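
To make the shape of the proposal concrete, here is a minimal, purely illustrative Python sketch of how these components might be expressed as a policy object. Every class, field, and method name is hypothetical; Goertzel's papers describe the idea only at the conceptual level, not as an implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class NannyAIPolicy:
    """Toy encoding of the proposed Nanny AI components (all names hypothetical)."""
    handover_time: datetime               # predetermined time to yield control to a successor AI
    surveillance_feeds: list = field(default_factory=list)  # stand-in for the global surveillance network
    controls_all_robots: bool = True      # final control of all robots rests with the Nanny AI

    def may_modify_goals(self, human_prompted: bool) -> bool:
        # Reluctant to change its own goals or increase its intelligence;
        # goals may be reinterpreted only at explicit human prompting.
        return human_prompted

    def should_block_development(self, hinders_nanny: bool) -> bool:
        # Prevent any technological development that would hinder the Nanny AI.
        return hinders_nanny

    def should_yield_control(self, now: datetime) -> bool:
        # Yield control to another AI once the predetermined time has passed.
        return now >= self.handover_time

# Example: a policy that hands off control at the start of 2100.
policy = NannyAIPolicy(handover_time=datetime(2100, 1, 1))
assert not policy.may_modify_goals(human_prompted=False)
assert not policy.should_yield_control(datetime(2050, 6, 1))
```

The sketch only encodes the checklist as data and predicates; it says nothing about how such constraints could actually be enforced on a mildly superhuman system, which is the hard part the surrounding text discusses.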

In a paper by Luke Muehlhauser and Anna Salamon published in The Singularity Hypothesis, it was suggested that programming a safe Nanny AI would require solving most, if not all, of the problems that must be solved in programming a Friendly Artificial Intelligence. Ben Goertzel suggests that a Nanny AI may be a necessary evil to prevent disasters arising from developing technologies, though he acknowledges that it poses risks of its own.

References

  • Should humanity build a global AI nanny to delay the singularity until it’s better understood? by Ben Goertzel
  • Mitigating the Risks of Artificial Superintelligence
  • Does Humanity Need an AI Nanny? by Ben Goertzel
  • Muehlhauser, Luke; Salamon, Anna (2012). "Intelligence Explosion: Evidence and Import". In Eden, Amnon; Søraker, Johnny; Moor, James H.; et al. (eds.). The Singularity Hypothesis: A Scientific and Philosophical Assessment. Berlin: Springer.