Holden Karnofsky

Holden Karnofsky is a co-founder of the effective altruism movement and a co-founder of GiveWell.

He served as CEO of Open Philanthropy but is currently on a leave of absence to work on AI safety.

In May 2012, Karnofsky posted Thoughts on the Singularity Institute (SI), which became the most-upvoted article ever on LessWrong. It offered a detailed critique of the organization now known as the Machine Intelligence Research Institute (MIRI) and spawned a great deal of discussion.

MIRI staff posted two replies:

  • Eliezer Yudkowsky, Reply to Holden on 'Tool AI'
  • Luke Muehlhauser, Reply to Holden on The Singularity Institute

Paul Crowley ("ciphergoth") posted discussion articles for each point raised:

  • Objection 1: it seems to me that any AGI that was set to maximize a "Friendly" utility function would be extraordinarily dangerous.
  • Objection 2: SI appears to neglect the potentially important distinction between "tool" and "agent" AI.
  • Objection 3: SI's envisioned scenario is far more specific and conjunctive than it appears at first glance, and I believe this scenario to be highly unlikely.
  • Is SI the kind of organization we want to bet on?
  • Other objections to SI's views

Other discussion:

  • Phil Goetz, Holden's Objection 1: Friendliness is dangerous

Links and related pages

  • Holden Karnofsky's posts on LessWrong
  • Tool AI