AI ALIGNMENT FORUM
Holden Karnofsky

Edited by Paul Crowley, Mathieu Putz, Ryan Greenblatt, et al. last updated 4th Oct 2024

Holden Karnofsky is a co-founder of GiveWell, and a pioneer of effective altruism.

He was the CEO of Open Philanthropy, but left in April 2024 to become a Visiting Scholar at the Carnegie Endowment for International Peace, where he's working on international security risks from advances in artificial intelligence.

In May 2012, Karnofsky posted Thoughts on the Singularity Institute (SI), which became the most-upvoted article ever on LessWrong. It offered a detailed critique of what is now the Machine Intelligence Research Institute, and spawned a great deal of discussion.

MIRI staff posted two replies:

  • Eliezer Yudkowsky, Reply to Holden on 'Tool AI'
  • Luke Muehlhauser, Reply to Holden on The Singularity Institute

Paul Crowley ("ciphergoth") posted discussion articles for each point raised:

  • Objection 1: it seems to me that any AGI that was set to maximize a "Friendly" utility function would be extraordinarily dangerous.
  • Objection 2: SI appears to neglect the potentially important distinction between "tool" and "agent" AI.
  • Objection 3: SI's envisioned scenario is far more specific and conjunctive than it appears at first glance, and I believe this scenario to be highly unlikely.
  • Is SI the kind of organization we want to bet on?
  • Other objections to SI's views

Other discussion:

  • Phil Goetz, Holden's Objection 1: Friendliness is dangerous

Links and related pages

  • Holden Karnofsky's posts on LessWrong
  • GiveWell
  • Tool AI
  • Eliezer Yudkowsky
  • Luke Muehlhauser
  • Machine Intelligence Research Institute
  • Effective altruism