AI ALIGNMENT FORUM
Machine Intelligence Research Institute (MIRI)
Written by Ruben Bloom, Grognor, Oliver Habryka, Kaj Sotala, et al. last updated 30th Dec 2024

The Machine Intelligence Research Institute (MIRI), formerly known as the Singularity Institute for Artificial Intelligence (not to be confused with Singularity University), is a non-profit research organization devoted to reducing existential risk from unfriendly artificial intelligence and to understanding problems related to friendly artificial intelligence. Eliezer Yudkowsky was one of its early founders and continues to work there as a Research Fellow. MIRI created and currently owns the LessWrong domain.

See Also

  • Technological singularity
  • Existential risk
  • Intelligence explosion
  • Friendly artificial intelligence
Posts tagged Machine Intelligence Research Institute (MIRI)
  • The Rocket Alignment Problem — Eliezer Yudkowsky (7y, 32 karma, 5 comments)
  • What I’ll be doing at MIRI — Evan Hubinger (6y, 38 karma, 3 comments)
  • On motivations for MIRI's highly reliable agent design research — Jessica Taylor (8y, 19 karma, 10 comments)
  • My current take on the Paul-MIRI disagreement on alignability of messy AI — Jessica Taylor (8y, 12 karma, 42 comments)
  • 2018 AI Alignment Literature Review and Charity Comparison — Larks (7y, 40 karma, 4 comments)
  • 2019 AI Alignment Literature Review and Charity Comparison — Larks (6y, 39 karma, 8 comments)
  • An Untrollable Mathematician Illustrated — Abram Demski (7y, 37 karma, 2 comments)
  • Challenges with Breaking into MIRI-Style Research — Chris_Leong (3y, 20 karma, 15 comments)
  • Why I am not currently working on the AAMLS agenda — Jessica Taylor (8y, 12 karma, 2 comments)