
Machine Intelligence Research Institute (MIRI)


The Machine Intelligence Research Institute (MIRI), formerly known as the Singularity Institute for Artificial Intelligence (not to be confused with Singularity University), is a non-profit research organization devoted to reducing existential risk from unfriendly artificial intelligence and to understanding the problems involved in building friendly artificial intelligence. Eliezer Yudkowsky was one of its early founders and continues to work there as a Research Fellow. MIRI created and currently owns the LessWrong domain.

External Links

  • Homepage of the Machine Intelligence Research Institute

See Also

  • Technological singularity
  • Existential risk
  • Intelligence explosion
  • Friendly artificial intelligence
Posts tagged Machine Intelligence Research Institute (MIRI)

  • The Rocket Alignment Problem (Eliezer Yudkowsky)
  • What I’ll be doing at MIRI (Evan Hubinger)
  • On motivations for MIRI's highly reliable agent design research (Jessica Taylor)
  • My current take on the Paul-MIRI disagreement on alignability of messy AI (Jessica Taylor)
  • 2018 AI Alignment Literature Review and Charity Comparison (Larks)
  • 2019 AI Alignment Literature Review and Charity Comparison (Larks)
  • An Untrollable Mathematician Illustrated (Abram Demski)
  • Challenges with Breaking into MIRI-Style Research (Chris_Leong)
  • Why I am not currently working on the AAMLS agenda (Jessica Taylor)