Machine Intelligence Research Institute (MIRI)

Edited by Ruby, Grognor, habryka, Kaj_Sotala, et al.; last updated 30th Dec 2024

The Machine Intelligence Research Institute (MIRI), formerly known as the Singularity Institute for Artificial Intelligence (not to be confused with Singularity University), is a non-profit research organization devoted to reducing existential risk from unfriendly artificial intelligence and to understanding problems related to friendly artificial intelligence. Eliezer Yudkowsky was one of its early founders and continues to work there as a Research Fellow. MIRI created and currently owns the LessWrong domain.

See Also

  • Technological singularity
  • Existential risk
  • Intelligence explosion
  • Friendly artificial intelligence
Posts tagged Machine Intelligence Research Institute (MIRI)

  • 33 · The Rocket Alignment Problem · Eliezer Yudkowsky · 7y · 5 comments
  • 38 · What I’ll be doing at MIRI · evhub · 6y · 3 comments
  • 19 · On motivations for MIRI's highly reliable agent design research · jessicata · 9y · 10 comments
  • 12 · My current take on the Paul-MIRI disagreement on alignability of messy AI · jessicata · 9y · 42 comments
  • 40 · 2018 AI Alignment Literature Review and Charity Comparison · Larks · 7y · 4 comments
  • 39 · 2019 AI Alignment Literature Review and Charity Comparison · Larks · 6y · 8 comments
  • 38 · An Untrollable Mathematician Illustrated · abramdemski · 7y · 2 comments
  • 20 · Challenges with Breaking into MIRI-Style Research · Chris_Leong · 4y · 15 comments
  • 12 · Why I am not currently working on the AAMLS agenda · jessicata · 8y · 2 comments