
Researchers in value alignment theory

Edited by Eliezer Yudkowsky, Paul Christiano, et al.; last updated 23 February 2016

This page lists researchers working in value alignment theory.

  • Eliezer Yudkowsky (founder, MIRI)
  • Nick Bostrom (founder, FHI)
  • Benya Fallenstein (MIRI; parametric polymorphism, the Procrastination Paradox, and numerous other developments in Vingean reflection)
  • Patrick LaVictoire (MIRI; modal agents)
  • Stuart Armstrong (FHI; utility indifference)
  • Paul Christiano (UC Berkeley; approval-directed agents; previously proposed a formalization of indirect normativity)
  • Stuart Russell (UC Berkeley; co-author of Artificial Intelligence: A Modern Approach; previously published on theories of reflective optimality; currently interested in inverse reinforcement learning)
  • Jessica Taylor (MIRI; reflective oracles)
  • Andrew Critch (MIRI)
  • Scott Garrabrant (MIRI; logical probabilities)
  • Nate Soares (previously a MIRI researcher, now Executive Director of MIRI)
Parents:
AI alignment
Children:
Nick Bostrom