Future Fund Worldview Prize
• Applied to "Transformative AGI by 2043 is <1% likely" by Ted Sanders 10mo ago
• Applied to "The Control Problem: Unsolved or Unsolvable?" by Remmelt Ellen 11mo ago
• Applied to "Issues with uneven AI resource distribution" by User_Luke 1y ago
• Applied to "AGI is here, but nobody wants it. Why should we even care?" by MGow 1y ago
• Applied to "A Fallibilist Wordview" by Toni MUENDEL 1y ago
• Applied to "AGI Impossible due to Energy Constrains" by TheKlaus 1y ago
• Applied to "AI will change the world, but won’t take it over by playing “3-dimensional chess”." by Yafah Edelman 1y ago
• Applied to "How likely are malign priors over objectives? [aborted WIP]" by David Johnston 1y ago
• Applied to "Loss of control of AI is not a likely source of AI x-risk" by squek 1y ago
• Applied to "When can a mimic surprise you? Why generative models handle seemingly ill-posed problems" by Noosphere89 1y ago
• Applied to "Review of the Challenge" by SD Marlow 1y ago
• Applied to "Why do we post our AI safety plans on the Internet?" by Peter S. Park 1y ago
• Applied to "AI X-risk >35% mostly based on a recent peer-reviewed argument" by michaelcohen 1y ago
• Applied to "AI as a Civilizational Risk Part 6/6: What can be done" by PashaKamyshev 1y ago