AI Alignment Forum: Tags

Future Fund Worldview Prize

• Applied to Issues with uneven AI resource distribution by User_Luke 3mo ago
• Applied to AGI is here, but nobody wants it. Why should we even care? by MGow 3mo ago
• Applied to A Fallibilist Wordview by Toni MUENDEL 4mo ago
• Applied to AGI Impossible due to Energy Constrains by TheKlaus 4mo ago
• Applied to AI will change the world, but won’t take it over by playing “3-dimensional chess”. by Yafah Edelman 4mo ago
• Applied to How likely are malign priors over objectives? [aborted WIP] by David Johnston 5mo ago
• Applied to Loss of control of AI is not a likely source of AI x-risk by squek 5mo ago
• Applied to When can a mimic surprise you? Why generative models handle seemingly ill-posed problems by Noosphere89 5mo ago
• Applied to Review of the Challenge by SD Marlow 5mo ago
• Applied to Why do we post our AI safety plans on the Internet? by Peter S. Park 5mo ago
• Applied to AI X-risk >35% mostly based on a recent peer-reviewed argument by michaelcohen 5mo ago
• Applied to AI as a Civilizational Risk Part 6/6: What can be done by PashaKamyshev 5mo ago
• Applied to AI as a Civilizational Risk Part 5/6: Relationship between C-risk and X-risk by PashaKamyshev 5mo ago
• Applied to AI as a Civilizational Risk Part 4/6: Bioweapons and Philosophy of Modification by PashaKamyshev 5mo ago