| Comment Author | Post | Deleted By User | Deleted Date | Deleted Public | Reason |
|---|---|---|---|---|---|
| | AGI safety from first principles: Superintelligence | Henk Tillman | | false | |
| | “The Era of Experience” has an unsolved technical alignment problem | cousin_it | | true | Comment deleted by its author. |
| | Towards a scale-free theory of intelligent agency | johnswentworth | | true | |
| | cousin_it's Shortform | cousin_it | | true | Comment deleted by its author. |
| | Self-fulfilling misalignment data might be poisoning our AI models | cousin_it | | true | Comment deleted by its author. |
| | How might we safely pass the buck to AI? | joshc | | true | |
| | How AI Takeover Might Happen in 2 Years | habryka | | false | |
| | Gradual Disempowerment, Shell Games and Flinches | cousin_it | | true | Comment deleted by its author. |
| | Thane Ruthenis's Shortform | Thane Ruthenis | | false | |
| | Views on when AGI comes and on strategy to reduce existential risk | TsviBT | | true | |

| Author | Post | Banned Users |
|---|---|---|
| | Asymptotically Unambitious AGI | |

| User ID | Banned From Frontpage | Banned From Personal Posts |
|---|---|---|

| User | Ended At | Type |
|---|---|---|
| | | allPosts |
| | | allPosts |
| | | allComments |
| | | allComments |
| | | allPosts |