AI ALIGNMENT FORUM
Yaakov T
Posts
Sorted by New
Wiki Contributions
Transformative AI · 22d (+34/-19)
Transformative AI · 22d (+178/-169)
Language Models · 4mo (+1122/-630)
Corrigibility · 6mo (+761/-10)
Comments
[Intro to brain-like-AGI safety] 9. Takeaways from neuro 2/2: On AGI motivation
Yaakov T · 5mo · 2
But in that kind of situation, wouldn't those people also pick A over B for the same reason?