AI ALIGNMENT FORUM

agazi

Ex-founder; MIT physics + CS.

Now an engineer and pursuing independent research. Please reach out to chat about anything!

https://x.com/mikaelnida

Comments

Instruction-following AGI is easier and more likely than value aligned AGI
agazi · 1y · 10

I think we can already see the early innings of this: large API providers are figuring out how to calibrate post-training techniques (RLHF, constitutional AI) between economic usefulness and the "mean" of Western morals. It's tough to go against economic incentives.
