Internal Alignment (Human)

Edited by Raemon and plex; last updated 13th Jan 2021

Internal Alignment is a broadly desirable state. By default, humans sometimes experience internal conflict. You might frame that as conflict between subagents or subprocesses within the person, or instead as a single agent making complicated decisions. The "internal alignment" hypothesis is that you can become much more productive, happier, and more fulfilled by bringing these parts of yourself into alignment with one another.

Posts tagged Internal Alignment (Human)
74 · The shard theory of human values · Quintin Pope, TurnTrout · 3y · 33 comments
31 · Announcing the Alignment of Complex Systems Research Group · Jan_Kulveit, technicalities · 3y · 11 comments