academic foundations in cog sci, currently interested in AI alignment and interpretability work (and lots of other things!). i'm newish to this space, so i welcome constructive feedback (and suggested resources) when i'm wrong or missing something :)
Even if AGI has Approval Rewards (e.g., from LLMs, or somehow in RL/agentic scenarios), Approval Rewards only work if the agent actually values the approver's approval. That valuation may be more or less explicit, but there needs to be some belief that the approval matters, such that the agent's behavior actually aligns with approval-seeking / disapproval-minimizing outcomes.
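To make that concrete, here's a minimal toy sketch (my own illustration, not an actual training setup; `effective_reward`, `approval_weight`, and the example actions are all hypothetical): if the weight the agent places on the approval signal is ~0, the signal exists but never changes which action it prefers.

```python
# Toy illustration: approval only shapes behavior if the agent weights it.

def effective_reward(task_reward: float, approval_signal: float,
                     approval_weight: float) -> float:
    """Combine task reward with an approval signal the agent may or may not value."""
    return task_reward + approval_weight * approval_signal

# Two candidate actions: one with high task reward but disapproval,
# one approved but with lower task reward.
actions = {
    "exploit_loophole": {"task_reward": 10.0, "approval_signal": -5.0},
    "follow_intent":    {"task_reward": 7.0,  "approval_signal": +5.0},
}

for w in (0.0, 1.0):  # agent that ignores approval vs. agent that values it
    best = max(actions, key=lambda a: effective_reward(
        actions[a]["task_reward"], actions[a]["approval_signal"], w))
    print(f"approval_weight={w}: agent prefers '{best}'")
```

With `approval_weight=0.0` the agent picks the disapproved action; with `approval_weight=1.0` it picks the approved one. The approval signal itself is identical in both cases; only the agent's valuation of it differs.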
As a toy analogy: many animals have preferences about food, territory, mates, etc., but humans don't really treat those signals as serious guides to our own behavior. Not because the signals aren't real, but because we don't see birds, for example, as being part of our social systems in ways that require us...