Human preferences as RL critic values - implications for alignment