Instrumental deception and manipulation in LLMs - a case study