Using (Uninterpretable) LLMs to Generate Interpretable AI Code