Concept Poisoning: Probing LLMs without probes