It turns out that Anthropic accidentally trained against the chain of thought of Claude Mythos Preview in around 8% of training episodes. This is at least the second independent incident in which Anthropic accidentally exposed their model's CoT to the oversight signal.
In more powerful systems, this kind of failure would jeopardize our ability to safely navigate the intelligence explosion. It's crucial to build good processes to ensure development is executed according to plan, especially as human oversight is spread increasingly thin over growing amounts of potentially untrusted and sloppy AI labor.
This particular failure is also directly harmful, because it significantly reduces our confidence that the model's reasoning trace is monitorable (reflective of the AI's intent to misbehave).[1]
I'm grateful that Anthropic has transparently reported on this issue as much as they...
I think it's quite likely that the CoT was relevant to what the reward model was looking for. For one, it's natural to include some aspects of the model spec in the reward model's rubric when judging a trajectory, and the model spec would naturally penalize stated intent to misbehave in the CoT. And even if the reward model were solely looking for task completion, it would be natural for it to penalize stated intent to cheat on the task or to do something else unintended.
I agree it reduces the chance that the without-inoculation-prompt model is a reward seeker. But it seems likely to me that it replaces that probability mass with much more saint-ish behavior (or a kludge of motivations) rather than scheming. What would push towards scheming in these worlds, where you have so few inductive biases towards scheming that you would have gotten a reward seeker absent the inoculation prompt?
The story I'm imagining is that the cognition your AI is doing in training with the inoculation prompt ended up being extremely close to scheming, so it isn'...
Note: you are ineligible to complete this challenge if you’ve studied Ancient or Modern Greek, or if you natively speak Modern Greek, or if for other reasons you know what mistakes I’m claiming Opus 4.6 makes. If you’re ineligible, please don’t help other people complete the challenge.
I have recently started using Claude Opus 4.6 to study Ancient Greek. Specifically, I initially used it to grade problem sets at the end of the textbook I've been using, but then I got worried about it being sycophantic towards my answers, so I started having it just write out the answers itself.
I recently gave it this prompt, from the end of Chapter 3 of my textbook:
...Can you write out the answers to this Ancient Greek fill-in-the-blanks exercise so
I spent 2 hours on this the other day. Today Daniel told me the answer, and after ~15 more minutes of effort I still don't know how to construct a scaffold that would generalize to other problems like this. I might give more information after the answer is revealed here.
In this post, I'll go through some of my best guesses for the current situation in AI as of the start of April 2026. You can think of this as a scenario forecast, but for the present (which is already uncertain!) rather than the future. I will generally state my best guesses without argument and without explaining my level of confidence: some of these claims are highly speculative while others are better grounded, and some will certainly be wrong. I tried to make clear which claims are relatively speculative by saying something like "I guess", "I expect", etc. (but I may have missed some).
You can think of this post more as a list of my current views than as a structured post with a thesis, but I think it...
The amount of compute determines the optimal number of active params, while the available systems determine how the number of total params maps to inference efficiency. Chinese models had to make strange choices on both counts: not having enough compute to train models with many active params, they then had to compensate with so many total params that the available systems couldn't work with them efficiently, giving up entirely on fitting them in a few scale-up worlds. As a result, you get things like DeepSeek-V3 with 37B active and 671B total...
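To make the active-vs-total tradeoff concrete, here is a small sketch using the standard rough approximations that per-token forward-pass FLOPs scale with *active* params while weight memory scales with *total* params. The helper `moe_profile` and the 1-byte-per-param (FP8-like) default are my own illustrative assumptions, not anything from the comment:

```python
# Sketch: rough per-token compute and weight-memory figures for an MoE model,
# under the standard approximations FLOPs/token ~ 2 * N_active and
# weight memory ~ N_total * bytes_per_param.

def moe_profile(active_params: float, total_params: float,
                bytes_per_param: int = 1):
    """Return (FLOPs per token, weight memory in bytes, active fraction)."""
    flops_per_token = 2 * active_params            # dense-FLOPs approximation
    weight_memory = total_params * bytes_per_param  # e.g. FP8 -> 1 byte/param
    return flops_per_token, weight_memory, active_params / total_params

# DeepSeek-V3's figures from the comment: 37B active, 671B total.
flops, mem, frac = moe_profile(37e9, 671e9)
print(f"{flops:.1e} FLOPs/token, {mem / 1e9:.0f} GB of weights, "
      f"{frac:.1%} of params active per token")
```

The point the numbers make: the model is cheap to *run* per token (only ~5.5% of params active) but still has to hold all 671B params in memory, which is exactly the serving-system constraint the comment describes.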
The above treatment of "CDT precommitment games" is problematic: the concept made sense in the context of "FDT to CDT translation", but it's not clear what it's doing here (i.e., what is the first argument?). Here is a better treatment.
Definition: A CDT decision problem is the following data. We have a set of variables $V$, and for each $v \in V$ we have its range $\mathcal{R}_v$, its set of parents $\mathrm{Pa}(v) \subseteq V$ and its kernel $K_v \colon \prod_{u \in \mathrm{Pa}(v)} \mathcal{R}_u \to \Delta(\mathcal{R}_v)$.
The parent relation must induce an acyclic directed graph. We also have a selected subset of decision variables $D \subseteq V$ and a selected subset of outcome variables $O \subseteq V$ s.t. $D \cap O = \varnothing$. For each ...
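To make the definition concrete, here is a minimal instance in standard causal-graph notation (the specific symbol choices $V$, $\mathcal{R}$, $\mathrm{Pa}$, $K$, $\Delta$ are my own convention for this sketch): one decision variable feeding one outcome variable, with a stochastic kernel on the outcome.

```latex
% A two-node CDT decision problem: decision d, outcome o, with d -> o.
\begin{align*}
  V &= \{d, o\}, \qquad D = \{d\}, \qquad O = \{o\},\\
  \mathcal{R}_d &= \{a_1, a_2\}, \qquad \mathcal{R}_o = \{0, 1\},\\
  \mathrm{Pa}(d) &= \varnothing, \qquad \mathrm{Pa}(o) = \{d\},\\
  K_o(o = 1 \mid d) &=
    \begin{cases}
      0.9 & \text{if } d = a_1,\\
      0.1 & \text{if } d = a_2.
    \end{cases}
\end{align*}
```

The parent relation here ($d \to o$) is trivially acyclic, and $D$ and $O$ are disjoint, so this satisfies the conditions of the definition.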