I think the distinction I was trying to make in my previous post, ""Cars and Elephants": a handwavy argument/analogy against mechanistic interpretability," is basically the distinction between engineering and reverse engineering.
Reverse engineering is analogous to mechanistic interpretability; engineering is analogous to "well-founded AI" (to borrow Stuart Russell's term). So it seems worth exploring the pros and cons of these two approaches to understanding the x-safety-relevant properties of advanced AI systems. As a gross simplification, we could view the situation this way:
Under this view, these two approaches are working towards the same end from different starting points. A few more thoughts:
I know people will say that we don't actually understand how "well-founded AI" approaches work any better. I don't feel equipped to evaluate that claim beyond extremely simple cases, and I don't expect most readers are either.
At least if your goal is to get something like an AGI system whose safety we have justified confidence in. This is perhaps too ambitious a goal.
One angle I think is relevant here: a sufficiently complex "well-founded" AI system will still be fairly difficult to understand. For example, a large codebase where everything is properly commented and labeled might still have lots of unforeseen bugs and interactions the engineers didn't intend.
So I think that before you deploy a powerful "well-founded" AI system, you'll probably still need a kind of generalized reverse-engineering/interpretability skill to explain how the entire process works in various test cases.
I don't really buy this argument.