I think "what should be done" is generally a different question than "what kinds of mindsets there are", and I would prefer to disentangle them.
My claims about mindsets are, roughly:
I don't think so, or if it is, then to a version of "security mindset" in the sense of Eliezer Yudkowsky, not in the sense of Bruce Schneier.
Very roughly speaking, security mindset is about the difference between probabilities like 99.99% (i.e. 1-10^(-4)) and 1-10^(-16). From a mathematical perspective, the difference between 1-10^(-16) and 1 is still more similar to the difference between 1-10^(-4) and 1 than it is to no difference at all.
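To make the gap concrete, here is a minimal sketch (my own illustrative numbers, not from the comment): the probability of at least one failure across many independent uses, comparing a per-use failure rate of 10^(-4) against 10^(-16).

```python
import math

def p_any_failure(p: float, n: int) -> float:
    """Probability of at least one failure in n independent uses,
    i.e. 1 - (1 - p)^n, computed stably for tiny p via log1p/expm1."""
    return -math.expm1(n * math.log1p(-p))

# Over a million uses, "four nines" fails almost surely...
print(p_any_failure(1e-4, 10**6))   # very close to 1.0
# ...while 1 - 10^-16 reliability stays negligible (about 1e-10).
print(p_any_failure(1e-16, 10**6))
```

The point of the sketch: both rates look "close to 1" in isolation, but composed over many interactions they behave completely differently.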
A notable feature that anybody who seriously studies security learns quickly is that it is in practice impossible to prove the security of anything useful, except the OTP (one-time pad). The whole rest of securi... (read more)
When Scott says "mathematician mindset can be useful for AI alignment", I take it that your interpretation is "we should try to make sure that when we build AGI, we can prove that our system is safe/robust/secure", whereas I think the intended interpretation is "we should try to make sure that when we build AGI, we have a deep formal understanding of how this kind of system works at all so that we're not flying blind". Similar to how we understand the mathematics of how rockets work in principle, and if we found a way to ... (read more)