Some people do work related to AI alignment without being motivated to solve AI alignment. For example, they work on adversarial robustness, on understanding how to do science mechanically, or on advancing paradigms that are more interpretable than modern ML. These people might be interested in the science for its own sake, or work on it for some other personal reason. Even when their work is useful for AI alignment, they will probably do a worse job than if they were actually trying to advance AI alignment.
This basic idea was mentioned by Buck in a talk.
The following is a list of reasons why somebody who tries to solve alignment directly would be better at solving alignment than somebody doing merely related work (though phrased like this, it sounds obvious):
There are probably many more points I have not thought of. How much you want to solve alignment, compared to other things, is a spectrum. What you care about can also drift naturally: when you work on something for a long time, you get attached to the work itself. That's worth keeping in mind.
It's interesting to think about the difference between trying to solve alignment and merely doing related work. It can help us notice when we fall into this trap ourselves. Getting clear on the difference might also help us do the good things (e.g. the things in the list above) even more.