FYI: I would find it useful if you defined 'epistemic competitiveness' somewhere and linked to that definition when using the term.
I guess I feel like we're in a domain where some people were like "we have concretely-specifiable tasks, intelligence is good, what if we figured out how to create artificial intelligence to do those tasks", which is the sort of thing that someone trying to do good for the world would do, but which had some serious chance of being very bad for the world. So in that domain, it seems to me that we should keep our eyes out for things that might be really bad for the world, because all the things in that domain are kind of similar.
That being said, I agree that the possibility that the work doesn't matter is more important once you're making a thoughtful effort to do good. But I see much more effort and thought going into addressing that part, such that the occasional nudge to consider negative impacts seems appropriate to me.
What's the largest cardinal whose existence you feel comfortable assuming as an axiom?
How many hours per week should the average AI alignment researcher spend on improving their rationality? How should they spend those hours?
What's the optimal ratio of researchers to support staff in an AI alignment research organization?
What's your favourite mathematical object? What's your least favourite mathematical object?
Should more AI alignment researchers run AMAs?
Should more AI alignment research be communicated in book form? Relatedly, what medium of research communication is most under-utilized by the AI alignment community?
That's not the AXRP question I'm too polite to ask.
Should a marginal CHAI PhD graduate who is dispositionally indifferent between the two options try to become a professor or do research outside of universities?