There seems to be some amount of consensus that people working on AI safety (at least within fairly mainstream ML / AI paradigms) shouldn't worry much about the effects of their projects on AI capabilities. New researchers might even try to push capabilities research forward, to build career capital. The best argument for this, IMO, is that someone else is probably going to do it if you don't, likely within the next 6 months (given the current pace of research).
I mostly agree with this view, but I do still think a bit about the effects of my research on capabilities, and I think others should as well. Concern about advancing capabilities has, in the past, moved me away from pursuing ambitious capabilities projects which might have been very good for my career had they paid off; but I always saw someone else do the thing I was considering soon afterwards anyway...
But as far as I know, nobody has tried to evaluate this question thoroughly and systematically. This is concerning, because it seems like current attitudes could plausibly be a result of motivated reasoning (i.e. "I want to keep doing my research, and probably would do so even if I saw a compelling case against it") and groupthink ("nobody else is worrying about this"). I'm not sure it's really tractable, but I think it could be worth ~1-4 people spending a bit of time (possibly up to ~6-24 months, if it ends up looking tractable after some initial thought/investigation) on trying to do a fairly comprehensive treatment of this question.
The main deliverable could be a set of practical guidelines for AI safety researchers.
The project could intersect with current "dual-use" considerations (e.g. RE GPT-2).
(Also worth mentioning:) I know MIRI now has secret research, and I think they have a reasonable case for that, since they aren't working in the mainstream paradigms. I do think it would be good for them to have a "hit publication" within the ML community, and it might be worth pushing out some out-of-the-box ideas which might advance capabilities in order to get one. The reason is that MIRI has very little credibility, or even name recognition, in the ML community ATM, and I think it would be a big deal for the perception of AI safety concerns within the ML community if that changed. And I think the ML community's perceptions are important, because its attitude seems of critical importance for getting good x-risk reduction policies in place (IIRC, I talked to someone at MIRI who disagreed with that perspective).
The idea to write this post came out of discussion with Joe Collman.
I am a bit surprised to see you begin this post by saying there seems to be a consensus that people shouldn't worry about the capabilities consequences of their work; but then, I come from the MIRI-influenced crowd. I agree that it would be good to have a lot more clarity on how to think about this.
I agree it could be somewhat good for MIRI to have a hit ML publication, particularly if it was something unlikely to shift progress significantly. I could imagine a universe where this happened if MIRI happened upon a very interesting safety advance, the way adversarial examples were this big new thing slightly outside the usual ML way of doing business (i.e., not achieving high scores on a task with some improved technique). But it seems fairly unlikely to be worth it to try to play the usual ML game at the level of top ML groups simply for the sake of prestige, because it is likely too hard to gain prestige that way with so many others trying. It seems better in spirit to gain credibility by doing what MIRI does best and getting recognition for what's good (of the open research). I suspect we have some deep disagreements about background models.
I think the best way to reach ML people in the long run is not through credibility, but through good arguments presented well. Let me clarify: credibility/prestige definitely play a huge role in what the bulk of people think. But the credibility system is good enough that the top credible people are really pretty smart, so to an extent they can be swayed by good arguments presented well. This case can definitely be overstated, and I feel like I'm presenting a picture which will rightly be criticised as over-optimistic. But I think there are some success stories, and it's the honest leverage path (in contrast to fighting for prestige in a system in which lots of people are similarly doing so).
Anyway, I've hardly said anything about your main point. I don't know how to think about it, and I wish I did. I usually try to think about differential progress and then fail, and fall back on an assessment of how surprised I'd be if something led to big AI progress, and am cautious if that seems within the realm of possibility.
I do think this is an overly optimistic picture. The amount of traction an argument gets seems to be something like a product of how good the argument is, how credible those making the argument are, and how easy it is to process the argument.
Also, regarding this:
But the credibility system is good enough that the top credible people are really pretty smart, so to an extent can be swayed by good arguments presented well.
It's not just intelligence that determines whether people will be swayed; I think other factors (like "rationality", "open-mindedness", and other personality traits) play a very big role.