This is probably not the answer you are looking for, but as you are considering putting a lot of work into this...
> Does anyone know if this has been done? If not, I might try to make it.
It probably has been done, but it depends on what you mean by 'strongest arguments'.
Does 'strongest' mean that the argument has a lot of rhetorical power, so that it will convince people that alignment failure is more plausible than it actually is? Or does 'strongest' mean that it gives the audience the best possible information about the likelihood of various levels of misalignment, where these levels range from 'annoying but can be fixed' to 'kills everybody and converts all matter in its light cone to paperclips'?
Also, the strongest argument when you address an audience of type A, say policymakers, may not be the strongest argument for an audience of type B, say ML researchers.
My main message here, I guess, is that many distilled collections of arguments already exist, even book-length ones like *Superintelligence*, *Human Compatible*, and *The Alignment Problem*. If you are thinking about adding to this mountain of existing work, you need to ask yourself carefully who your target audience is, and what you want to convince them of.