I have a framing of AI risk scenarios that I think is more general and more powerful than most of what's available online, and that might be a good frame before going into examples. It's not posted yet (I'm finishing the sequence now), but I could send some of it to you if you're interested. ;)
A professor I'm friendly with has been teaching a course on AI ethics this semester, and he asked me if I could come give a guest lecture on "AI apocalypse" scenarios. What should I include in the lecture?
Details:
If anybody has relevant material I could use, such as slides or activities, that would be great! Also, if anybody wants to help develop the material for this class, please message me (preferably at my work email - Aryeh.Englander@jhuapl.edu).
As a bonus, I expect that material for a class of this sort may turn out to be useful to plenty of other people on this and related forums, either for their own learning or as a resource when presenting the topic to others.
[Note: I am posting this here with permission from the professor.]