Thanks, this is a great thing to be thinking about and a good list of ideas!
Do other subjects come to mind?
Public speaking skills, persuasion skills, debate skills, etc.
Practice no-cost-too-large productive periods
I like this idea. At AI Impacts we were discussing something similar: having "fire drills" where we spend a week (or even just a day) pretending that a certain scenario has happened, e.g. "DeepMind just announced they have a Turing-test-passing system and will demo it a week from now; we've got two journalists asking us for interviews and need to prep for the emergency meeting with the AI safety community tonight at 5." We never got around to testing out such a drill, but I think variants on this idea are worth exploring. Inspired by what you said, perhaps we could also have "snap drills" where we suddenly take our goals for the next two months, imagine that they need to be accomplished in a week instead, and see how much we can get done. (Additionally, drills like these seem like they would have bonus effects on morale, teamwork, etc.)
I don’t know what is entailed in cultivating that virtue. Perhaps meditation? Maybe testing oneself at literal risk to one’s life?
This virtue is extremely important to militaries. Does any military use meditation as part of its training? I would guess that the training given to medics and officers (soldiers for whom clear thinking is especially important) might have some relevant lessons. Then again, maybe the military deals with this primarily by selecting the right sort of people rather than taking arbitrary people and training them. If so, perhaps we should look into applying similar selection methods in our own organizations to identify people to put in charge when the time comes.
Any more ideas?
In this post I discuss some:
Perhaps it would be good to have an Official List of all the AI safety strategies, so that whatever plan people propose for making AI go well can be compared to the list. (See this prototype list.)
Perhaps it would be good to have an Official List of all the AI safety problems, so that whatever rationale people give for why a given AI is safe can be checked against the list, e.g. "OK, so how does it solve outer alignment? What about mesa-optimizers? What about the malignity of the universal prior? I see here that your design involves X; according to the Official List, that puts it at risk of developing problems Y and Z..." (See this prototype list.)
Perhaps it would be good to have various important concepts and arguments rewritten with an audience of skeptical and impatient AI researchers in mind, rather than the current audience of friends and LessWrong readers.
Thinking afresh, here's another idea: I have a sketch of a blog post titled "What Failure Feels Like." The idea is to portray a scenario of doom in general, abstract terms (like Paul's post does, as opposed to writing a specific, detailed story) but with a focus on how it feels to us AI-risk-reducers, rather than focusing on what the world looks like in general or what's going on inside the AIs. I decided it would be depressing and not valuable to write. However, maybe it would be valuable as a thing people could read to help emotionally prepare/steel themselves for the time when they "are confronted with the stark reality of how doomed we are." IDK.
I guess overall my favorite idea is to just periodically spend time thinking about what you'd do if you found out that takeoff was happening soon. E.g. "DeepMind announces a Turing-test-passing system," or "We learn of a convincing roadmap to AGI involving only 3 OOMs more compute," or "China unveils a project to spend +7 OOMs on a single training run by 2030, with lesser training runs along the way." I think the exercise of thinking about near-term scenarios and then imagining what we'd do in response will be beneficial even on long timelines, and certainly super beneficial on short timelines (even if, as is likely, none of the scenarios we imagine come to pass).