Self-immolation is a hypothetical act by a leading AI (capabilities) lab: deliberate self-destruction, including the destruction of all resources relevant to furthering progress toward AGI (or, more broadly, toward extremely dangerous capabilities). It would also signal to the world that existential/catastrophic risk from AI is taken seriously by one of the leading AI capabilities actors.
A weaker version would involve a credibly signaled and faithfully executed pivot away from AGI progress towards safer, narrower, bounded AI systems (see Tool AI).
The idea has been independently proposed at least twice:
The name was suggested by Ryan Greenblatt in “AI companies are unlikely to make high-assurance safety cases if timelines are short”.