“Reframing Superintelligence” + LLMs + 4 years
Background

In January 2019, FHI published Reframing Superintelligence,[1] a book-length technical report on prospects for advanced AI. OpenAI published the first paper on GPT-2 a month later. Advances since then have been strange and rapid, and I’d like to revisit the report in light of what we have learned.

In brief, I think that the abstract conceptual model of AI development and organization proposed in Reframing fits today’s reality quite well, even though LLM-based technologies have diverged far from anything I’d anticipated.

Below, you’ll find an abstract of the report, followed by a series of section-level mini-summaries[2] with update comments. I’ve omitted sections that are either outside the intended focus of this article or too broad and forward-looking to summarize.

A significant impetus behind “Reframing Superintelligence” was to challenge a prevailing notion of advanced AI (equating superintelligent-level AI with a superintelligent agent) that has, in my view, been assigned disproportionate weight and has skewed the balance of alignment research. The report offers an alternative framework that includes both risks and opportunities overlooked by agent-centric perspectives.

Note that this reframing is additive rather than subtractive: My intention is not to disregard agent-focused concerns; their importance is assumed, not debated.[3] Indeed, the AI services model anticipates a world in which dangerous superintelligent agents could emerge with relative ease, and perhaps unavoidably. My aim is to broaden the working ontology of the community to include systems in which superintelligent-level capabilities can take a more accessible, transparent, and manageable form: open agencies rather than unitary agents. This framework highlights different risks and expands the solution-space for familiar problems.

Finally, when I refer to “LLMs”, please read this as encompassing multimodal models (GPT-4!), with considerations that carry over to other advanced AI systems.