Background

In January 2019, FHI published Reframing Superintelligence,[1] a book-length technical report on prospects for advanced AI. OpenAI published the first paper on GPT-2 a month later. Advances since then have been strange and rapid, and I’d like to revisit the report in light of what we have learned. In...
TL;DR: Strong problem-solving systems can be built from AI systems that play diverse roles, LLMs can readily play diverse roles in role architectures, and AI systems based on role architectures can be practical, safe, and effective in undertaking complex and consequential tasks.

Abstract

This article explores the practicalities and challenges...
Notes on AI for complex, consequential problems

Eric Drexler
Centre for the Governance of AI
University of Oxford

Introduction

This document argues for “open agencies” — not opaque, unitary agents — as the appropriate model for applying future AI capabilities to consequential tasks that call for combining human guidance with...
Epistemic status: The core ideas seem robust and stable after long reflection and many discussions. Many researchers identify AI safety with control of a monolithic, superintelligent AI system and, when questioned about multicomponent alternatives, argue that multiple superintelligent-level systems would inevitably collude and act as one. This view seems quite...
Attention conservation notice: This discussion is intended for readers with an interest in prospects for knowledge-rich intelligent systems and potential applications of improved knowledge representations to AI capabilities and alignment. It contains no theorems.

Abstract

Future AI systems will likely use quasilinguistic neural representations (QNRs) to store, share, and apply...