Paths To High-Level Machine Intelligence
This post is part 3 in our sequence on Modeling Transformative AI Risk. We are building a model to understand debates around existential risks from advanced AI. The model is made with Analytica software and consists of nodes (representing key hypotheses and cruxes) and edges (representing the relationships between these cruxes), with the final output corresponding to the likelihood of various potential failure scenarios. You can read more about the motivation for our project and how the model works in the Introduction post.

The previous post in the sequence, Analogies and General Priors on Intelligence, investigated the nature of intelligence as it pertains to advanced AI. This post explains the parts of our model most relevant to paths to high-level machine intelligence (HLMI).

We define HLMI as machines that are capable, either individually or collectively, of performing almost all economically relevant information-processing tasks that are performed by humans, or of quickly (relative to humans) learning to perform such tasks. Since many of the corresponding jobs (such as manager, scientist, and startup founder) require navigating the complex and unpredictable worlds of physical and social interaction, the term HLMI implies very broad cognitive capabilities, including the ability to learn and apply domain-specific knowledge and social abilities. We use the term “high-level machine intelligence” here instead of the related terms “human-level machine intelligence”, “artificial general intelligence”, or “transformative AI”, since those other terms are often seen as baking in assumptions about either the nature of intelligence or advanced AI that are not universally accepted.

In relation to our model as a whole, this post focuses on these modules:

The AI Progression & Requirements module investigates when we should expect HLMI to be developed, as well as what kind of HLMI we should expect (e.g., whole brain emulation, HLMI from current deep learning methods, etc.).
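To give a rough intuition for the node-and-edge structure described above, here is a purely illustrative toy sketch in Python. This is not the actual model (which is built in Analytica, with far richer influence-diagram semantics): the crux names, priors, weights, and the simple averaging rule used to combine parent credences are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Crux:
    """A node in a toy crux graph: a hypothesis with a standalone prior
    and weighted edges from upstream cruxes that bear on it."""
    name: str
    prior: float                       # standalone credence in this hypothesis
    parents: list = field(default_factory=list)  # list of (Crux, weight) pairs

def credence(crux, cache=None):
    """Toy combination rule (invented for illustration): average the node's
    own prior with the weighted mean of its parents' credences."""
    if cache is None:
        cache = {}
    if crux.name in cache:
        return cache[crux.name]
    if not crux.parents:
        cache[crux.name] = crux.prior
        return crux.prior
    total_weight = sum(w for _, w in crux.parents)
    parent_term = sum(w * credence(p, cache) for p, w in crux.parents) / total_weight
    value = 0.5 * crux.prior + 0.5 * parent_term
    cache[crux.name] = value
    return value

# Two upstream cruxes feeding a downstream node (hypothetical numbers).
brains_computational = Crux("brain is computational", prior=0.8)
dl_scales = Crux("current DL methods scale", prior=0.5)
hlmi_this_century = Crux(
    "HLMI this century",
    prior=0.6,
    parents=[(brains_computational, 1.0), (dl_scales, 2.0)],
)

print(round(credence(hlmi_this_century), 3))  # → 0.6
```

The point of the sketch is only the shape of the object: hypotheses as nodes, relationships as weighted edges, and a downstream "output" node whose value depends on the cruxes feeding into it.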