Computing Overhang

  • Muehlhauser, Luke; Salamon, Anna (2012). "Intelligence Explosion: Evidence and Import". In Eden, Amnon; Søraker, Johnny; Moor, James H.; et al. The Singularity Hypothesis: A Scientific and Philosophical Assessment. Berlin: Springer.

Computing overhang refers to a situation where new algorithms are created that can exploit the large amounts of existing computing power far more efficiently than before. This can happen if previously used algorithms have been suboptimal.

In the context of Artificial General Intelligence, this signifies a situation where it becomes possible to create AGIs that can be run using only a small fraction of the easily available hardware resources. This could lead to an intelligence explosion, or to a massive increase in the number of AGIs, as they could be easily copied to run on countless computers. This could make AGIs much more powerful than before, and present an existential risk.

Though estimates of whole brain emulation place the required level of computing power at least a decade away, it is very unlikely that the algorithms used by the human brain are the most computationally efficient for producing AI. This is mainly because our brains evolved through natural selection, and thus were not deliberately designed with the goal of eventually being modeled by AI.

As Yudkowsky puts it, human intelligence, created by this "blind" evolutionary process, has only recently developed the capacity for planning and forward thinking: deliberation. Almost all the rest of our cognitive tools are the result of ancestral selection pressures, and they form the roots of almost all our behavior. As such, when we consider the design of complex systems in which the designer (us) collaborates with the system being constructed, we face a new design signature, and a way of achieving AGI completely different from the process that gave birth to our brains.

In 2010, the President's Council of Advisors on Science and Technology reported that a benchmark production planning model had become faster by a factor of 43 million between 1988 and 2003. Of this improvement, only a factor of roughly 1,000 was due to better hardware, while a factor of roughly 43,000 came from algorithmic improvements. This clearly reflects a situation where new programming methods were able to use the available computing power more efficiently.
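The arithmetic behind the reported speedup can be checked directly: the hardware factor and the algorithmic factor multiply to give the overall improvement. A minimal sketch in Python, using the rounded factors attributed to the report:

```python
# Rounded factors from the 2010 PCAST benchmark example: the overall
# speedup of the production planning model decomposes into a hardware
# part and a (much larger) algorithmic part.
hardware_factor = 1_000     # rough speedup attributed to faster hardware
algorithm_factor = 43_000   # rough speedup attributed to better algorithms

total_speedup = hardware_factor * algorithm_factor
print(f"{total_speedup:,}")  # 43,000,000 - the reported ~43 million
```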

As of today, enormous amounts of computing power are available in the form of supercomputers and distributed computing. Large AI projects can grow to fill these resources, whether by searching deeper and deeper game trees, as high-powered chess programs do, or by performing large amounts of parallel operations over extensive databases, as IBM's Watson did when playing Jeopardy!. While the extra depth and breadth are helpful, a simple brute-force extension of these techniques is unlikely to be the optimal use of the available computing resources. This leaves room for improvement on the side of algorithmic implementations, which is where most work is currently focused.
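To see why deeper search can absorb arbitrary amounts of hardware, note that a full game tree with branching factor b contains about b**d positions at depth d, so each extra ply multiplies the work. A small illustrative sketch (the branching factor of 35 is a commonly quoted rough average for chess, assumed here purely for illustration):

```python
# Work grows exponentially with search depth: a full tree with branching
# factor b has about b**d positions at depth d, so each added ply
# multiplies the cost by b - extra hardware buys only a little extra depth.
def positions_at_depth(branching_factor: int, depth: int) -> int:
    return branching_factor ** depth

b = 35  # rough average branching factor often quoted for chess (an assumption)
for depth in range(1, 6):
    print(depth, positions_at_depth(b, depth))
```

Each additional ply costs 35 times more node evaluations under these assumptions, which is why brute-force depth extensions quickly stop being the best use of added compute.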