Technological advances, without corresponding development of safety mechanisms, simultaneously increase the capacity for both friendly and unfriendly AGI development. Presently, most AGI research is concerned with increasing capability rather than safety, and thus most progress increases the risk of a widespread negative outcome.
The above developments could also help in the creation of Friendly AI. However, Friendliness requires the development of both AGI and Friendliness theory, while an Unfriendly Artificial Intelligence might be created by AGI efforts alone. Thus developments that bring AGI closer or make it more powerful will increase risk, at least if not combined with work on Friendliness.
Differential intellectual progress was defined by Luke Muehlhauser and Anna Salamon as "prioritizing risk-reducing intellectual progress over risk-increasing intellectual progress". Muehlhauser and Salamon discuss differential intellectual progress in relation to Artificial General Intelligence (AGI) development (which will also be the focus of this article):
Muehlhauser and Salamon also note that differential technological development can be seen as a special case of this concept.
While the above developments could eventually lower the risk of creating an Unfriendly Artificial Intelligence (UAI), this is not the case presently. For example, an AGI with access to massive datasets could use that information to increase its capacity to serve its purpose. Unless specifically programmed with ethical values respecting human life, it might consume resources needed by humans in pursuit of that purpose. The same concern applies to all risk-increasing progress.