AI ALIGNMENT FORUM

Regulation and AI Risk

Edited by Kaj Sotala, Niremetal, John Maxwell, et al. last updated 30th Dec 2024

Regulation and AI Risk is the debate over whether regulation could be used to reduce the risks of artificial general intelligence (AGI), and what forms of regulation would be appropriate.

Several authors have advocated that AI research be regulated, but have been vague on the details. Yampolskiy & Fox (2012) note that university research programs in the social and medical sciences are overseen by institutional review boards, and propose setting up analogous review boards to evaluate potential AGI research. To be successful, AI regulation would have to be global, and there is the potential for an AI arms race between different nations. Partially because of this, McGinnis (2010) argues that the government should not attempt to regulate AGI development; rather, it should concentrate on providing funding to research projects intended to create safe AGI. Kaushal & Nolan (2015) point out that regulations on AGI development would give a speed advantage to any project willing to skirt them, and instead propose government funding (possibly in the form of an "AI Manhattan Project") for AGI projects meeting particular criteria.

While Shulman & Armstrong (2009) argue that the unprecedentedly destabilizing effect of AGI could be a cause for world leaders to cooperate more than usual, the opposite argument can be made as well. Gubrud (1997) argues that molecular nanotechnology could make countries more self-reliant and international cooperation considerably harder, and that AGI could contribute to such a development. AGI technology is also much harder to detect than, e.g., nuclear technology: AGI research can be done in a garage, while nuclear weapons require substantial infrastructure (McGinnis 2010). On the other hand, Scherer (2015) argues that artificial intelligence could nevertheless be susceptible to regulation due to the increasing prominence of governmental entities and large corporations in AI research and development.

Goertzel & Pitt (2012) suggest that for regulation to be enacted, there might need to be an AGI Sputnik moment: a technological achievement that makes the possibility of AGI evident to the public and policymakers. They note that after such a moment, it might not take very long for full human-level AGI to be developed, while the negotiations required to enact new kinds of arms control treaties would take considerably longer.

References

  • Ben Goertzel & Joel Pitt (2012): Nine Ways to Bias Open-Source AGI Toward Friendliness. Journal of Evolution and Technology, Vol. 22, Issue 1, pp. 116-141.
  • Mark Gubrud (1997): Nanotechnology and International Security. Fifth Foresight Conference on Molecular Nanotechnology.
  • Mohit Kaushal & Scott Nolan (2015): Understanding Artificial Intelligence. Brookings.
  • John McGinnis (2010): Accelerating AI. Northwestern University Law Review.
  • Matthew Scherer (2015): Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. Harvard Journal of Law & Technology.
  • Carl Shulman & Stuart Armstrong (2009): Arms Control and Intelligence Explosions. European Conference on Computing and Philosophy.
  • Roman Yampolskiy & Joshua Fox (2012): Safety Engineering for Artificial General Intelligence. Topoi.

See also

AGI Sputnik moment
AI arms race
Existential risk
Unfriendly artificial intelligence
Posts tagged Regulation and AI Risk
  • Let's think about slowing down AI (Katja Grace)
  • AGI in sight: our look at the game board (Andrea Miotti, Gabriel Alfour)
  • AI companies are unlikely to make high-assurance safety cases if timelines are short (Ryan Greenblatt)
  • 2019 AI Alignment Literature Review and Charity Comparison (Larks)
  • AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah (Lucas Perry)
  • Excessive AI growth-rate yields little socio-economic benefit (Cleo Nardo)
  • Self-regulation of safety in AI research (Gordon Seidoh Worley)
  • The 0.2 OOMs/year target (Cleo Nardo)
  • If we had known the atmosphere would ignite (Jeffs)
  • Ban development of unpredictable powerful models? (Alex Turner)
  • The Slippery Slope from DALLE-2 to Deepfake Anarchy (Stephen Casper)
  • Comparing Alignment to other AGI interventions: Basic model (Martín Soto)
  • A New Model for Compute Center Verification (Damin Curtis)