Information Hazards

Edited by Alex_Altair last updated 6th Feb 2024

An information hazard is a concept coined by Nick Bostrom in a 2011 paper[1] for Review of Contemporary Philosophy. He defines it as follows:

"A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm."

Bostrom points out that this stands in contrast to the generally accepted principle of information freedom, and that, while rare, the possibility of information hazards needs to be considered when making information policies. He goes on to categorize and define a large number of sub-types of information hazards. For example, he defines an artificial intelligence hazard as:

"Computer-related risks in which the threat would derive primarily from the cognitive sophistication of the program rather than the specific properties of any actuators to which the system initially has access."

The table below is reproduced from (Bostrom 2011).

TYPOLOGY OF INFORMATION HAZARDS

I. By information transfer mode

  • Idea hazard
  • Attention hazard
  • Template hazard
  • Signaling hazard
  • Evocation hazard

II. By effect

ADVERSARIAL RISKS

  • Intellectual property hazard
  • Commitment hazard
  • Knowing-too-much hazard

RISKS TO SOCIAL ORGANIZATION AND MARKETS

  • Unveiling hazard
  • Recognition hazard

RISKS OF IRRATIONALITY AND ERROR

  • Distraction and temptation hazard
  • Role model hazard
  • Biasing hazard
  • De-biasing hazard
  • Neuropsychological hazard
  • Information-burying hazard

RISKS TO VALUABLE STATES AND ACTIVITIES

  • Spoiler hazard
  • Mindset hazard
  • Belief-constituted value hazard
  • Embarrassment hazard (mixed)

RISKS FROM INFORMATION TECHNOLOGY SYSTEMS

  • Information infrastructure misuse hazard
  • Artificial intelligence hazard

RISKS FROM DEVELOPMENT

  • Development hazard

See also

  • Dangerous Knowledge
  • Computation Hazards

References


  1. Bostrom, Nick (2011). "Information Hazards: A Typology of Potential Harms from Knowledge." Review of Contemporary Philosophy 10: 44–79. (PDF)
Posts tagged Information Hazards

  • Don't Share Information Exfohazardous on Others' AI-Risk Models (Thane Ruthenis)
  • Conjecture: Internal Infohazard Policy (Connor Leahy, Sid Black, Chris Scammell, Andrea_Miotti)
  • Needed: AI infohazard policy (Vanessa Kosoy)
  • The Fusion Power Generator Scenario (johnswentworth)
  • Some background for reasoning about dual-use alignment research (Charlie Steiner)
  • Principles of Privacy for Alignment Research (johnswentworth)
  • Gradient Descent on the Human Brain (Jozdien, gaspode)
  • AI Safety via Luck (Jozdien)