An Information Hazard is some true information that could harm people, or other sentient beings, if known. It is tricky to determine policies on information hazards: some information might genuinely be dangerous, but excessive controls on information have their own perils.

Nick Bostrom coined the term information hazard in a 2011 paper [1] for Review of Contemporary Philosophy. He defines it as follows:
Information hazard: A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.
Bostrom points out that this is in contrast to the generally accepted principle of information freedom and that, while rare, the possibility of information hazards needs to be considered when making information policies. He proceeds to categorize and define a large number of sub-types of information hazards. For example, he defines artificial intelligence hazard as:
Artificial intelligence hazard: There could be computer-related risks in which the threat would derive primarily from the cognitive sophistication of the program rather than the specific properties of any actuators to which the system initially has access.
The following table is reproduced from Bostrom 2011 [1].
TYPOLOGY OF INFORMATION HAZARDS

I. By information transfer mode

Data hazard
Idea hazard
Attention hazard
Template hazard
Signaling hazard
Evocation hazard

II. By effect

| TYPE | SUBTYPE |
| --- | --- |
| ADVERSARIAL RISKS | Competitiveness hazard: Enemy hazard; Intellectual property hazard; Commitment hazard; Knowing-too-much hazard |
| RISKS TO SOCIAL ORGANIZATION AND MARKETS | Norm hazard: Information asymmetry hazard; Unveiling hazard; Recognition hazard |
| RISKS OF IRRATIONALITY AND ERROR | Ideological hazard; Distraction and temptation hazard; Role model hazard; Biasing hazard; De-biasing hazard; Neuropsychological hazard; Information-burying hazard |
| RISKS TO VALUABLE STATES AND ACTIVITIES | Psychological reaction hazard: Disappointment hazard; Spoiler hazard; Mindset hazard. Belief-constituted value hazard; (mixed) Embarrassment hazard |
| RISKS FROM INFORMATION TECHNOLOGY SYSTEMS | Information system hazard: Information infrastructure failure hazard; Information infrastructure misuse hazard. Artificial intelligence hazard |
| RISKS FROM DEVELOPMENT | Development hazard |
This tag is for discussing the phenomenon of Information Hazards and what to do with them, not for actual Information Hazards themselves.
An example might be a formula for easily creating cold fusion in your garage, which would be very dangerous. Alternatively, it might be an idea that causes great mental harm to people.