AI ALIGNMENT FORUM

Wikitags

Confirmation Bias

Edited by A legion of trolls, Vladimir_Nesov, PeerInfinity, et al. last updated 27th Feb 2023
You are viewing revision 0.0.8, last edited by Mati_Roy

Positive bias, or confirmation bias, is the tendency to test a hypothesis with positive rather than negative examples, thereby risking missing obvious disconfirming tests.
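The classic demonstration is Wason's 2-4-6 task, in which subjects who test only instances their hypothesis predicts will pass can never discover that the hidden rule is broader than their guess. A minimal sketch in Python (the particular rule, hypothesis, and test triples here are illustrative assumptions, not a transcription of any linked program):

```python
def hidden_rule(seq):
    # The true rule in Wason's task: any strictly increasing triple.
    return seq[0] < seq[1] < seq[2]

def my_hypothesis(seq):
    # A typical subject's guess: "each number goes up by 2."
    return seq[1] - seq[0] == 2 and seq[2] - seq[1] == 2

# Positive tests: triples the hypothesis predicts should pass.
# They all satisfy the hidden rule too, so no amount of them
# can distinguish the guess from the true rule.
positive_tests = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]
assert all(hidden_rule(t) for t in positive_tests)

# A negative test: a triple the hypothesis predicts should FAIL.
# The hidden rule accepts it anyway, disconfirming the hypothesis.
negative_test = (1, 2, 3)
assert not my_hypothesis(negative_test)
assert hidden_rule(negative_test)
```

Only the negative test, the one the subject's hypothesis says should fail, carries any information capable of refuting it.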

Kevin Kelly argues that negative results should be "saved, shared, compiled and analyzed, instead of being dumped. Positive results may increase their credibility when linked to negative results." 1 If so, this bias is particularly dangerous: the absence of negative results would itself cast doubt on even entirely valid conclusions.

As an extreme example, imagine one hundred algorithms for stock market prediction placed in one hundred safety deposit boxes under a hundred different assumed names. Ten years later, to great fanfare, only one box is opened: the one containing the most accurate post-facto results. Without a discipline that forces the reporting of negative results, there is no universally accepted way to prove that the other 99 did or did not exist. Even a person who had filed away only one algorithm under their own name, once, would therefore be suspect.

Blog posts

  • Positive Bias: Look Into the Dark
  • Positive Bias Test (C++ program) by MBlume

External links

  • Online test implementing Wason's experiment. (this link is currently not working, as of Jan. 19, 2012)
  • Online version of MBlume's C++ program
  • Kevin Kelly on future science

References

  • (PDF)

See also

  • Motivated skepticism
  • Availability bias
  • Surprise
  • Narrative fallacy
  • Privileging the hypothesis
  • Write Your Hypothetical Apostasy

Posts tagged Confirmation Bias

  • The Plan - 2022 Update by johnswentworth