Moral uncertainty
Edited by joaolkf, steven0461, Ruby, Kaj_Sotala, et al. last updated 4th Oct 2021

Moral uncertainty (or normative uncertainty) is uncertainty about what we ought, morally, to do given the diversity of moral doctrines. For example, suppose that we knew for certain that a new technology would enable more humans to live on another planet, but with slightly less well-being than on Earth [1]. An average utilitarian would consider these consequences bad, while a total utilitarian would endorse such technology. If we are uncertain about which of these two theories is right, what should we do?
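As a toy illustration of why the two theories disagree (the numbers below are invented for this sketch, not drawn from the cited sources):

```python
# Toy numbers (invented for this illustration): Earth's population at a
# well-being level of 10, plus colonists living at a slightly lower level of 8.
earth_population, earth_wellbeing = 10_000_000_000, 10.0
colony_population, colony_wellbeing = 5_000_000_000, 8.0

total_before = earth_population * earth_wellbeing
total_after = total_before + colony_population * colony_wellbeing
average_before = earth_wellbeing
average_after = total_after / (earth_population + colony_population)

print(total_after > total_before)      # True: a total utilitarian endorses the technology
print(average_after > average_before)  # False: an average utilitarian considers it bad
```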

Moral uncertainty involves a level of uncertainty beyond the more usual uncertainty about what to do given incomplete information, since it also concerns which moral theory is right. Even with complete information about the world, this kind of uncertainty would still remain [1]. At the first level, one can be unsure how to act because the relevant empirical information isn't available: for example, deciding whether or not to implement a new technology (e.g. AGI, biological cognitive enhancement, mind uploading) without fully knowing its consequences and nature. But even if we ideally came to know each and every consequence of a new technology, we would still need to know which ethical perspective is the right one for evaluating those consequences.

One approach is to follow only the most probable theory. This has its own problems: for example, what if the most probable theory points only weakly in one direction, while other theories point strongly the other way? A better approach is to “perform the action with the highest expected moral value. We get the expected moral value of an action by multiplying the subjective probability that some theory is true by the value of that action if it is true, doing the same for all of the other theories, and adding up the results.” [2] However, we would still need a method of comparing value across theories: a utilon in one theory may not be equivalent to a utilon in another. Moreover, outside consequentialism many ethical theories do not use utilons or any quantifiable values at all. Intertheoretic comparison of value remains an open problem.
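A minimal sketch of the maximize-expected-moral-value rule, assuming (purely for illustration) two theories whose valuations have somehow been put on a common scale; the credences, actions, and values below are invented:

```python
# Sketch: choose the action with the highest probability-weighted moral value.
# Putting both theories' values on one scale is itself the open
# intertheoretic-comparison problem noted above; here we simply assume it.
credences = {"total_utilitarianism": 0.6, "average_utilitarianism": 0.4}

# values[theory][action] = moral value of the action if that theory is true
values = {
    "total_utilitarianism":   {"deploy": 40.0, "abstain": 0.0},
    "average_utilitarianism": {"deploy": -10.0, "abstain": 0.0},
}

def expected_moral_value(action):
    return sum(credences[t] * values[t][action] for t in credences)

best = max(["deploy", "abstain"], key=expected_moral_value)
print(best, expected_moral_value(best))  # deploy 20.0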

Nick Bostrom and Toby Ord have proposed a parliamentary model. In this model, each theory sends a number of delegates to a parliament in proportion to its probability. The theories then bargain for support as if the probability of each action were proportional to its votes. However, the actual output is always the action with the most votes. Bostrom and Ord's proposal lets probable theories determine most actions, but still gives less probable theories influence on issues they consider unusually important.
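The bargaining stage is the hard part to formalize; a stripped-down sketch of just the delegate-allocation and voting stages, with invented credences and preferences and no vote-trading, might look like this:

```python
from collections import Counter

# Stripped-down sketch of the parliamentary model: seats in proportion to
# credence, winner by plurality. The negotiation that lets low-credence
# theories win on issues they care most about is not modeled here.
PARLIAMENT_SIZE = 100
credences = {"theory_A": 0.55, "theory_B": 0.30, "theory_C": 0.15}
preferred_action = {"theory_A": "act_1", "theory_B": "act_2", "theory_C": "act_2"}

votes = Counter()
for theory, credence in credences.items():
    seats = round(credence * PARLIAMENT_SIZE)   # delegates proportional to probability
    votes[preferred_action[theory]] += seats

winner, _ = votes.most_common(1)[0]              # output the action with the most votes
print(winner)  # "act_1": theory_A alone holds 55 of the 100 seats
```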

Even with a high degree of moral uncertainty and a wide range of possible moral theories, there are still certain actions that seem highly valuable on almost any theory. Bostrom argues that existential risk reduction is among them: it is not only the most important task on most versions of consequentialism, but also highly recommended by many other widely accepted moral theories [3].

External links

  • Moral uncertainty — towards a solution?

Sequences

  • Moral uncertainty

See also

  • Expected utility
  • Value learning
  • Metaethics

References

  1. Crouch, William (2010). “Moral Uncertainty and Intertheoretic Comparisons of Value”. BPhil Thesis, p. 6. Available at: http://oxford.academia.edu/WilliamCrouch/Papers/873903/Moral_Uncertainty_and_Intertheoretic_Comparisons_of_Value
  2. Sepielli, Andrew (2008). “Moral Uncertainty and the Principle of Equity among Moral Theories”. ISUS-X, Tenth Conference of the International Society for Utilitarian Studies, Kadish Center for Morality, Law and Public Affairs, UC Berkeley. Available at: http://escholarship.org/uc/item/7h5852rr.pdf
  3. Bostrom, Nick (2012). “Existential Risk Reduction as the Most Important Task for Humanity”. Global Policy, forthcoming, p. 22. Available at: http://www.existential-risk.org/concept.pdf
Posts tagged Moral uncertainty

  • Normativity (abramdemski)
  • 2018 AI Alignment Literature Review and Charity Comparison (Larks)
  • 2019 AI Alignment Literature Review and Charity Comparison (Larks)
  • Six Plausible Meta-Ethical Alternatives (Wei Dai)
  • AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah (Palus Astra)
  • AXRP Episode 3 - Negotiable Reinforcement Learning with Andrew Critch (DanielFilan)
  • For the past, in some ways only, we are moral degenerates (Stuart_Armstrong)
  • Updated Deference is not a strong argument against the utility uncertainty approach to alignment (Ivan Vendrov)
  • Morally underdefined situations can be deadly (Stuart_Armstrong)
  • RFC: Meta-ethical uncertainty in AGI alignment (Gordon Seidoh Worley)