
Expected utility


If you have some way of scoring how much you prefer each outcome, and you can estimate the probability that an action leads to each outcome, then you can weigh actions by their probability-weighted average scores.

More formally, the expected utility of an action is the expected value of the utility it produces: the sum of the utilities of its possible consequences, each weighted by its probability of occurring.
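
In symbols (the notation here is ours, not fixed by the article): writing $U(o)$ for the utility of outcome $o$ and $P(o \mid a)$ for the probability that action $a$ leads to outcome $o$,

$$\mathbb{E}[U \mid a] = \sum_{o} P(o \mid a)\, U(o).$$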

A rational decision maker will, when presented with a choice, take the action with the greatest expected utility. Von Neumann and Morgenstern proposed four axioms of rationality: completeness, transitivity, continuity, and independence. They also proved the expected utility theorem (the VNM theorem), which states that an agent whose preferences satisfy these axioms behaves as if it were maximizing the expected value of some utility function. Humans often deviate from this standard of rationality because of inconsistent preferences and cognitive biases.
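
The decision rule above can be sketched in a few lines of Python. The actions, outcomes, probabilities, and utilities here are purely illustrative, not taken from any source:

```python
# Choose the action with the greatest expected utility.
# Each action maps outcomes to (probability, utility) pairs; the
# probabilities for a given action should sum to 1.

def expected_utility(outcomes):
    """Sum of utility weighted by probability over an action's outcomes."""
    return sum(p * u for p, u in outcomes.values())

def best_action(actions):
    """Return the action whose expected utility is largest."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical example: a risky bet versus a safe payout.
actions = {
    "take_bet":  {"win":  (0.5, 100.0), "lose": (0.5, -50.0)},
    "stay_safe": {"sure": (1.0, 20.0)},
}

print(best_action(actions))  # -> "take_bet" (expected utility 25.0 vs 20.0)
```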

Blog posts

  • Extreme risks: when not to use expected utility
  • Expected utility without the independence axiom
  • Money pumping: the axiomatic approach
  • In conclusion: in the land beyond money pumps lie extreme events
  • VNM expected utility theory: uses, abuses, and interpretation

See also

  • Allais paradox
  • Decision theory
  • Instrumental rationality
  • Prospect theory

Parents

  • Expected utility formalism