The linked post was originally written on the EA Forum, before I realized that this forum already contains a lot of material on decision theory. Since I believe it still contains some original thoughts, I am link-posting it here.

From the introduction:

What is the state of the art, best practice, or common practice in the EA community for making individual decisions when various forms of ambiguity and/or non-quantifiable uncertainty are involved, such as...

  • model ambiguity (e.g. about prior probability distributions, conditional probabilities, and other model parameters; see the sketch after this list)
  • moral uncertainty (e.g. about risk attitudes, inequality aversion, time preferences, moral status of beings, value systems, etc.)
  • strategic ambiguity (e.g. how rational are other agents, and what can we really assume they will do, given that there might be free will?)
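
The post itself doesn't commit to a method, but to make the first of these concrete, here is a minimal sketch of one standard response to model ambiguity, maxmin expected utility over a set of candidate priors (in the spirit of Gilboa-Schmeidler). The priors, actions, and payoffs are all hypothetical.

```python
# Minimal sketch of maxmin expected utility under model ambiguity:
# evaluate each action by its worst-case expected utility over a set
# of candidate priors. All numbers below are hypothetical.

# Two candidate priors over the states "good" and "bad" (the ambiguity).
priors = [
    {"good": 0.7, "bad": 0.3},
    {"good": 0.4, "bad": 0.6},
]

# Hypothetical utility of each action in each state.
utilities = {
    "safe":  {"good": 1.0, "bad": 1.0},
    "risky": {"good": 3.0, "bad": -2.0},
}

def worst_case_eu(action):
    """Minimum expected utility of `action` across all candidate priors."""
    return min(
        sum(p[state] * utilities[action][state] for state in p)
        for p in priors
    )

# Choose the action whose worst-case expected utility is highest.
best = max(utilities, key=worst_case_eu)
print(best, {a: round(worst_case_eu(a), 3) for a in utilities})
# -> safe {'safe': 1.0, 'risky': 0.0}
```

Here "safe" wins because its guaranteed payoff beats "risky"'s worst case of zero, whereas a plain expected-utility maximizer who trusted only the first prior would choose "risky". This is only one of several possible attitudes to ambiguity, which is part of what the question is asking about.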

My own thoughts on how this might be done, at least in theory, are summarized below.

In that context, I also wonder:

Is there some place for smart collective decision making in this, e.g. in order to

  • increase the epistemic quality of decisions by crowd-sourcing information
  • raise the acceptability of decisions and thus improve implementation quality
  • deal robustly with moral uncertainty and diverse value assessments

And if so, which collective decision-making mechanisms are most appropriate?
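
The post leaves this question open, but as one illustration of how collective-choice machinery can be turned on moral uncertainty, here is a minimal sketch in the spirit of "maximize expected choiceworthiness": treat each moral theory as a voter weighted by one's credence in it. The theories, credences, and scores are hypothetical.

```python
# Minimal sketch: treat each moral theory as a "voter" weighted by one's
# credence in it, and pick the option with the highest credence-weighted
# score. Theories, credences, and scores below are hypothetical.

credences = {"utilitarian": 0.5, "deontological": 0.3, "virtue": 0.2}

# Each theory's 0-1 evaluation of each option (hypothetical numbers).
scores = {
    "utilitarian":   {"A": 1.0, "B": 0.4},
    "deontological": {"A": 0.2, "B": 0.9},
    "virtue":        {"A": 0.6, "B": 0.7},
}

options = ["A", "B"]

def weighted_score(option):
    """Credence-weighted evaluation of `option` across all theories."""
    return sum(credences[t] * scores[t][option] for t in credences)

best = max(options, key=weighted_score)
print(best, {o: round(weighted_score(o), 3) for o in options})
# -> A {'A': 0.68, 'B': 0.61}
```

Whether such credence-weighted aggregation is the right mechanism, as opposed to, say, a moral-parliament-style bargaining procedure, is exactly the kind of question the post is raising.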

I'd be more than happy to hear your thoughts on this! Jobst
