This article is a stub. Alas, you can't help Wikipedia (or LessWrong) by expanding it. Except through good comments.

Here I'll present an old idea for a theory of population ethics. This post exists mainly so that I can have something to point to when I need this example.

Given a total population $p_1, p_2, \ldots, p_n$, each with total individual utility $u_1, u_2, \ldots, u_n$ over the whole of their lives, order them from lowest utility to the highest so that $i < j$ implies $u_i \le u_j$. These utilities are assumed to have a natural zero point (the "life worth living" standard, or similar).

Then pick some discount factor $\gamma$ (with $0 < \gamma < 1$), and define the total utility of the world with population $\{p_1, \ldots, p_n\}$ (which is the total population of the world across all time) as

  • $U = \sum_{i=1}^{n} \gamma^i u_i$.
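
As a concrete illustration, here is a minimal Python sketch of this score, assuming the $\gamma^i$ weighting as written above; the function name and exact indexing convention are mine, not part of the original proposal.

```python
def prioritarian_score(utilities, gamma):
    """World score: sort total lifetime utilities from lowest to highest,
    then weight the i-th lowest by gamma**i, so that (for 0 < gamma < 1)
    the worst-off receive the largest weights."""
    ordered = sorted(utilities)  # lowest lifetime utility first
    return sum(gamma ** i * u for i, u in enumerate(ordered, start=1))
```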

This is a prioritarian utility that gives greater weight to those least well off. It is not average utilitarianism, and would advocate creating a human with utility larger than that of all other humans (as long as it was positive), and would advocate against creating a human with negative utility (for a utility in between, it depends on the details). In the limit $\gamma \to 1$, it's total utilitarianism. Increasing someone's individual utility always improves the score. It (sometimes) accepts the "sadistic conclusion", but I've argued that that conclusion is misnamed (the conclusion is a choice between two negative outcomes, meaning that calling it "sadistic" is a poor choice - the preferred outcome is not a good one, just a less bad one). Killing people won't help, unless their future lifetime utility is negative (as everyone who has ever lived is included in the sum). Note that this sets up a minor asymmetry between not-creating people and killing them.
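
To see these properties numerically, here is a hypothetical use of the `prioritarian_score` sketch above (the utilities are made-up numbers, chosen only for illustration):

```python
pop = [2.0, 5.0, 10.0]   # made-up lifetime utilities
gamma = 0.9
base = prioritarian_score(pop, gamma)

# Adding someone better off than everyone else (with positive utility) always helps:
assert prioritarian_score(pop + [20.0], gamma) > base

# Adding someone with negative lifetime utility always hurts:
assert prioritarian_score(pop + [-1.0], gamma) < base

# A small positive utility "in between" depends on the details; here it hurts,
# because it pushes everyone else into more heavily discounted positions:
print(prioritarian_score(pop + [0.1], gamma) - base)   # negative in this case

# As gamma approaches 1, the score approaches plain total utilitarianism:
print(prioritarian_score(pop, 0.9999), sum(pop))       # nearly equal
```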

Do I endorse this? No; I think a genuine population ethics will be more complicated, and needs a greater asymmetry between life and death. But it's good enough for an example in many situations that come up.

2 comments

It does recommend against creating humans with lives barely worth living, and equivalently painlessly killing such people as well. If your population is a single person with utility 1000 and γ=.99, then this would recommend against creating a person with utility 1.
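
(A quick check, assuming the $\gamma^i$ weighting as reconstructed in the post: the score without the new person is $0.99 \times 1000 = 990$, while with them it is $0.99 \times 1 + 0.99^2 \times 1000 \approx 981.1$, so creating them lowers the score.)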

EDIT: I realised I wasn't clear that the sum was over everyone that ever lived. I've clarified that in the post.

Actually, it recommends killing only people whose future lifetime utility is going to be negative, as the sum is over the total population of the world across all time.

You're correct on the "not creating" incentives.

Now, this doesn't represent what I'd endorse (I prefer more asymmetry between life and death), but it's good enough as an example for most cases that come up.