A question many people in the effective altruism movement have struggled with around earning to give is how to handle potentially harmful careers. It's obviously self-defeating if you cause more harm in earning your money than the good it does when you donate it, but we want a higher threshold than that. As humans we need approaches that account for our self-serving biases: we tend to underestimate the harm we cause and overestimate the good we do. Additionally, some kinds of harm (ex: murder) do not seem like the kind of thing you ought to be able to "cancel out" through donation, even if the donation clearly has larger benefits (ex: saves vastly many lives).

Unfortunately, for most jobs, even questionable ones, the social impact is very hard to work out. Consider someone deciding to go into the oil industry: how much would they contribute to carbon emissions, after accounting for the oil company's elasticity of labor and the elasticity of production? Does cheaper oil displace even more carbon-intensive coal? How likely are extreme climate outcomes? Is the benefit of cheaper energy in lifting people out of poverty enough to make it positive on its own? Making a high-quality impact estimate for a career is a huge amount of work, and there are a lot of potential careers, especially when you consider that some roles in the oil industry might be far more replaceable than others.

What should we do in cases where the benefits seem much larger than the harms, but the harms are still significant? A potential rule I've been kicking around is, "don't do work that is illegal, or that would be illegal if the public knew what you were really doing." The idea is, we already have a system for declaring profitable activities with negative externalities off limits, one intended for the more common case where someone keeps what they earn for their own benefit. But we can't just use "don't do work that is illegal" because our legislative system can be slow to react to changes in the world or to information that isn't yet widely available. For example, if most people understood the cost-benefit tradeoffs in research to assess the pandemic potential of viruses or to create very powerful AI systems, I expect both would be prohibited.

It is, however, only a heuristic. For example, it gives the wrong answer in cases where:

  • Crafting a law prohibiting the versions of an activity that are net negative would unavoidably cause people to stop doing closely related beneficial activities.

  • The law is wrong and carefully considered civil disobedience is needed to convince others.

I expect there are other areas where this rule permits careers altruistically-minded people should avoid (even if the benefits seem to dramatically outweigh the costs) or rejects ones that are very important. Suggesting examples of either would be helpful!

Choosing a career is the kind of large-consequences decision where going beyond our heuristics and thinking carefully about outcomes is often warranted. Still, I see a lot of value in sorting out a framework of general rules and common exceptions, where people can think through how their particular situation fits.

Comment via: facebook, lesswrong, the EA Forum, mastodon

7 comments

I expect there are other areas where this rule permits careers altruistically-minded people should avoid (even if the benefits seem to dramatically outweigh the costs) or rejects ones that are very important. Suggesting examples of either would be helpful!

Of the first sort: "The law is wrong and adherence to a stricter standard would be more right."

For example, eating farmed meat is legal, and in any conceivable legal system run by 2020s humans it would be legal. But I want an ethical system that can make sense of the fact that I want to eat vegetarian (without wanting to coerce others into doing the same). Letting "what would enlightened legislators do?" be the whole of the moral sensemaking framework doesn't really give me a way to do this.

The post isn't trying to cover all cases of harmful careers, just ones where the career seems to be clearly net positive when approached from a costs-and-benefits framework but still involves some harms. Thinking about your class of objections, all the ones I can come up with are covered by "that's actually net negative" and not "that's clearly positive, but you shouldn't do it anyway"?

For example, say someone cares a lot about animals and thought their best altruistic option might be working on their family's ranch. They'd (a) earn a bunch of money (hypothetical!) that they'd donate to ACE recommendations, (b) have some influence in the direction of better treatment of animals, but (c) be complicit in raising animals for food. [1] It seems to me that the question here is whether (a) and (b) outweigh (c)? Or do you want to give additional weight to farms like this being incompatible with the stricter moral standard you think is correct?

[1] If the movement were working to outlaw ranches like this, I see how working at one could undermine that, and so be another harm in addition to (c).

In the ranch case, I'm imagining that the protagonist believes that (a) and (b) do outweigh (c), making it net positive overall.

But (c) is still significant, the protagonist says, so they conclude that "the benefits seem much larger than the harms, but the harms are still significant". Furthermore, is (c) "the kind of thing you ought to be able to 'cancel out' through donation [and/or harm-reducing influence]", or is it more like murder?

Is it sufficient that (a) and (b) outweigh (c), or is (c) the sort of thing we should avoid anyway?

In this situation, I feel like I'd be exactly the target audience that a rule like the one you're proposing would be trying to serve, but deferring to legality doesn't work: the society that makes laws is far less strict than I want my own decision-making to be, and doesn't consider (c) a notable harm at all!

don't do work that is illegal, or that would be illegal if the public knew what you were really doing.

This rule only makes sense if you trust this "system for declaring profitable activities with negative externalities off limits" more than your own moral judgment, self-serving as it may be (setting aside that the rule pretends the law reflects the public's will far more than it actually does, along with problems of manufactured consent). Perhaps you believe the law is mostly just, or that your own moral reasoning is horribly flawed (side note: either both of these are true, or neither). I don't.

some kinds of harm (ex: murder) do not seem like the kind of thing you ought to be able to "cancel out" through donation, even if the donation clearly has larger benefits (ex: saves vastly many lives).

I disagree. An earning-to-give hitman is the epitome of the EA philosophy.

I can't tell if you're joking? But at the risk of missing the joke, where do you see this in EA philosophy?

I'm not. I find your deontological murder exception ad hoc. My hypothetical hitman's moral foundations rest solidly on the notions, standard among "Effective Altruists," that one can value human lives in terms of money, that one can engage in tradeoffs involving lives, that the lives of people in Africa or wherever aren't worth more than an order of magnitude less than those of people in, say, America, and that GiveWell's estimates of how much it costs to save a life are reasonable.