Followup to: Entangled Truths, Contagious Lies, Evolutionary Psychology

What's up with that bizarre emotion we humans have, this sense of ethical caution?

One can understand sexual lust, parental care, and even romantic attachment.  The evolutionary psychology of such emotions might be subtler than it at first appears, but if you ignore the subtleties, the surface reasons are obvious.  But why a sense of ethical caution?  Why honor, why righteousness?  (And no, it's not group selection; it never is.)  What reproductive benefit does that provide?

The specific ethical codes that people feel uneasy violating vary from tribe to tribe (though there are certain regularities).  But the emotion associated with feeling ethically inhibited—well, I Am Not An Evolutionary Anthropologist, but that looks like a human universal to me, something with brainware support.

The obvious story behind prosocial emotions in general is that those who offend against the group are sanctioned; this converts the emotion to an individual reproductive advantage.  The human organism, executing the ethical-caution adaptation, ends up avoiding the group sanctions that would follow a violation of the code.  This obvious answer may even be the entire answer.

But I suggest—if a bit more tentatively than usual—that by the time human beings were evolving the emotion associated with "ethical inhibition", we were already intelligent enough to observe the existence of such things as group sanctions.  We were already smart enough (I suggest) to model what the group would punish, and to fear that punishment.

Sociopaths have a concept of getting caught, and they try to avoid getting caught.  Why isn't this sufficient?  Why have an extra emotion, a feeling that inhibits you even when you don't expect to be caught?  Wouldn't this, from evolution's perspective, just result in passing up perfectly good opportunities?

So I suggest (tentatively) that humans naturally underestimate the odds of getting caught.  We don't foresee all the possible chains of causality, all the entangled facts that can bring evidence against us.  Those ancestors who lacked a sense of ethical caution stole the silverware when they expected that no one would catch them or punish them; and were nonetheless caught or punished often enough, on average, to outweigh the value of the silverware.

Admittedly, this may be an unnecessary assumption.  It is a general idiom of biology that evolution is the only long-term consequentialist; organisms compute short-term rewards.  Hominids violate this rule, but that is a very recent innovation.

So one could counter-argue:  "Early humans didn't reliably forecast the punishment that follows from breaking social codes, so they didn't reliably think consequentially about it, so they developed an instinct to obey the codes."  Maybe the modern sociopaths that evade being caught are smarter than average.  Or modern sociopaths are better educated than hunter-gatherer sociopaths.  Or modern sociopaths get more second chances to recover from initial stumbles—they can change their name and move.  It's not so strange to find an emotion executing in some exceptional circumstance where it fails to provide a reproductive benefit.

But I feel justified in bringing up the more complicated hypothesis, because ethical inhibitions are archetypally that which stops us even when we think no one is looking.  A humanly universal concept, so far as I know, though I am not an anthropologist.

Ethical inhibition, as a human motivation, seems to be implemented in a distinct style from hunger or lust.  Hunger and lust can be outweighed when stronger desires are at stake; but the emotion associated with ethical prohibitions tries to assert itself deontologically. If you have the sense at all that you shouldn't do it, you have the sense that you unconditionally shouldn't do it.  The emotion associated with ethical caution would seem to be a drive that—successfully or unsuccessfully—tries to override the temptation, not just weigh against it.

A monkey can be trapped by a food reward inside a hollowed shell—it can reach in easily enough, but once it closes its fist, it can't take its hand out.  The monkey may be screaming with distress, and still be unable to override the instinct to keep hold of the food. We humans can do better than that; we can let go of the food reward and run away, when our brain is warning us of the long-term consequences.

But why does the sensation of ethical inhibition, that might also command us to pass up a food reward, have a similar override-quality—even in the absence of explicitly expected long-term consequences?  Is it just that ethical emotions evolved recently, and happen to be implemented in prefrontal cortex next to the long-term-override circuitry?

What is this tendency to feel inhibited from stealing the food reward?  This message that tries to assert "I override", not just "I weigh against"?  Even when we don't expect the long-term consequences of being discovered?

And before you think that I'm falling prey to some kind of appealing story, ask yourself why that particular story would sound appealing to humans.  Why would it seem temptingly virtuous to let an ethical inhibition override, rather than just being one more weight in the balance?

One possible explanation would be if the emotion were carved out by the evolutionary-historical statistics of a black-swan bet.

Maybe you will, in all probability, get away with stealing the silverware on any particular occasion—just as your model of the world would extrapolate.  But it was a statistical fact about your ancestors that sometimes the environment didn't operate the way they expected. Someone was watching from behind the trees.  On those occasions their reputation was permanently blackened; they lost status in the tribe, and perhaps were outcast or murdered.  Such occasions could be statistically rare, and still counterbalance the benefit of a few silver spoons.
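To make the black-swan arithmetic concrete, here is a back-of-the-envelope expected-value check (the numbers are invented purely for illustration).  Let g be the fitness gain from a successful transgression, p the probability of the rare catastrophic detection, and C the fitness cost when caught:

```latex
\mathbb{E}[\Delta \text{fitness}] = (1 - p)\,g + p\,(g - C) = g - p\,C
```

With, say, g = 1 and C = 200 (reputation blackened, outcast, or murdered), even a detection probability as low as p = 0.01 gives g - pC = -1: the bet loses on average despite paying off 99% of the time.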

The brain, like every other organ in the body, is a reproductive organ: it was carved out of entropy by the persistence of mutations that promoted reproductive fitness.  And yet somehow, amazingly, the human brain wound up with circuitry for such things as honor, sympathy, and ethical resistance to temptations.

Which means that those alleles drove their alternatives to extinction.  Humans, the organisms, can be nice to each other; but the alleles' game of frequencies is zero-sum.  Honorable ancestors didn't necessarily kill the dishonorable ones.  But if, by cooperating with each other, honorable ancestors outreproduced less honorable folk, then the honor allele killed the dishonor allele as surely as if it erased the DNA sequence off a blackboard.

That might be something to think about, the next time you're wondering if you should just give in to your ethical impulses, or try to override them with your rational awareness.

Especially if you're tempted to engage in some chicanery "for the greater good"—tempted to decide that the end justifies the means.  Evolution doesn't care about whether something actually promotes the greater good—that's not how gene frequencies change.  But if transgressive plans go awry often enough to hurt the transgressor, how much more often would they go awry and hurt the intended beneficiaries?

Historically speaking, it seems likely that, of those who set out to rob banks or murder opponents "in a good cause", those who managed to hurt themselves, mostly wouldn't make the history books.  (Unless they got a second chance, like Hitler after the failed Beer Hall Putsch.)  Of those cases we do read about in the history books, many people have done very well for themselves out of their plans to lie and rob and murder "for the greater good".  But how many people cheated their way to actual huge altruistic benefits—cheated and actually realized the justifying greater good?  Surely there must be at least one or two cases known to history—at least one king somewhere who took power by lies and assassination, and then ruled wisely and well—but I can't actually name a case off the top of my head.  By and large, it seems to me a pretty fair generalization that people who achieve great good ends manage not to find excuses for all that much evil along the way.

Somehow, people seem much more likely to endorse plans that involve just a little pain for someone else, on behalf of the greater good, than to work out a way to let the sacrifice be themselves.  But when you plan to damage society in order to save it, remember that your brain contains a sense of ethical unease that evolved from transgressive plans blowing up and damaging the originator—never mind the expected value of all the damage done to other people, if you really do care about them.

If natural selection, which doesn't care at all about the welfare of unrelated strangers, still manages to give you a sense of ethical unease on account of transgressive plans not always going as planned—then how much more reluctant should you be to rob banks for a good cause, if you aspire to actually help and protect others?

 

Part of the sequence Ethical Injunctions

Next post: "Ethical Injunctions"

Previous post: "Protected From Myself"

63 comments

Hmm. Very interesting way to ground the musings of the last post. Very precarious line of reasoning, as you acknowledge, but interesting still.

This reminds me of the fact that humans have an evolved tendency for pure time preference, and indeed hyperbolic time preference, rather than accurate modeling of future rewards.
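For readers who haven't met the term, the standard textbook contrast (general background, not something stated in this thread) is between exponential and hyperbolic discounting of a reward of amount A at delay D, with discount rate k:

```latex
V_{\text{exponential}}(D) = A\,e^{-kD}
\qquad
V_{\text{hyperbolic}}(D) = \frac{A}{1 + kD}
```

Exponential discounting is dynamically consistent, while the hyperbolic form produces preference reversals as rewards draw near, which is the inaccurate modeling of future rewards the comment points to.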

"- at least one king somewhere who took power by lies and assassination, and then ruled wisely and well - but I can't actually name a case off the top of my head." If ruling wisely and well means increased GDP or quality of life (e.g. attracting and accepting immigrants), we might nominate Singapore's Lee Kuan Yew, Chile's Pinochet, Gorbachev, Ian Smith, Deng Xiaoping, etc.

Also, every major democratic political leader lies abundantly to obtain office, as it's a necessity to actually persuade the voters. So Bill Clinton, Jean Chretien, Winston Churchill should qualify for at least half of your list of villainy.

Eliezer: Are you really that sure that the ethical impulse you speak of is due to nature?

I am probably not alone in suggesting that it is due to nurture. It may seem to you that the ethical override is as hard-wired in you as hunger or thirst, but it may be that what is actually hardwired is not an ethical override. It is the listen-to-your-parents override.

It is kind of peculiar, is it not, that ethical overrides such as you describe seem to be common among people who began their lives in religion, but not quite as common, and not quite as overriding, in people who did not?  Contrast the principled attitudes of uptight religious people with those who were raised without stories of hell and damnation to scare them. Which type of person can be expected to avoid sex until they're married? And what for? A hard-wired ethical override? Or because evolution taught us that if parents tell us not to eat certain berries, we should not, or we will die?

I don't think that the ethical override you speak of is nearly as common as you purport. You only need to venture into a suitable part of Africa, where your head will be removed for the slightest of reasons, or into communities which raise their children in ways quite dissimilar to how Catholic or Jewish children are raised.

Many of us have the ethical override because we are designed to internalize, on pain of death, the serious lessons taught by our environments. Remove the environmental lesson, and the ethical override disappears.

It is the listen-to-your-parents override

I thought this was clear from my talking about how the specific ethical codes that people feel uneasy violating vary from tribe to tribe.  Certain tribes in the gloom of Africa might remove my head without hesitation, having no taboo against killing strangers, but still hesitate to kill tribe members without social sanction.

Nate

"Sociopaths have a concept of getting caught, and they try to avoid getting caught. Why isn't this sufficient? Why have an extra emotion, a feeling that inhibits you even when you don't expect to be caught?"

It seems to me that by having this emotion, you always "get caught", which is much more effective at preventing a certain behavior than only being caught occasionally by others. Sociopaths must rely on others to police them; typical people police themselves.

Emperor Claudius I is the best candidate I can think of for a good ruler who took power by dubious means.

Nelson Mandela comes to mind as someone who took up violence as a last resort, and wasn't too bad as a leader in the end.

American revolutionaries as well ended human lives for the greater good, and I'm sure there was a fair amount of agitprop. Could have had worse governance after.

Remember soldiers are people too! Even enemy soldiers who beat you up or imprison you. If the ends never justify killing people, then there should never be any insurrections...

Well, you can bomb the opposition's headquarters on a weekend at 2 AM instead of on a Tuesday at 2 PM.

I'm not sure I understand why you say it can't be group selection. It seems perfectly possible to me, albeit much rarer than individual selection.

Suppose all the tribes of humans (or monkeys, for that matter) on earth were populated by perfectly rational sociopaths. Then suppose an individual mutant developed a conscience. If this mutant gets lucky and passes his or her genes on a good number of times, you might end up with a tribe of people with consciences. This tribe would have an enormous advantage over the other sociopathic tribes, and would almost certainly out-perform them if other variables were roughly equal.

I think the same argument can be made for memes and religion. If people believe some god in the sky is watching them, they are less likely to perform socially destructive behavior (like theft or violence when they can get away with it). Thus, societies which practiced this sort of self-deception would be more successful than ones which did not. Yes, it would be rare for an entire tribe to adopt these beliefs (for individuals it's a prisoner's dilemma), but once it happened that tribe would have a huge advantage over tribes of sociopaths.

I think the objections are against group selection as an explanation in general, not as applied to this particular case.

To the extent that a commitment to ethics is externally verifiable, it would encourage other people to cooperate, just as a tendency to anger (a visible commitment to retribution) is a disincentive to doing harm.

Also, even if it is not verifiable, a person who at least announces their intention to hold to an ethical standard has raised the impact their failure to do so will have on their reputation, and thus the announcement itself should have some impact on the expectation that they will behave ethically.

Yvain

"Historically speaking, it seems likely that, of those who set out to rob banks or murder opponents "in a good cause", those who managed to hurt themselves, mostly wouldn't make the history books. (Unless they got a second chance, like Hitler after the failed Beer Hall Putsch.) Of those cases we do read about in the history books, many people have done very well for themselves out of their plans to lie and rob and murder "for the greater good". But how many people cheated their way to actual huge altruistic benefits - cheated and actually realized the justifying greater good? Surely there must be at least one or two cases known to history - at least one king somewhere who took power by lies and assassination, and then ruled wisely and well - but I can't actually name a case off the top of my head. By and large, it seems to me a pretty fair generalization that people who achieve great good ends manage not to find excuses for all that much evil along the way."

History seems to me to be full of examples of people or groups successfully breaking moral rules for the greater good.

The American Revolution, for example. The Founding Fathers committed treason against the crown, started a war that killed thousands of people, and confiscated a lot of Tory property along the way. Once they were in power, they did arguably better than anyone else of their era at trying to create a just society. The Irish Revolution also started in terrorism and violence and ended in a peaceful democratic state (at least in the south); the war of Israeli independence involved a lot of terrorism on the Israeli side and ended with a democratic state that, regardless of what you think of it now, didn't show any particularly violent tendencies before acquiring Palestine in the 1967 war.

Among people who seized power violently, Augustus and Cyrus stand out as excellent in the ancient world (and I'm glad Caligula was assassinated and replaced with Claudius). Ho Chi Minh and Fidel Castro, while I disagree with their politics, were both better than their predecessors and better than many rulers who came to power by more conventional means in their parts of the world.

There are all sorts of biases that would make us less likely to believe people who "break the rules" can ever turn out well. One is the halo effect. Another is availability bias - it's much easier to remember people like Mao than it is to remember the people who were quiet and responsible once their revolution was over, and no one notices the genocides that didn't happen because of some coup or assassination. "Violence leads only to more violence" is a form of cached deep wisdom. And there's probably a false comparison effect: a post-coup government may be much better than the people they replaced while still not up to first-world standards.

And of course, "history is written by the victors". When the winners do something bad, it's never interpreted as bad after the fact. Firebombing a city to end a war more quickly, taxing a populace to give health care to the less fortunate, intervening in a foreign country's affairs to stop a genocide: they're all likely to be interpreted as evidence for "the ends don't justify the means" when they fail, but glossed over or treated as common sense interventions when they work. Consider the amount of furor raised over our supposedly good motives in going into Iraq and failing vs. the complete lack of discussion about going into Yugoslavia and succeeding.

Grant, read the links.

Grant: group selection does happen, but only very slowly. Natural selection works when its units are destroyed, and tribes go extinct pretty rarely compared to individuals.

Merely being poor does not make a selection unit unfit, as far as evolution is concerned. It has to disappear.

Er, Tiiba, that's not correct. Systematic differential reproduction works just as well as death. The problem is that it's nigh mathematically impossible for group selection to overcome a countervailing individual selection pressure, i.e., a 3% individual sacrifice that, when universal in a group, doubles the fitness of that group, will not survive under realistic conditions. Again, see the links.
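Since the 3%-versus-doubling claim is quantitative, a toy simulation can make it concrete. The sketch below is a minimal two-level selection model of my own devising; every parameter and structural choice is a hypothetical illustration, not the math from the linked posts. With these settings the altruist allele typically declines despite granting up to a doubled group output: the 3% within-group disadvantage acts every generation in every mixed group, while the group benefit acts only through the small between-group variance that migration keeps eroding.

```python
import random

# Toy two-level selection model (illustrative only; all parameters are
# hypothetical choices, not taken from the linked posts).
NUM_GROUPS = 50     # tribes
GROUP_SIZE = 40     # individuals per tribe
COST = 0.03         # individual fitness cost of carrying the altruist allele
MIGRATION = 0.1     # chance an offspring's parent is drawn population-wide
GENERATIONS = 200

def weighted_choice(pool):
    """Pick one (is_altruist, weight) entry proportional to its weight."""
    total = sum(w for _, w in pool)
    r = random.uniform(0, total)
    for ind, w in pool:
        r -= w
        if r <= 0:
            return ind
    return pool[-1][0]

def step(groups):
    # Group-level selection: a tribe of pure altruists founds twice as many
    # daughter tribes as a tribe of pure egoists.
    group_weights = [1.0 + sum(g) / len(g) for g in groups]
    # Individual-level selection: altruists pay COST within their tribe.
    pools = [[(ind, 1.0 - COST if ind else 1.0) for ind in g] for g in groups]
    everyone = [entry for pool in pools for entry in pool]
    new_groups = []
    for _ in range(NUM_GROUPS):
        parent = random.choices(range(NUM_GROUPS), weights=group_weights)[0]
        child = []
        for _ in range(GROUP_SIZE):
            # Occasional migrants erode between-group differences.
            source = everyone if random.random() < MIGRATION else pools[parent]
            child.append(weighted_choice(source))
        new_groups.append(child)
    return new_groups

# Start every tribe at ~90% altruists, giving group selection a head start.
groups = [[random.random() < 0.9 for _ in range(GROUP_SIZE)]
          for _ in range(NUM_GROUPS)]
for gen in range(GENERATIONS + 1):
    if gen % 40 == 0:
        freq = sum(map(sum, groups)) / (NUM_GROUPS * GROUP_SIZE)
        print(f"generation {gen:3d}: altruist frequency {freq:.2f}")
    if gen < GENERATIONS:
        groups = step(groups)
```

Raising MIGRATION or shrinking GROUP_SIZE speeds the decline; only near-total isolation between tribes lets the group benefit dominate, which is the standard reason group selection is considered weak under realistic conditions.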

If we're talking survival of the fittest here, I can't see how adhering to moral codes could have outweighed the advantage (for men) of committing adultery, rape, stealing, and murder.

JA, then where did humans get the brain circuitry that supports morals? Tooth Fairy?

If we're talking survival of the fittest here, I can't see how adhering to moral codes could have outweighed the advantage (for men) of committing adultery, rape, stealing, and murder.
Such men could never be accepted in any kind of mass society. Everyone's in favor of stealing - until it comes to their own stuff. The only way to keep other people off your stuff is to lay off theirs.

I doubt adultery would be a meaningful concept in a world where rape was permitted. Why would the concept of marriage exist?

If we had a tendency to underestimate the chance of getting caught, I don't see why evolution couldn't have just coded in us a correction that raised this estimate. That seems lots simpler than the whole ethical machinery.

I'm not so sure you can discount group selection amongst humans as easily as you can for other animals. Human groups are a lot more cohesive than those of other species. A human tribe can decide that another tribe, due to their different culture or whatever, must die -- even when the expected cost of carrying out a war far outweighs the expected gain in resources. We have subverted the original evolutionary purpose of our adaptations with culture, and I'm unconvinced that the arguments against group selection still hold weight with humans.

Robin: It's a fair point, but as a general matter, I'm not sure it's biologically easy to have mutations that encode knowledge. Why be scared of snakes, instead of having the abstract knowledge "snakes are more dangerous than I would otherwise estimate them to be"? That's how you would build an AI, but evolution goes down the path of things that happen easily as the result of mutations.

I would reply that the brain just doesn't seem to be built around that kind of architecture. Even when an estimate does get raised, it gets raised as the result of feeling some emotion that binds to the estimate. If evolution is going to build a feeling anyway, it may be easier to feel that you just shouldn't do something; than to feel that you should override your mental model that says you won't get caught, with an expectation of getting caught anyway for reasons not in your mental model. Yes, the latter would be more elegant, but evolution is nothing if not inelegant.

Maelin: Read. The. Damned. Links. Here, I link it again. Even if whole bands went extinct more often than individuals within bands, that still wouldn't be enough, mathematically speaking, to let group selection win out.

Robin, I mentioned hyperbolic discounting above. Why do we have evolved pure time preference rather than just discounting for risk, return on investment, etc? If tweaking motives requires fewer independent changes than a massive improvement in our ability to reason about and predict the future, then evolution will do the former.

Robin -- because it needs to be more specific. "Always be more afraid of bad things happening" would reduce effectiveness in other areas. Even "always be more afraid of people catching you and doing bad things to you" would be a handicap if you need to fight an enemy tribe. The requirement is, specifically, "don't violate your own tribe's ethical standards".

I'm not sure you aren't "making too much stew from one oyster". I certainly feel a whole lot less ethically inhibited if I'm really, really certain I'm not going to be punished. When I override, it feels very deliberate - "system two" grappling and struggling with "system one"'s casual amorality, and with a significant chance of the override attempt failing.

How common are ethical inhibitions?

This entire post is kind of surreal to me, as I'm pretty confident I've never felt the emotion described here before. I guess this makes some behavior I've seen before seem more understandable, but it's still strange to see this described as a human universal when I don't seem to have that response.

Is there a standard term for this that I could use to research it? I did some searching on wikipedia with phrases used in the post, but I couldn't find anything.

Stephen: you've never done (or not done) something because the alternative "simply felt wrong/immoral"?

Psy-Kosh: I don't think I have, but I'm not very sure on that point. I don't remember ever wanting to do something that I both felt would be wrong and wouldn't have consequences otherwise. The part that was particularly unusual to me was the idea of something not only being "wrong", but universally unacceptable, as in:

If you have the sense at all that you shouldn't do it, you have the sense that you unconditionally shouldn't do it.

No one else has brought this up, so maybe I'm just dense, but I'm having trouble distinguishing the "point" from the "counterpoint" at this part of the post:

Eliezer makes a "point":

So I suggest (tentatively) that humans naturally underestimate the odds of getting caught. We don't foresee all the possible chains of causality, all the entangled facts that can bring evidence against us. Those ancestors who lacked a sense of ethical caution stole the silverware when they expected that no one would catch them or punish them; and were nonetheless caught or punished often enough, on average, to outweigh the value of the silverware.

He then appears to present a possible "counterpoint":

Admittedly, this may be an unnecessary assumption. . . . So one could counter-argue: "Early humans didn't reliably forecast the punishment that follows from breaking social codes, so they didn't reliably think consequentially about it, so they developed an instinct to obey the codes." Maybe the modern sociopaths that evade being caught are smarter than average. Or modern sociopaths are better educated than hunter-gatherer sociopaths. Or modern sociopaths get more second chances to recover from initial stumbles - they can change their name and move. It's not so strange to find an emotion executing in some exceptional circumstance where it fails to provide a reproductive benefit.

But then he seems to say that this counterpoint doesn't suffice for him:

But I feel justified in bringing up the more complicated hypothesis, because ethical inhibitions are archetypally that which stops us even when we think no one is looking. A humanly universal concept, so far as I know, though I am not an anthropologist.

I'm not seeing the difference between the point and the counterpoint. Am I just misinterpreting the logic of the argument in thinking that these are supposed to be opposing points? Or, if not, how are they different?

Eliezer: if the "ethical override" differs from culture to culture, and some people don't even have it, what's universal about it?

I'm not saying the phenomenon does not exist, but calling it an "ethical override" seems a misnomer. It might be more accurate to regard it as a form of hypnosis. If you're familiar with how hypnosis works, this seems similar to the environment impressing on you, as a child, that some arbitrary things should / should not be done. Since generally such instructions relay accumulated knowledge which one cannot earn or safely test in one's lifetime, it increases an individual's genetic fitness to heed such instructions, i.e. be "hypnotizable".

If we had a tendency to underestimate the chance of getting caught, I don't see why evolution couldn't have just coded in us a correction that raised this estimate. That seems lots simpler than the whole ethical machinery.

Ethics can also code for the degree of "badness" of the behaviour (i.e. how strongly others would react against it). The relative strength of "do not kill" vs "do not steal" makes no sense in terms of the likelihood of being caught (killing being safer than stealing), but makes sense when the consequences of being caught are added in.

Also, having some vague sense of shame that your upbringing can then train will allow you to slip into social norms with a minimum of fuss - soldiers will learn the difference between sleeping with a prostitute and tattling on your colleagues, and pastors will learn the opposite lesson. Simply increasing the risk of being caught doesn't allow this fine distinction.

pdf, the only reason that suggestion works is that we're not in the business of bombing headquarters at 2 AM on a weekend.  If both sides were scheduling bombings at 2 AM, I'd bet they'd be at work at 2 AM.

Eliezer: if the "ethical override" differs from culture to culture, and some people don't even have it, what's universal about it?

On cultural difference, it still is an advantage to have a sense of shame/proper conduct that can then be moulded to the culture's norms (see my previous comment). The sense of shame might be universal, even if the contents of the ethic is different.

As for "some people don't even have it", this is not a complete counter-argument - there might be an evolutionary optimal equilibrium of sociopaths to non-sociopaths.

JA, then where did humans get the brain circuitry that supports morals? Tooth Fairy?

I think we evolved general feelings of empathy as a side-effect of mating and child-raising, and the rest is cultural education. We don't have a moral sense; we have empathy and we have culture.

HH

I haven't seen this mentioned before, but apologies if it has been dealt with.

Isn't it possible that, to some extent at least, our honor & ethical codes have evolved as a type of peacock feathers? That is, they evolved as markers of superior fitness because they make us more vulnerable. Peacock feathers are markers of fitness because they make the bird visible to predators - a bird that survives in spite of them can thus "claim" fitness. Similarly, ethical and honor codes constrain behavior: you are prevented from doing what would otherwise benefit you; those who could survive and flourish despite not stooping to theft or deception even when it could help them can thus signal fitness by saying "I am succeeding even though I have fewer options than others because I won't engage in theft or deception, therefore I am superior." Could the honor codes not have evolved this way? As badges of superiority?

For more on the evolution of our moral faculty, see work by people like Marc Hauser & Jonathan Haidt. The former, and perhaps the latter, sees the moral faculty as analogous to Chomsky's universal grammar.

Afu

I see no one has replied to Tyrrell's point. Probably because it goes to the heart of most arguments put forth by evolutionary psychologists: that they are just-so stories that can be told equally convincingly in the opposite manner, and there is no way of testing between two opposing arguments.

I'd like to offer a slightly deeper criticism of EP in relation to this post: the way EP in general ignores the effect of culture, or simply assumes that culture somehow arises directly from intelligence. First of all, at present we have no idea how something as complicated and higher-order as morality would be controlled by genetics. We simply know too little about the brain and the genes that control it. Furthermore, the fact that the genetic code for the brain is much less complicated than the brain itself makes me skeptical that genes could directly cause something like morality.

I do not believe in any magical explanations of morality, so where does that leave me? Culture is what causes and controls morality and other brain phenomena. Culture is an emergent phenomenon on top of biology; that is, biological evolution was necessary for culture to arise, but culture is able to act causally separately from biological evolution. On a superficial level this is obvious. What language you speak is obviously dependent on what culture you grow up in and not on your genes. I would argue that there are also deeper, more basic cultural practices, such as morality, that are probably pre-linguistic, which gave rise to the universal elements of human nature such as language.

This comment is too long already, so I will just end by saying that it is foolish to completely discard cultural effects on human evolution, especially since we have knowledge about how culture affects things like morality, but almost no knowledge about how genetics affects it.

@Eliezer.

I don't understand this sentence:

"By and large, it seems to me a pretty fair generalization that people who achieve great good ends manage not to find excuses for all that much evil along the way."

I mean, if they really achieved great good ends, and those ends have more positive utility than the negative utility of the evil along the way, wouldn't this be a case where the end actually justifies the means?

Eliezer: "The problem is that it's nigh mathematically impossible for group selection to overcome a countervailing individual selection pressure..."

While Eliezer's point here is quite correct within its limited context of individual selection versus group selection, it seems obvious, supported by numerous examples in nature around us, that his case is overly simplistic, failing to address multi-level or hierarchical selection effects, and in particular, the dynamics of selection between groups.

This would also appear to bear on the difficulty of comprehending selection between (and also within) multi-level agencies in the moral domain.

There are ample examples of group selection taking place in nature. It's rare, and for good reason, but it DOES happen.

If we possess a mathematical model that indicates group selection cannot take place, we must therefore conclude that a fatal flaw exists in our model, and need to examine our premises to determine where the error lies.

@Caledonian: "...we must therefore conclude that a fatal flaw exists in our model..."

It's not necessarily that a "fatal flaw" exists in a model, but that all models are necessarily incomplete.

Eliezer's reasoning is valid and correct -- over a limited context of observations supporting meaning-making. It may help to consider that groups promote individual members, biological organisms promote genes, genes promote something like "material structures of increasing synergies"...

In cybernetic terms, in the bigger picture, there's nothing particularly privileged about the role of the gene, nor about biological evolutionary processes as a special case of a more fundamental organizing principle.

Here's what I find difficult to understand from an evolutionary perspective: why do we have a sense that we ought to do what is right as opposed to what society wants us to do? Why are we even capable of making this distinction?

@George Weinberg: "...from an evolutionary perspective: why do we have a sense that we ought to do what is right as opposed to what society wants us to do?"

In other words, why don't humans function as mindless drones serving the "greater good" of their society? Like ants or bees? Well, if you were an ant or a bee, even one capable of speculating on evolutionary theory, you wouldn't ask that question, but rather its obverse. ;-)

Peter Watts wrote an entertaining bit of fiction, Blindsight, on a similar question; but to ask why evolution would do X rather than Y imputes an inappropriate teleology.

Otherwise, if you were asking as to the relative merits of X versus Y, I think the most powerful answer would hinge on the importance of diversity at multiple levels for robust adaptability, rather than highest degree of adaptation.

And, it might help to keep in mind that biological organisms are adaptation executers, not fitness maximizers, and also that evolutionary economics favors satisficing over "optimizing."

G

In other words, why don't humans function as mindless drones serving the "greater good" of their society?

No, that's not what I'm asking at all. What I'm saying is, if ethics were all about avoiding "getting caught", then the very idea that there could be an ethical "right thing to do" as opposed to what society wants one to do would be incoherent.

Similarly, if being helpful to an individual were all about expected reciprocity, there would be no distinction between that which is helpful and that which inspires gratitude. A statement like "I'm going to do this for your own good, even though you will hate me for it" would not only never be said, it would be meaningless gibberish at the "green ideas sleep furiously" level.

@G: " if ethics were all about avoiding "getting caught", then the very idea that there could be an ethical "right thing to do" as opposed to what society wants one to do would be incoherent."

Well, I don't think anyone here actually asserted that the basis of ethics was avoiding getting caught, or even fear of getting caught. It seems to me that Eliezer posited an innate moral sense inhibiting risk-taking in the moral domain, and in my opinion this is more a reflection of his early childhood environment of development than any innate moral sense such as pride or disgust. Even though I think Eliezer was working from the wrong basis, I think he's offered a valid observation on the apparent benefit of "deep wisdom" with regard to tending to avoid "black swans."

But there seems to be an even more direct problem with your query: the information model it would entail is strictly impractical, in that individual agents would somehow have to be equipped with the same model of "right" as the necessarily larger model supported by society.

Apologies in advance, but I'm going to bow out of this discussion now due to diminishing returns and sensitivity to our host.

Also, every major democratic political leader lies abundantly to obtain office, as it's a necessity to actually persuade the voters. So Bill Clinton, Jean Chretien, Winston Churchill should qualify for at least half of your list of villainy.

Have the ones who've lied more, done better? In cases where the politician who told more lies won, has that politician gone on to rule well in an absolute sense? Is it actually true that no one who refused to lie (and this is not the same as always telling the whole truth) could win political office? Are the lies expected, and in that sense, less than true betrayals of someone who trusts you? Are there understood Rules of Politics that include lies but not assassinations, which the good politicians abide by, so that they are not really violating the ethics of their tribe? Will the world be so much worse off if sufficiently good people refuse to tell outright lies and are thereby barred from public office; or would we thereby lose a George Washington or Marcus Aurelius or two, and thereby darken history?

American revolutionaries as well ended human lives for the greater good

Police must sometimes kill the guilty. Soldiers must sometimes kill civilians (or if the enemy knows you're reluctant, that gives them a motive to use civilians as a shield). Spies sometimes have legitimate cause to kill people who helped them, but this has probably been done far more often than it has been justified by a need to end the Nazi nuclear program. I think it's worth noting that in all such cases, you can write out something like a code of ethics and at least try to have social acceptance of it. Politicians, who lie, may prefer not to discuss the whole thing, but politicians are only a small slice of society. Are there many who transgress even the unwritten rules and end up really implementing the greater good? (And no, there's no unwritten rule that says you can rob a bank to stop global warming.)

...but if you're placing yourself under unusual stress, you may need to be stricter than what society will accept from you. In fact, I think it's fair to say that the further I push any art, such as rationality or AI theory, the more I perceive that what society will let you get away with is tremendously too sloppy a standard.

Availability bias... When the winners do something bad, it's never interpreted as bad after the fact. Firebombing a city to end a war more quickly, taxing a populace to give health care to the less fortunate, intervening in a foreign country's affairs to stop a genocide: they're all likely to be interpreted as evidence for "the ends don't justify the means" when they fail, but glossed over or treated as common sense interventions when they work.

A fair point, and one of the difficult things in reasoning about ethics is the extent to which we can expect historical data to be distorted by moral self-deception as well as more common fogs of war.

I'm not sure you aren't "making too much stew from one oyster". I certainly feel a whole lot less ethically inhibited if I'm really, really certain I'm not going to be punished. When I override, it feels very deliberate - "system two" grappling and struggling with "system one"'s casual amorality, and with a significant chance of the override attempt failing.
This entire post is kind of surreal to me, as I'm pretty confident I've never felt the emotion described here before... I don't remember ever wanting to do something that I both felt would be wrong and wouldn't have consequences otherwise.

I don't know whether to attribute this to genetic variance, environmental variance, misunderstanding, or a small number of genuine sociopaths among Overcoming Bias readers. Maybe Stephen is referring to "not wanting" in terms of finally deciding to do something he felt was wrong, rather than being tempted by the rewards thereof?

I think this is the first post I've seen in the OB series to infer that a kind of thinking must be beneficial from the fact that humans have a cognitive bias in its favor. Interesting.

at least one king somewhere who took power by lies and assassination, and then ruled wisely and well

I know John D. Rockefeller and Andrew Carnegie ended up donating large amounts to charity. That said, I don't think they are as bad as they're portrayed. There might have been a little unethical stuff, but mostly they got to where they were by good business practices.

But how many people cheated their way to actual huge altruistic benefits - cheated and actually realized the justifying greater good? Surely there must be at least one or two cases known to history - at least one king somewhere who took power by lies and assassination, and then ruled wisely and well - but I can't actually name a case off the top of my head.

That may be because they were successful. Every time the FBI stops a potential terrorist attack through the use of an undercover informant or some deceptive trap for the terrorists, we don't hear about it. We tend to hear about the deceptive schemes that go wrong, because a successful deception will never be found out. However, the deceptions we hear about and, maybe more importantly, remember are the ones that a) went wrong, and b) were attention-gathering, which often means unethical. I somewhat doubt that any major improvements to society were created without anyone resorting to a little bit of backroom arm-twisting.

I find it important that ethical inhibitions are not absolute. Sure, they feel absolute, but history shows they aren't nearly as absolute as they feel. Even if we (tentatively) accept that our moral inhibitions are an adaptation, it seems probable that our ability to violate them is also an adaptation. Finally, what worked in an ancestral environment is not guaranteed to work now. In short: even if ethical inhibitions evolved as described, it doesn't follow that obeying them is a good idea.

Surely there must be at least one or two cases known to history—at least one king somewhere who took power by lies and assassination, and then ruled wisely and well—but I can't actually name a case off the top of my head.

Basil I, Byzantine Emperor from 867-886.

It could also have an element of sexual selection involved. If it is desirable to find a mate who will act "ethically", and humans are good at telling which humans are more "ethical" than others, then that would naturally strengthen the tendency to be ethical.

I don't get the flippant exclusion of group selection.

To the best of my knowledge, humans are the only species that has been exposed to continuous group selection events over hundreds of thousands of years, and I would argue that we've gotten very, very good at the activity as a result.

I'm referring, of course, to war. The kind that ends in extermination. The kind that, presumably, humans practiced better than all of our close hominid relatives who are all conspicuously extinct as of shortly after we spread throughout the world.

This is why I'm not much buying the 'tribes don't die often enough to allow group selection to kick in' argument - obviously, a whole lot of tribes are quite dead, almost certainly at the hands of humans. Even if the tribe death-rate right now is not that high, the deaths of entire hominid species to homo sapiens implies that it has been high in the past. And even with a low tribe death rate, replace 'tribe-murder' with 'tribe-rape' and you still have a modest group selection pressure.

So I don't know why you're talking about the impact of individual evolution in morality. Any prospective species whose morality was guided primarily by individual concerns, rather than the humans-will-rape-and-or-murder-us group concerns, probably got raped-and-or-murdered by a tribe of humans, the species we know to be the most efficient group killing machines on earth.

Under this paradigm - the one where we analyze human psychology as something that made us efficient tribe-murderers - sociopathy makes sense. Indeed, it's something I would argue we all have, with a complicated triggering system built to try to distinguish friend from foe. Full-on sociopathy would probably be to our war-sociopathy as sickle-cell anemia is to malaria resistance; a rare malfunction of a useful trait ('useful' in the evolutionary sense of 'we tribe-murdered all the hominids that didn't have it'). And that's not counting sociopaths who are that way because they simply got so confused that their triggering system broke, no genetics required.

We can't give our senses of honor or altruism a free pass in this analysis, either. If our universal sociopathy is war-sociopathy, then our universal virtue is peace-virtue, also dictated by trigger mechanisms. What we describe as virtue and the lack of it co-evolved in an environment where we used virtue in-group, and outright predation out-group. Groups that were good at both succeeded. Groups that failed at the first were presumably insufficiently efficient at group murder to survive. Groups that failed at the second were murdered by groups good at the second.

Practically the only individual adaptation I can see in that situation is the ability to submit to being conquered, or any other non-fatal-and-you-still-get-to-breed form of humiliation, which might mean you survive while they kill the rest of your tribe. But too much of even that will reduce in-group cohesion: a tribe can only take so many prisoners of a species whose members can express (and as you argue in belief-in-belief, even internalize) the opposite of their actual beliefs, such as "I don't want to murder you in your sleep as vengeance for killing my tribe and enslaving me".

I'm referring, of course, to war. The kind that ends in extermination. The kind that, presumably, humans practiced better than all of our close hominid relatives who are all conspicuously extinct as of shortly after we spread throughout the world.

Minor point: it's my understanding that wars of extermination are not a human universal.

Strictly speaking they don't need to be; they "just" need to be common enough among human cultures for that to exert distinguishable selection pressure, and successful enough that the groups that come up with the idea don't end up autodarwinating. Though the latter is group selection pressure of a kind, too.

I'd rather stay agnostic on whether or not this is the case; we have very little reliable data on culture under non-marginal paleolithic conditions. I haven't heard of any conclusive skeletal evidence for war in that era (murder yes, war no), but this isn't my field so I could easily be missing some.

I'd rather stay agnostic on whether or not this is the case; we have very little reliable data on culture under non-marginal paleolithic conditions. I haven't heard of any conclusive skeletal evidence for war in that era (murder yes, war no), but this isn't my field so I could easily be missing some.

This paper argues that coalitionary killings were rare among hunter-gatherer societies, and that warfare as we currently understand it did not come into existence until the rise of agriculture and sedentism, because prior to those developments, the average hunter-gatherer band simply didn't have enough accumulated material wealth to make the benefit of raids into another band's territory outweigh the risk of getting ambushed by unseen defenders with projectile weapons who know the territory better than you.

However, the development of the throwing spear, used in conjunction with ambush hunting techniques, ushered in an era in which the enhanced lethality of weaponry amplified the costs of assessment errors, and the necessity of movement also placed intruders at a comparative disadvantage with respect to both detection and assessment. Moreover, asymmetrical detection rather than a numerical imbalance of power determined the outcome of hostile encounters. This reconfiguration of the decisive factors in lethal conflict not only raised the stakes (or potential costs) for would-be aggressors but also rendered the benefit of intercommunity dominance unattainable. Because superior numbers were not invariably decisive in encounters between hunting parties, an initial success would neither materially reduce the stakes for aggressors in subsequent attacks nor make it possible to freely encroach on the territory of a neighboring group that had sustained a casualty. Under these circumstances, aggression resulted in stalemate and a condition analogous to a war of attrition rather than territorial gain.

These developments marked a major turning point in the evolution of lethal intergroup violence and in the character of interrelations between neighboring groups. Although fitness continued to be related to territory size (for food-limited populations in occupied environments), selective circumstances no longer favored aggression as a means of achieving territorial gain. [...] In other words, the development of the throwing spear altered the means of production as well as the social relations of production, distribution, and consumption within groups in fundamental ways that also transformed intergroup relations and influenced subsequent hominid evolution.

That paper seems to focus on raiding activities; if repeated raiding activities are difficult, then wouldn't that increase the utility of extermination warfare?

Indeed, the paper you cite posits that exactly that started happening:

The earliest conclusive archaeological evidence for attacks on settlements is a Nubian cemetery (site 117) near the present-day town of Jebel Sahaba in the Sudan dated at 12,000-14,000 B.P. (7, 12). War originated independently in other parts of the world at dates as late as 4,000 B.P. (13). Otterbein argues that agriculture was only able to develop initially at locations where ambushes, battles, and raids were absent (14).

And that such war predated agriculture.

I noted that humans are the only hominid species alive. To the best of my admittedly limited archaeological knowledge, the others became extinct during the timeframe of the first two phases the paper describes; yet, if that were the case, wouldn't other hominid communities have likely survived to see the total war phase of human development?

I would thus posit that total war is much older than even their existing data suggests.

As I said, it's a minor point. I'm pretty sure the grandparent's argument will stand or fall the same way regardless.

Well, it's not conclusive evidence by any means, but I did note that we have no hominid relatives; they're all extinct with a capital E. To me, that implies something more than just us being better at hunting-gathering.

And if we, as a species, did exterminate one or more other hominid species, then it seems a small leap of logic to conclude we did the same to each other whenever similar circumstances came up.

Two points. First, the extinction of nonhuman hominids happened at about the same time as a more general die-out of megafauna. Overhunting by H. sapiens is one popular explanation for why that happened, but it's not the only one, and if one of the alternatives ends up being true (or partly true) then it could easily have affected our hominid relatives as well.

Second, species inadvertently cause each other to go extinct all the time without going to war with each other, just by competing for a niche; consider any of the introduced species that have been causing ecological problems recently. Again, this could easily have happened to our hominid relatives over the timescales we're discussing.

These ideas, of course, aren't mutually exclusive.

And if we, as a species, did exterminate one or more other hominid species, then it seems a small leap of logic to conclude we did the same to each other whenever similar circumstances came up.

Remember, different hominid species were, y'know, different species, with different (apparently suboptimal) adaptations. So them getting exterminated is more likely in any case.

Humans underestimating the chance of being caught seems to raise the question of why they underestimate it in the first place. Why have humans evolved ethical inhibition, as opposed to a better sense of the likelihood of being caught? Still, evolution isn't perfect.

I suspect that humans have evolved a better sense of the likelihood of being caught, many times. The thing is, one of the things such a sense is useful for is improving our ability to cheat with impunity. Which creates more selection pressure to get better at catching cheaters, which reduces our ability to reliably estimate the likelihood of being caught.

I'm fairly sure I've felt my ethical caution emotion activating when it really wasn't warranted by the situation. When I'm not dealing with an ethical question, I'm capable of overcoming it with significant effort. But when I am dealing with an ethical question I generally listen to that emotion rather than fighting it (hence why I wouldn't rob banks for the greater good even if I expected not to get caught).