Related: Leave a Line of Retreat, Living In Many Worlds

"It all adds up to normality." (Greg Egan, Quarantine)

You're on an airplane at 35,000 feet, and you strike up a conversation about aerodynamic lift with the passenger in your row. Things are going along just fine until they point out to you that your understanding of lift is wrong, and that planes couldn't fly from the effect you thought was responsible.

Should you immediately panic in fear that the plane will plummet out of the sky?

Obviously not; clearly the plane has been flying just fine up until now, and countless other planes have flown as well. There has to be something keeping the plane up, even if it's not what you thought, and even if you can't yet figure out what it actually is. Whatever is going on, it all adds up to normality.

Yet I claim that we often do this exact kind of panicked flailing when there's a challenge to our philosophical or psychological beliefs, and that this panic is entirely preventable.


I've experienced and/or seen this particular panic response when I, or others, encounter good arguments for propositions including:

  • My religion is not true. ("Oh no, then life and morality are meaningless and empty!")
  • Many-worlds makes the most sense. ("Oh no, then there are always copies of me doing terrible things, and so none of my choices matter!")
  • Many "altruistic" actions actually have hidden selfish motives. ("Oh no, then altruism doesn't exist and morality is pointless!")
  • I don't have to be the best at something in order for it to be worth doing. ("Oh no, then others won't value me!") [Note: this one is from therapy; most people don't have the same core beliefs they're stuck on.]

(I promise these are not in fact strawmen. I'm sure you can think of your own examples. Also remember that panicking over an argument in this way is a mistake even if the proposition turns out to be false.)

To illustrate the way out, let's take the first example. It took me far too long to leave my religion, partly because I was so terrified about becoming a nihilist if I left that I kept flinching away from the evidence. (Of course, the religion proclaimed itself to be the origin of morality, and so it reinforced the notion that anyone else claiming to be moral was just too blind to see that their lack of faith implied nihilism.)

Eventually I did make myself face down, not just the object-level arguments, but the biases that had kept me from looking directly at them. And then I was an atheist, and still I was terrified of becoming a nihilist (especially about morality).

So I did one thing I still think was smart: I promised myself not to change all of my moral rules at once, but to change each one only when (under sober reflection) I decided it was wrong. And in the meantime, I read a lot of moral philosophy.

Over the next few months, I began relaxing the rules that were obviously pointless. And then I had a powerful insight: I was so cautious about changing my rules because I wanted to help people and not slide into hurting them. Regardless of what morality was, in fact, based on, the plane was still flying just fine. And that helped me sort out the good from the bad among the remaining rules, and to stop being so afraid of what arguments I might later encounter.

So in retrospect, the main thing I'd recommend is to promise yourself to keep steering the plane mostly as normal while you think about lift (to stretch the analogy). If you decide that something major is false, it doesn't mean that everything that follows from it has to be discarded immediately. (False things imply both true and false things!)

You'll generally find that many important things stand on their own without support from the old belief. (Doing this for the other examples I gave, as well as your own, is left to you.) Other things will collapse, and that's fine; that which can be destroyed by the truth should be. Just don't make all of these judgments in one fell swoop.

One last caution: I recommend against changing meta-level rules as a result of changing object-level beliefs. The meta level is how you correct bad decisions on the object level, and it should only be updated by very clear reasoning in a state of equilibrium. Changing your flight destination is perfectly fine, but don't take apart the wing mid-flight.

Good luck out there, and remember:

It all adds up to normality.

[EDIT 2020-03-25: khafra and Isnasene make good points about not applying this in cases where the plane shows signs of actually dropping and you're updating on that. (Maybe there's a new crisis in the external world that contradicts one of your beliefs, or maybe you update to believe that the thing you're about to do could actually cause a major catastrophe.)

In that case, you can try to land the plane safely: focus on getting to a safer state for yourself and the world, so that you have time to think things over. And if you can't do that, then you have no choice but to rethink your piloting on the fly, accepting the danger because you can't escape it. But these experiences will hopefully be very rare for you, current global crisis excepted.]


I like this post.

promise yourself to keep steering the plane mostly as normal while you think about lift

This is a good, short, memorable proverb to remember the point of the post by.

I think the strongest version of this idea of adding up to normality is "new evidence/knowledge that contradicts previous beliefs does not invalidate previous observations." Therefore, when one's actions are contingent on things happening that have already been observed to happen, things add up to normality because it is already known that those things happen -- regardless of any new information.

But this strict version of 'adding up to normality' does not apply in situations where one's actions are contingent on unobservables. In cases where new evidence/knowledge may cause someone to dramatically revise the implications of previous observations, things don't add up to normality. Whether this is the case for you as an individual depends on your gears-level understanding of your observations.

So in retrospect, the main thing I'd recommend is to promise yourself to keep steering the plane mostly as normal while you think about lift

I somewhat disagree with this. I think, in these kinds of situations, the recommendation should be more along the lines of "promise yourself to make the best risk/reward trade-off you can given your state of uncertainty." If you're flying in a plane that has a good track record of flying, definitely don't touch anything, because it's riskier to break something that has evidence of working than it is rewarding to fix things that might not actually work. But if you're flying in the world's first plane and realize you don't understand lift, land it as soon as possible.

Some Reasons Things Add Up to Normality

  • If you think the thing you don't understand might be a Chesterton's Fence, there's a good chance it will add up to normality.
  • If you think the thing you don't understand can be predicted robustly by inductive reasoning, and you only care about being able to accurately predict the thing itself, there's a good chance it will add up to normality.

Some Examples where Things Don't Add Up

Example #1 (Moral Revisionism)

You're an eco-rights activist who has tirelessly worked to make the world a better place by protecting wildlife, because you believe animals have the right to live good lives on this planet too. Things are going just fine until your friend claims that r-selection implies most animals live short, horrible lives, and you realize you have no idea whether animals actually live good lives in the wild. Should you immediately panic in fear that you're making things worse?

Yes. Whether or not the claim in question is accurate, your general assumption that protecting wildlife implies improved animal welfare was not well-founded enough to address significant moral risk. You should really stop doing wildlife stuff until you get this figured out or you could actually cause bad things to happen.

Example #2 (Prediction Revisionism)

You've built an AGI and, with all your newfound free time and wealth, you have a lengthy chat with a mathematician. Things are going along just fine until they point out to you that your understanding of the safety measures used to ensure alignment is wrong, and that the AGI shouldn't be aligned from the safety measures you thought were responsible.

Should you immediately panic in fear that the AGI will destroy us all?

Yes. The previous observations are not sufficient to make reliable predictions. But note that a random bystander who is uninvolved with AGI development would be justified in not panicking -- their gears-level understanding hinges on believing that the people who created the AGI are competent enough to address safety, not on believing that the specific details designed to make the AGI safe actually work.

I agree that carefully landing the plane is better than maintaining the course if catastrophic outcomes suddenly seem more plausible than before.

Obviously it applies if you're the lead on a new technological project and suddenly realize a plausible catastrophic risk from it.

I don't think it applies very strongly in your example about animal welfare, unless the protagonist has unusually high leverage on a big decision about to be made. The cost of continuing to stay in the old job for a few weeks while thinking things over (especially if leaving and then coming back would be infeasible) is plausibly worth the value of information thus gained.

I agree that carefully landing the plane is better than maintaining the course if catastrophic outcomes suddenly seem more plausible than before.

Yeah, but my point is not about catastrophic risk -- it's about the risk/reward trade-off in general. You can have risk>reward in scenarios that aren't catastrophic. Catastrophic risk is just a good general example of where things don't add up to normality (catastrophic risks by nature correspond to not-normal scenarios and also coincide with high risk). Don't promise yourself to steer the plane mostly as normal, promise yourself to pursue the path that reduces risk over all outcomes you're uncertain about.

I don't think it applies very strongly in your example about animal welfare, unless the protagonist has unusually high leverage on a big decision about to be made. The cost of continuing to stay in the old job for a few weeks while thinking things over (especially if leaving and then coming back would be infeasible) is plausibly worth the value of information thus gained.

Good point, it really depends on the details of the example but this is just because of the different risk-reward trade-offs, not because you ought to always treat things as adding up to normality. I'll counter that while you shouldn't leave the job (high risk, hard to reverse), you should see if you could use your PTO as soon as possible so you can figure things out without potentially causing further negative impact. It all depends on the risk-reward trade-off:

  • If stopping activism corresponds to something like leaving a job, which is hard to reverse, doing so involves taking on a lot of risk if you're uncertain, and waiting for a bit can reduce that risk.
  • If stopping activism corresponds to something like shifting your organization's priorities, and your organization's path can be reversed, then stopping work (after satisfying all existing contracts, of course) is pretty low risk and you should stop.
  • If stopping activism corresponds to donating large amounts of money (in earning-to-give contexts), your strategy can easily be reversed and you should stop now.

This is true even if you only have "small" amounts of impact.

Caveat:

People engage in policies for many reasons at once. So if you think the goal of your policy is X, but it's actually X, Y and Z, then dramatic actions justified by uncertainty about X alone will probably be harmful due to Y and Z effects, even if it's the appropriate decision with respect to X. Because it's easy to notice why a thing might go wrong (like X) and hard to notice why it's going right (like Y and Z), adding-up-to-normality serves as a way to generally protect Y and Z.

Don't know if you saw, but I updated the post yesterday because of your (and khafra's) points.

Also, your caveat is a good reframe of the main mechanism behind the post.

I do still disagree with you somewhat, because I think that people going through a crisis of faith are prone to flailing around and taking naive actions that they would have reconsidered after a week or month of actually thinking through the implications of their new belief. Trying to maximize utility while making a major update is safe for ideal Bayesian reasoners, but it fails badly for actual humans.

In the absence of an external crisis, taking relatively safe actions (and few irreversible actions) is correct in the short term, and the status quo is going to be reasonably safe for most people if you've been living it for years. If you can back off from newly-suspected-wrong activities for the time being without doing so irreversibly, then yes that's better.

I do still disagree with you somewhat, because I think that people going through a crisis of faith are prone to flailing around and taking naive actions that they would have reconsidered after a week or month of actually thinking through the implications of their new belief. Trying to maximize utility while making a major update is safe for ideal Bayesian reasoners, but it fails badly for actual humans.

Ah, yeah, I agree with this observation -- and it could be good to just assume things add up to normality as a general defense against people rapidly taking naive actions. Scarcity bias is a thing, after all, and if you get into a mindset where now is the time to act, it's really hard to prevent yourself from acting irrationally.

Huzzah, convergence! I appreciate the points you've made.

I agree with this, but a counterpoint is that it's very hard for people to change longstanding habits and behaviors at all, and a major internal update is sometimes the only moment at which most people can manage significant behavior changes.

This reminds me of the Discourse on Method.

[T]here is seldom so much perfection in works composed of many separate parts, upon which different hands had been employed, as in those completed by a single master. Thus it is observable that the buildings which a single architect has planned and executed, are generally more elegant and commodious than those which several have attempted to improve, by making old walls serve for purposes for which they were not originally built. Thus also, those ancient cities which, from being at first only villages, have become, in course of time, large towns, are usually but ill laid out compared with the regularity constructed towns which a professional architect has freely planned on an open plain; so that although the several buildings of the former may often equal or surpass in beauty those of the latter, yet when one observes their indiscriminate juxtaposition, there a large one and here a small, and the consequent crookedness and irregularity of the streets, one is disposed to allege that chance rather than any human will guided by reason must have led to such an arrangement. And if we consider that nevertheless there have been at all times certain officers whose duty it was to see that private buildings contributed to public ornament, the difficulty of reaching high perfection with but the materials of others to operate on, will be readily acknowledged. In the same way I fancied that those nations which, starting from a semi-barbarous state and advancing to civilization by slow degrees, have had their laws successively determined, and, as it were, forced upon them simply by experience of the hurtfulness of particular crimes and disputes, would by this process come to be possessed of less perfect institutions than those which, from the commencement of their association as communities, have followed the appointments of some wise legislator. 
It is thus quite certain that the constitution of the true religion, the ordinances of which are derived from God, must be incomparably superior to that of every other. And, to speak of human affairs, I believe that the pre-eminence of Sparta was due not to the goodness of each of its laws in particular, for many of these were very strange, and even opposed to good morals, but to the circumstance that, originated by a single individual, they all tended to a single end. In the same way I thought that the sciences contained in books (such of them at least as are made up of probable reasonings, without demonstrations), composed as they are of the opinions of many different individuals massed together, are farther removed from truth than the simple inferences which a man of good sense using his natural and unprejudiced judgment draws respecting the matters of his experience. And because we have all to pass through a state of infancy to manhood, and have been of necessity, for a length of time, governed by our desires and preceptors (whose dictates were frequently conflicting, while neither perhaps always counseled us for the best), I farther concluded that it is almost impossible that our judgments can be so correct or solid as they would have been, had our reason been mature from the moment of our birth, and had we always been guided by it alone.
It is true, however, that it is not customary to pull down all the houses of a town with the single design of rebuilding them differently, and thereby rendering the streets more handsome; but it often happens that a private individual takes down his own with the view of erecting it anew, and that people are even sometimes constrained to this when their houses are in danger of falling from age, or when the foundations are insecure. With this before me by way of example, I was persuaded that it would indeed be preposterous for a private individual to think of reforming a state by fundamentally changing it throughout, and overturning it in order to set it up amended; and the same I thought was true of any similar project for reforming the body of the sciences, or the order of teaching them established in the schools: but as for the opinions which up to that time I had embraced, I thought that I could not do better than resolve at once to sweep them wholly away, that I might afterwards be in a position to admit either others more correct, or even perhaps the same when they had undergone the scrutiny of reason. I firmly believed that in this way I should much better succeed in the conduct of my life, than if I built only upon old foundations, and leaned upon principles which, in my youth, I had taken upon trust. For although I recognized various difficulties in this undertaking, these were not, however, without remedy, nor once to be compared with such as attend the slightest reformation in public affairs. Large bodies, if once overthrown, are with great difficulty set up again, or even kept erect when once seriously shaken, and the fall of such is always disastrous. 
Then if there are any imperfections in the constitutions of states (and that many such exist the diversity of constitutions is alone sufficient to assure us), custom has without doubt materially smoothed their inconveniences, and has even managed to steer altogether clear of, or insensibly corrected a number which sagacity could not have provided against with equal effect; and, in fine, the defects are almost always more tolerable than the change necessary for their removal; in the same manner that highways which wind among mountains, by being much frequented, become gradually so smooth and commodious, that it is much better to follow them than to seek a straighter path by climbing over the tops of rocks and descending to the bottoms of precipices.
...
And finally, as it is not enough, before commencing to rebuild the house in which we live, that it be pulled down, and materials and builders provided, or that we engage in the work ourselves, according to a plan which we have beforehand carefully drawn out, but as it is likewise necessary that we be furnished with some other house in which we may live commodiously during the operations, so that I might not remain irresolute in my actions, while my reason compelled me to suspend my judgement, and that I might not be prevented from living thenceforward in the greatest possible felicity, I formed a provisory code of morals, composed of three or four maxims, with which I am desirous to make you acquainted.
The first was to obey the laws and customs of my country, adhering firmly to the faith in which, by the grace of God, I had been educated from my childhood and regulating my conduct in every other matter according to the most moderate opinions, and the farthest removed from extremes, which should happen to be adopted in practice with general consent of the most judicious of those among whom I might be living. For as I had from that time begun to hold my own opinions for nought because I wished to subject them all to examination, I was convinced that I could not do better than follow in the meantime the opinions of the most judicious; and although there are some perhaps among the Persians and Chinese as judicious as among ourselves, expediency seemed to dictate that I should regulate my practice conformably to the opinions of those with whom I should have to live; and it appeared to me that, in order to ascertain the real opinions of such, I ought rather to take cognizance of what they practised than of what they said, not only because, in the corruption of our manners, there are few disposed to speak exactly as they believe, but also because very many are not aware of what it is that they really believe; for, as the act of mind by which a thing is believed is different from that by which we know that we believe it, the one act is often found without the other. Also, amid many opinions held in equal repute, I chose always the most moderate, as much for the reason that these are always the most convenient for practice, and probably the best (for all excess is generally vicious), as that, in the event of my falling into error, I might be at less distance from the truth than if, having chosen one of the extremes, it should turn out to be the other which I ought to have adopted. 
And I placed in the class of extremes especially all promises by which somewhat of our freedom is abridged; not that I disapproved of the laws which, to provide against the instability of men of feeble resolution, when what is sought to be accomplished is some good, permit engagements by vows and contracts binding the parties to persevere in it, or even, for the security of commerce, sanction similar engagements where the purpose sought to be realized is indifferent: but because I did not find anything on earth which was wholly superior to change, and because, for myself in particular, I hoped gradually to perfect my judgments, and not to suffer them to deteriorate, I would have deemed it a grave sin against good sense, if, for the reason that I approved of something at a particular time, I therefore bound myself to hold it for good at a subsequent time, when perhaps it had ceased to be so, or I had ceased to esteem it such.
My second maxim was to be as firm and resolute in my actions as I was able, and not to adhere less steadfastly to the most doubtful opinions, when once adopted, than if they had been highly certain; imitating in this the example of travelers who, when they have lost their way in a forest, ought not to wander from side to side, far less remain in one place, but proceed constantly towards the same side in as straight a line as possible, without changing their direction for slight reasons, although perhaps it might be chance alone which at first determined the selection; for in this way, if they do not exactly reach the point they desire, they will come at least in the end to some place that will probably be preferable to the middle of a forest. In the same way, since in action it frequently happens that no delay is permissible, it is very certain that, when it is not in our power to determine what is true, we ought to act according to what is most probable; and even although we should not remark a greater probability in one opinion than in another, we ought notwithstanding to choose one or the other, and afterwards consider it, in so far as it relates to practice, as no longer dubious, but manifestly true and certain, since the reason by which our choice has been determined is itself possessed of these qualities. This principle was sufficient thenceforward to rid me of all those repentings and pangs of remorse that usually disturb the consciences of such feeble and uncertain minds as, destitute of any clear and determinate principle of choice, allow themselves one day to adopt a course of action as the best, which they abandon the next, as the opposite.

(This is probably 5% of the text. There is more interesting stuff there, but it's less relevant to this post.)

This is good, but I'd add a caveat: it works best in a situation where "normal" is obviously not catastrophic. The airplane example is central to this category. However lift works, air travel is the safest method of getting from one continent to another ever devised by humanity. If you take DMT and finally become aware of the machine elves supporting the weight of each wing, you should congratulate them on their diligence and work ethic.

The second example, morality under MWI, veers closer to the edge of "normal is obviously not catastrophic." MWI says you're causally disconnected from other branches. If your good and bad actions had morally equivalent effects, you would not anticipate different observations than you would under "normality."

As lincolnquirk pointed out, Covid and other long tail events are diametrically opposed to the "normal is obviously not catastrophic" category. Instead of the object-level belief being changed by a discussion on aerodynamic theory, it's being changed by the plane suddenly falling out of the sky, in a way that's incompatible with our previous model.

So, I'd tweak your adage: "promise yourself to keep steering the plane mostly as normal while you think about lift, as long as you're in the reference class of events where steering the plane mostly as normal is the correct action."

I'd modify that, since panic can make you falsely put yourself in weird reference classes in the short run. It's more reliable IMO to ask whether anything has shifted massively in the external world at the same time as it's shifted in your model.

How about "promise yourself to keep steering the plane mostly as normal while you think about lift, as long as the plane seems to be flying normally"?

It all adds up to normality.

That seems to me to be a superposition of two different arguments.

There's a philosophy-of-science claim that any theory that isn't obviously wrong must be compatible with all observations to date.

And there's a kind of normative claim that you shouldn't change your behaviour a lot when you switch from one ontology to another.

The sameness of predicted observations is just the sameness of predicted observations, not sameness in everything. Interpretations of quantum mechanics, to be taken seriously, must agree on the core set of observations, but they can and do vary in their ontological implications. They have to differ about something, or they wouldn't be different interpretations.

But it is entirely possible for ethics to vary with ontology. It is uncontroversial that the possibility of free will impacts ethics, at the theoretical level. Why shouldn't the possibility of many worlds?

Oh no, then there are always copies of me doing terrible things, and so none of my choices matter!

It may not be necessarily true, but it is not necessarily false. It is not absurd; it is a reasonable thing to worry about ... at the theoretical level.

But that doesn't contradict the other version of "it all adds up to normality", because that claim is a piece of practical advice. Although it seems possible for deep theoretical truths of metaphysics to impact ethics, the connection is too complex and doubtful to be allowed to affect day-to-day behaviour.

I’ve seen this advice / philosophical point a few times (and I mostly agree with it), but I don’t feel like I have a complete understanding of it. Specifically, when does this not apply?

For instance, coronavirus: to me, this doesn’t “add up to normality” and I’m trying to sort out how it’s an exception. As soon as we heard about the coronavirus, the correct action was to take prep advice seriously and go prepare; and governments moved far too slowly on updating their recommendations; etc. Life after coronavirus is super different than life before. If you were reciting “it all adds up to normality” while reading about corona, you’d probably miss some important opportunities to take quick action.

My guess is that the rule is not supposed to apply to coronavirus (perhaps it’s too object-level?) but I don’t exactly understand why not.

I think khafra and Isnasene make good points about not applying this in cases where the plane shows signs of actually dropping and you're updating on that. (In this case, the signs would be watching people you respect tell you to start prepping immediately; act on the warning lights in the cockpit rather than waiting for the engines to fail.)

The rule might fail the covid test, but still be the correct tradeoff. Also, even though the mainstream moved relatively slowly about covid, you would not reduce your risk that much by being more vigilant than them. They were still pretty fast.

I have to disagree with you there. Thanks to my friends' knowledge, I stopped my parents from taking a cross-country flight in early March, before much of the media reported that there was any real danger in doing so. You can't wave off the value of truly thinking things through.

But don't confuse "my model is changing" with "the world is changing", even when both are happening simultaneously. That's my point.

One very common pitfall here that you mention, and that is inherited from Eliezer's writings, is related to potentially infinite universes and many worlds. "But many worlds implies..." No, it doesn't. Whether some physical model of the world that the site founder believes to be the one truth someday gets experimental evidence for or against it need not affect your morality here and now. Or ever, for that matter, unless there is some day a proven way to interact with those hypothetical selves. The effects of your actions are limited to a tiny part of the observable universe, and that is only if you believe that you have free will. Which is another pitfall: "but if I don't have free will, nothing matters." Nothing objectively matters anyway; the meaning is inside the algorithm that is your mind. Hopefully that algorithm is robust enough to resist the security holes in it, called here infohazards and such.

"But many worlds implies..." No, it doesn't.

It seems implausible that a physical theory of the universe, especially one so fundamental to our understanding of matter, would have literally no practical implications. The geocentric and heliocentric models of the solar system give you the same predictions about where the stars will be in the sky, but the heliocentric model gives some important implications for the ethics of space travel. Other scientific revolutions have similarly had enormous effects on our interpretation of the world.

Can you point to why this physical dispute is different?

the heliocentric model gives some important implications for the ethics of space travel.

What are those implications? I tend to prefer dealing with applications, not implications, so not sure what you mean.

What are those implications?

Without heliocentrism (and its extension to other stars), it seems that the entire idea of going to space and colonizing the stars would not be on the table, because we wouldn't fundamentally even understand what stuff was out there. Since colonizing space is arguably the number one long-term priority for utilitarians, heliocentrism is therefore a groundbreaking theory of immense ethical importance. Without it, we would not have any desire to expand beyond the Earth.

I tend to prefer dealing with applications, not implications

Colonizing the universe is indeed an application.

Then exploring these crystal spheres without crashing into them might be a thing to do. Applications.