It is known that, when you simulate a deterministic system described by nonlinear differential equations, small differences in the initial conditions can be exponentially amplified, resulting in huge differences in the final result.

To describe this phenomenon, Edward Lorenz famously asked whether “the flap of a butterfly’s wings in Brazil can set off a tornado in Texas”. This question, popularized by Gleick’s 1987 bestseller on chaos theory, came to mean that small events and small decisions can have huge and unpredictable consequences.

The problem with this conception is that it extrapolates, from only two data points, a correlation that (almost surely) does not exist.

Let us suppose that some aliens run a simulation of our universe, starting on 1 January with our present initial conditions x(0). This simulation could be deterministic or probabilistic, depending on your philosophical standpoint on how our universe works. The aliens simulate forward until 1 July, and observe that on 1 July there is no tornado in Texas. Then they run the simulation again, but this time they slightly modify the initial condition x(0) (a butterfly flaps its wings). This time, on 1 July there is a tornado in Texas. Does this pair of observations mean, in any meaningful way, that the butterfly caused the tornado?

To answer this question, we must run many simulations, sampling all the possible initial conditions. If our universe is not deterministic, it would also make sense to repeat the simulation many times for each initial condition. Then we could measure the correlation between the event “butterfly flaps its wings on 1 January” and the event “tornado in Texas on 1 July”. But this correlation will almost surely be 0. The more chaotic the system, the faster correlations decay with time.
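As a toy version of this experiment, here is a minimal sketch (my own construction, not from any source) that uses the chaotic logistic map as a stand-in universe. The 1e-10 perturbation standing in for the flap, and the event “x > 0.9” standing in for the tornado, are arbitrary choices; the measured correlation comes out statistically indistinguishable from zero at every horizon.

```python
import numpy as np

rng = np.random.default_rng(0)

def iterate(x, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x)."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

n, eps = 100_000, 1e-10            # number of simulated universes, flap size
x0 = rng.uniform(0.01, 0.99, n)    # sampled initial conditions
flap = rng.integers(0, 2, n)       # event A: butterfly flaps on 1 January

for steps in (5, 10, 20, 40, 80):
    xT = iterate(x0 + eps * flap, steps)
    tornado = xT > 0.9             # event B: "tornado in Texas on 1 July"
    print(f"T={steps:3d}  corr(A, B) = {np.corrcoef(flap, tornado)[0, 1]:+.4f}")
```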

There are systems (like human history) in which small decisions can have big consequences. For example, I guess that the aliens simulating our universe could detect some positive correlation between the events “Franz Ferdinand gets shot in 1914” and “New Zealand is at war in 1917”. I do not think that this correlation is very big, but it could be appreciably greater than 0. But this is because Franz Ferdinand, heir to the throne, was a very special person, whose life had big and predictable correlations with the lives of millions of other people.

If the system is predictable, it is easy to think of cases of big correlations between small decisions and big events. Pressing a small button can start a factory. But you cannot control the weather by waving at the wind. The more chaotic the system, the more exponentially unlikely it is to find big correlations.


The butterfly thing was never more than a slogan, and a misleading one.

The real story about tornadoes in six months' time is that if you compute God's Deterministic Will backwards from 1 July, then the part of the state space described by "tornado in Texas" has been smeared out and folded over so much by the time you get back to 1 January that there is no human-possible measurement that can separate that part of the state space from the rest. In particular, there will be no butterfly-flapping event to attribute the tornado to, in the sense that the tornado shows up if and only if the butterfly flaps.

Similarly, if you take the part of the state space on 1 January described by "this butterfly in Brazil flaps its wings" and project that forwards, it will be so smeared out and folded over as to overlap all the tornado and non-tornado events on 1 July.
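A minimal sketch of that forward projection (my own toy example, with the logistic map in place of real weather): a sliver of "the butterfly flapped" initial states, only 1e-8 wide, ends up smeared across essentially the whole state space, overlapping both tornado and non-tornado outcomes.

```python
import numpy as np

# A tiny ball of initial states -- "this butterfly in Brazil flaps its wings".
x = np.linspace(0.3, 0.3 + 1e-8, 10_000)

for _ in range(40):                # project forward under the chaotic map
    x = 4.0 * x * (1.0 - x)

tornado = x > 0.9                  # stand-in for "tornado in Texas on 1 July"
print(f"final states span [{x.min():.3f}, {x.max():.3f}]")
print(f"fraction ending in a 'tornado': {tornado.mean():.2f}")
```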

It especially does not mean that you can set out to produce a tornado by provoking a butterfly somewhere to flap its wings.

My take is that different systems respond to changes in initial conditions (or more practically: perturbations) differently.

Biological systems regulate themselves and reduce the impact of (sufficiently small) perturbations to zero over time.  

Normal physical systems like weather usually do not have such regulating mechanisms - at least not on the smaller scales. Here a change in initial conditions or a perturbation propagates in time and space mostly linearly. The magnitude of observed differences after time T should be proportional to T, plus a delay to reach points at distance X (the amplitude would not grow unbounded but would reach a maximum variability).

Technical systems can transport a small difference (mostly) unchanged over long distances in space or time. Think of a digital television broadcast of an event, or a videotape of it.

Cognitive systems and evolutionary ones are left as an exercise for the reader.  

Strictly speaking, for a chaotic system the magnitude of the differences should be amplified exponentially with time (with rates given by Lyapunov exponents).
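As a concrete example, the logistic map f(x) = 4x(1-x) has Lyapunov exponent ln 2 ≈ 0.693, so nearby trajectories separate roughly as 2^t. Here is a minimal sketch (my addition) estimating it as the long-run average of log|f'(x)|, with f'(x) = 4 - 8x:

```python
import math

# Estimate the Lyapunov exponent of the logistic map f(x) = 4x(1-x)
# as the long-run average of log|f'(x)|, where f'(x) = 4 - 8x.
x = 0.123456789
for _ in range(1_000):              # discard the transient
    x = 4.0 * x * (1.0 - x)

total, steps = 0.0, 100_000
for _ in range(steps):
    total += math.log(abs(4.0 - 8.0 * x))
    x = 4.0 * x * (1.0 - x)

print(f"estimate: {total / steps:.4f}   theory: ln 2 = {math.log(2):.4f}")
```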

Well, there is an interplay between different senses of "cause".

If you think about how one controls a nuclear arsenal, buttons are totally how humans "cause" things. However, if you conditioned on "button connected to radio" vs "button not connected to radio", "radio message received by officer" vs "radio message not received by officer", "officer has key" vs "officer doesn't have key", and "silo doors open" vs "silo doors don't open", the bit about pushing the button is likely to be insignificant compared to the other bits. So there probably isn't a good statistical correlation between the bare button and nuclear winter.
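A minimal sketch of this point (the marginal probabilities below are entirely made-up, for illustration only): when the launch requires a whole chain of conditions and pressing the button is the routine part, the bare button bit ends up with the weakest correlation with the outcome.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Assumed marginals (my invention): the button press is routine,
# while the rest of the chain is only rarely in place.
conditions = {
    "button pressed":   rng.random(n) < 0.9,
    "radio connected":  rng.random(n) < 0.3,
    "message received": rng.random(n) < 0.5,
    "officer has key":  rng.random(n) < 0.4,
    "silo doors open":  rng.random(n) < 0.2,
}
launch = np.logical_and.reduce(list(conditions.values()))  # all must hold

for name, bit in conditions.items():
    r = np.corrcoef(bit, launch)[0, 1]
    print(f"corr({name:16s}, launch) = {r:+.3f}")
```

With these numbers the button's correlation with the launch comes out several times smaller than, say, the silo doors'.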

Another limit case would be stock market crashes. They are not designed to crash, and mostly people don't want them to crash. Typically there is no single reason why they happen. But it would still be strange to say that they happen for no reason or that nothing caused the crash.

When you consider the button, you are likely to keep "button as part of this machinery" as the constant reference class and vary the environment. Similarly, when you are considering the butterfly, you want to consider "this butterfly" and not "any butterfly" (just as "any button" is not relevant). Part of keeping it that butterfly is keeping its environment somewhat constant: "butterfly by this lake", "butterfly now". These constraints are what provide the functional structure.

In a kind of reverse question, you could ask: given some assembly of machinery, can it be interpreted as a factory with a start button? For many human-built factories you indeed find these "linchpin" influencers. One could also be interested in "Death Star exhaust ports": points that have great influence despite not being designed to have it. And they would be "linchpin influencers" even before the exploit is found.

Imagine a hundred trillion butterflies that each flap their wings in one synchronized movement, generating a massive gust of wind strong enough to topple buildings and flatten mountains. If they were positioned correctly, they'd probably also be able to create a tornado that would not have occurred if the butterflies were not there flapping their wings, just by pushing air currents into place. Would that tornado be "caused" by the butterflies? I think most people would answer yes. If the swarm had not performed their mighty flap, the tornado would not have occurred.

Now, imagine that there's an area where the butterfly-less conditions are almost sufficient to trigger a tornado at a specified location. Again, without any butterflies, no tornado occurs. A hundred trillion butterflies would do the job, but it turns out that fifty trillion butterflies can also trigger a tornado using the same synchronized flap technique under these conditions. Then you find that a hundred butterflies would also trigger the tornado, and finally, it turns out that the system is so sensitive that a single butterfly's wing-flap would be sufficient for the weather conditions to lead to a tornado. The boolean outcome of tornado vs no tornado, in this case, is the same for a hundred trillion flaps as it is for one. So if a hundred trillion flaps could be considered to cause a tornado, why can't the one?

Of course, there is an uncountable number of things which are "causing" the tornado to occur. It would be ridiculous to say that the butterfly is solely responsible for the tornado, but the butterfly flap can be considered to be one of the initial conditions of the weather, and chaotic systems are by definition sensitive to initial conditions.

A deterministic system simulated by a perfectly accurate deterministic simulator would, given a set of inputs, produce the same outputs every time. If you change the value of a butterfly flap and the output is a tornado that otherwise would not occur, that does indeed mean that by some causal chain of events, the flap results in a tornado. A perfectly accurate deterministic simulator is, indeed, the only way a causal relationship between one event and another can be established with absolute certainty, because it is the only way to completely isolate a single variable to determine its effects on a system.

Imagine the simulations as an experiment. The hypothesis is "This specific wing-flap of a butterfly in this specific environment causes a tornado in Texas in three months." The simulator generates two simulations: one with the wing-flap and one with no wing-flap. The simulation with no wing-flap is the control simulation, and the simulation with the wing-flap is the experimental simulation. Because every single input variable other than wing-flap or no wing-flap is the same between the two simulations, and only the wing-flap simulation has the tornado, it must be that the wing-flap caused the tornado. This applies only to that specific wing-flap in that exact position and time. We cannot, for example, extrapolate that wing-flaps cause tornados in general.
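A minimal sketch of that experiment (my own toy: the logistic map plays the "perfectly accurate deterministic simulator", and the flap size and tornado threshold are arbitrary assumptions). Rerunning the control reproduces it bit for bit, so any difference between the two runs is attributable to the flap alone; whether the two runs actually land on opposite sides of the threshold depends on the particular x0 and horizon.

```python
def simulate(x0, steps=50):
    """Stand-in for the perfectly accurate deterministic simulator."""
    x = x0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)    # chaotic logistic map
    return x

x0   = 0.123456789    # every other initial condition held fixed
flap = 1e-12          # the single isolated variable

control      = simulate(x0)           # flap = 0
experimental = simulate(x0 + flap)    # flap = 1

assert simulate(x0) == control        # determinism: exact reproducibility
print("control:     ", control > 0.9)
print("experimental:", experimental > 0.9)
```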

If the universe is nondeterministic, then chaos theory doesn't apply and neither does the butterfly effect. 

Hi, I think I see what you mean. You can certainly say that the flap, as a part of the initial conditions, is part of the causes of the tornado. But this is true in the same sense in which all of the initial conditions are part of the cause of the tornado. The flap caused the tornado together with everything else: all the initial ocean temperatures, the position of the jet streams, the northern annular mode index, everything. But if everything is the cause, then "being the cause of the tornado" is a property which carries exactly 0 bits of information.

I prefer to think that an event A "caused" another event B if the probability of B, conditioned on A happening, is greater than the prior probability of B: P(B|A) > P(B).

The point is that in this scenario, the tornado does not occur unless the butterfly flaps its wings. That does not necessarily apply to "everything"; it only applies to other things which must exist for the tornado to occur.

Probability is an abstraction in a deterministic universe (and, as I said above, the butterfly effect doesn't apply to a nondeterministic universe.) The perfectly accurate deterministic simulator doesn't use probability, because in a deterministic universe there is only one possible outcome given a set of initial conditions. The simulation is essentially demonstrating "there is a set of initial conditions such that when butterfly flap = 0 there is no Texas tornado, but when butterfly flap = 1 and no other initial conditions are changed, there is a Texas tornado." 

I see, but you are talking about an extremely idiosyncratic measure (only two points) on the space of initial conditions. One could just as easily find another pair of initial conditions in which the wing flap prevents the tornado.

If there were a prediction market on tornadoes, its estimates should not change in either direction after observing the butterfly.

"there is a set of initial conditions such that when butterfly flap = 0 there is no Texas tornado, but when butterfly flap = 1 and no other initial conditions are changed, there is a Texas tornado." 

Phrased this way, it is obviously true.

However, why are you saying that chaos requires determinism? I can think of Markovian master equations with quite chaotic behavior.

Imo the idea of the butterfly effect carries (for most people) the same confusion as the idea of free will.

Causally weak events can result in disproportionately large downstream effects, but only under extremely specific conditions that happen to be unstable in just the right way. A random weak event producing an extremely strong outcome should, therefore, generally come as quite a surprise.