I used to think that acausal coordination was a weird thing that AIs might do in the future, but certainly not something they would learn from looking at human behavior. I don’t believe that anymore, and think there are lots of examples of acausal coordination in everyday life.

Examples people do

Voting

The political science argument against voting goes:

The probability that my vote tilts the election is tiny, so the expected value to me is tiny, so it’s not worth my time to vote.

And the standard rebuttal to this is:

If everyone thought that way democracy would fall apart, so I should vote.

More precisely, I expect that the people who justify voting on these grounds are doing the following reasoning:

I’m like the people who share my views in my tendency to vote. That means that there is a correlation between my decision to vote (or not) and the tendency of other people in my coalition to vote (or not). Lots of people who share my views vote, enough to make our collective vote worth my while. So I should vote, so that we all vote and we all win.

This is an example of acausal coordination! The pro-voting position amounts to reasoning about what other actors with correlated thought processes will do and picking the option which, if each actor does the same reasoning and comes to the same conclusion, leads to a better outcome.
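To make the contrast concrete, here is a toy expected-value calculation in Python (my own illustration; every number in it is a made-up placeholder, not an estimate from the post):

```python
# Toy comparison of the two ways of valuing a vote.
# All numbers are hypothetical placeholders for illustration.

p_my_vote_decisive = 1e-7   # chance my single vote tilts the election (hypothetical)
value_of_winning = 1_000.0  # dollar value I place on my side winning (hypothetical)
cost_of_voting = 20.0       # time and hassle of going to the polls (hypothetical)

# Causal (CDT-style) expected value: only my one marginal vote counts.
ev_causal = p_my_vote_decisive * value_of_winning - cost_of_voting

# Correlated-decision expected value: my choice is logically linked to
# everyone who reasons the way I do, so deciding to vote "selects" the
# world where the whole bloc turns out.
p_bloc_decisive = 0.05  # chance the correlated bloc's turnout tilts the election (hypothetical)
ev_correlated = p_bloc_decisive * value_of_winning - cost_of_voting

print(f"Causal EV of voting:     ${ev_causal:+.4f}")      # about -$20: stay home
print(f"Correlated EV of voting: ${ev_correlated:+.2f}")  # about +$30: go vote
```

The same arithmetic applies to the recycling example below: the decision only looks worthwhile once the pool of correlated decision-makers is large.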

Recycling/Carbon Footprint Reduction

The usual argument against recycling/reducing your personal carbon footprint goes:

I only have control over my own environmental impact. The effect of choosing to reduce it is tiny, so there’s no point in bearing even small costs to do so.

And the standard rebuttal is:

If everyone decided to reduce their footprint/recycle/etc. we’d have the problem(s) solved.

Again, the rebuttal is fundamentally an argument about how to acausally coordinate with other people to achieve a collective goal. Whether or not I recycle is logically connected to whether or not other people who share my reasoning recycle. There are lots of those people, which makes me want to recycle so that they recycle so that we collectively help the environment a significant amount.

Dieting

Why do people feel (psychologically) bad when they go over the limits on their diet? I don’t think it’s because they screwed up once; I think it’s because they view their commitment to a diet as a coordination effort between their present and future selves. Specifically, the reasoning goes:

I couldn’t stick to my diet this time. My ability to stick to my diet now is logically connected to the ability of future versions of me to stick to theirs, so by failing now I have failed to coordinate with future versions of myself.

The most explicit example I’ve seen of this in action is Zvi’s reasoning about diets:

For each meal I would consume, I decided what quantity was worth it and forbade myself from ever consuming more. I motivated myself to stick to that rule in the face of hyperbolic discounting by reminding myself that I would make the same decision next time that I was making now, so I was deciding what action I would always take in this situation. More generally, sticking to the rules I’d decided to follow meant I would stick to rules I’d decided to follow, which was clearly an extremely valuable asset to have on my side.

An example people don’t do: paying extra taxes

As far as I can tell, almost no one voluntarily pays extra taxes. And yet, there is an argument for doing so:

If everyone decided to pay extra taxes, the government would have more money for services/we could quickly pay down the debt/etc.

Why does voting coordination work but extra-tax-paying doesn’t? For some people it could be a general disapproval of the things tax dollars pay for, but I don’t think that’s all that’s going on here. For instance, many people support raising taxes, including on themselves, so you might at least expect those people to coordinate to pay extra taxes.

My guess is that the issue is that almost no one pays extra taxes, so there’s no step where you say “There are lots of people who might pay extra taxes, whose choice is logically connected to mine.” That means that your personal choice to pay extra taxes isn’t correlated with being in a world where many people pay extra taxes, and so you don’t see it as worthwhile.
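Plugging this into the same toy calculation as before makes the failure mode visible (again, all numbers are hypothetical placeholders):

```python
# Toy EV of voluntarily paying extra taxes when almost no one's
# decision is correlated with mine. All numbers are hypothetical.

extra_tax = 500.0                # what I would voluntarily pay (hypothetical)
value_per_public_dollar = 0.001  # how much I value each extra dollar of public funds (hypothetical)
n_correlated = 10                # people whose choice is plausibly correlated with mine (hypothetical)

collective_raised = n_correlated * extra_tax
ev_correlated = collective_raised * value_per_public_dollar - extra_tax

print(f"Correlated EV of paying extra taxes: ${ev_correlated:+.2f}")  # about -$495: not worth it
```

With such a small correlated pool, the correlated-decision answer collapses to the causal one, and the coordination never gets started.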

Virtues and Traditions

I think a lot of virtues can be recast as acausal coordination. Virtues like honor, honesty, and integrity can be seen as a recognition that my choices are correlated with yours: my choosing to be virtuous is correlated with your choosing to be virtuous, so I should choose to be virtuous to secure better outcomes for us both.

Many traditions and religious practices follow this pattern too. For instance, honoring the dead and respect for older generations are both cases of coordinating across people at different times.

 

(Thanks to Justis Mills for feedback on this post, and to Katherine McDaniel for discussions about these ideas.)

Comments (7)

Interesting take, but I’ll note that these are not acausal, just indirect-causal. Voting is a good example: counts are public, so future voters KNOW how many of their fellow citizens take it seriously enough to participate.

In all of these examples, there is a signaling path to future impact.  Which humans are perhaps over-evolved to focus on.

Hmmmm. I agree that there is a signal path to future impact (at least in voting). Two responses there:

  1. There isn't such a signal in recycling. I have no idea how much my town recycles. Ditto for carbon offsets. How many of my closest friends offset the carbon from their flights? I have no idea.
  2. Counts being public tells me how many people voted, but there's something a little funny there. There's almost no signal from my vote in there (concretely, I don't think my vote changes the number from one that tells other people "voting isn't worth it" to "voting is worth it"). I notice I'm confused how to think about this though, and maybe you can clarify/expand on your indirect signal point?

Right. Nothing that happens in the same Hubble volume can really be said to not be causally connected. Nonetheless I like the point of the OP even if it's made in an imprecise way.

The usual story of acausal coordination involves agent P modeling agent Q, and Q modeling P. Put differently, both P and Q model the joint system (P+Q) that has both P and Q in it. But it doesn't have to be (P+Q), it could be some much simpler R instead. I think a more central example of acausal coordination is to simply follow shared ideas.

The unusual character of acausal coordination is caused by cases where R is itself an agent, an adjudicator between P and Q. As a shared idea, it would have instances in minds of both P and Q, and command some influence allowed by P and Q over their actions. It would need to use some sort of functional decision theory to make sense of its situation where it controls the world through causally unrelated actions of P and Q that only have R's algorithm in common behind them.

The adjudicator R doesn't need to be anywhere as complicated as P or Q, in particular it doesn't need to know P or Q in detail. Which makes it much easier for P and Q to know R than to know each other. It's just neither P nor Q that's doing the acausal coordination, it's R instead.
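A minimal sketch of this picture (my own illustration of the comment’s P/Q/R setup, not an established formalism):

```python
# Toy model of coordination through a shared idea R.
# Neither agent models the other; each just runs the same shared rule,
# and that rule is the only thing their decisions have in common.

def shared_rule_R(situation: str) -> str:
    """The shared idea: a simple policy both agents have internalized."""
    return "cooperate" if situation == "collective_action_problem" else "act independently"

class Agent:
    def __init__(self, name: str):
        self.name = name

    def act(self, situation: str) -> str:
        # The agent defers to R rather than modeling the other agent.
        return shared_rule_R(situation)

p, q = Agent("P"), Agent("Q")

# P and Q never communicate, yet their actions match, because R's
# algorithm is instantiated in both of their minds.
print(p.act("collective_action_problem"))  # cooperate
print(q.act("collective_action_problem"))  # cooperate
```

Note that R here is far simpler than P or Q would be, matching the comment’s point that it is much easier for P and Q to know R than to know each other.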

Note that, for rational *altruists* (with nothing vastly better to do like alignment), voting can be huge on CDT grounds: if you actually do the math for a swing state, the leverage per voter is really high. In fact, I think the logically counterfactual impact-per-voter tends to be lower than the impact calculated by CDT, if the election is very close.
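A back-of-the-envelope version of that swing-state math (every number below is a made-up placeholder, not a real estimate):

```python
# Rough CDT expected value for an altruistic voter in a close swing state.
# All numbers are hypothetical placeholders.

p_decisive_swing = 1e-7       # chance one vote decides the election via a close swing state (hypothetical)
value_gap_per_person = 100.0  # per-person dollar value of the better outcome (hypothetical)
population_affected = 3e8     # people affected by the national outcome (hypothetical)

# An altruist counts the benefit to everyone, not just to themselves,
# which is what makes the per-voter leverage large.
ev_altruist = p_decisive_swing * value_gap_per_person * population_affected
print(f"CDT expected value for an altruistic swing-state voter: ${ev_altruist:,.0f}")
# about $3,000: plausibly worth the trip to the polls
```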

I like this post and agree that acausal coordination is not necessarily weird fringe behavior. But thinking about it explicitly in the context of making a decision is. In normal circumstances, we have plenty of non-acausal ways of describing what’s going on, as you discuss. The explicit consideration becomes important only outside the contexts most people act in.

 

That said, I disagree with the taxes example in particular, on the grounds that that’s not how government finances work in a world of fiat currency controlled by said government. Extra taxes paid won’t change how much gets spent or on what; they’ll just remove money from circulation, with possible downstream effects on inflation. Also, in some US states (like Massachusetts this year), where the government doesn’t control the currency, there are rules that surpluses have to be returned as tax refunds. So any extra state taxes I paid would just get redistributed across the population in proportion to income.

I like the distinction between implementing the results of acausal decision theories and explicitly performing the reasoning involved. That seems useful to have.

The taxes example I think is more complicated: at some scale I do think that governments have some responsiveness to their tax receipts (e.g. if there were a surprise doubling of tax receipts governments might well spend more). It's not a 1:1 relation, but there's definitely a connection.