Bob spends 5 minutes thinking about x-risk. He's seen a few arguments about it, so he builds an internal model of the problem: he accepts some of the arguments, amends some, comes up with counterarguments to others, and adds arguments of his own. This belief state has an enormous number of degrees of freedom. At the same time, these beliefs already generate an opinion on a vast number of possible x-risk arguments.

Alice spends 5 hours detailing her beliefs about x-risk in a post. Bob reads it, point by point. He's seen some of the arguments already; he does not update. Some of the arguments are new to him, but they do not surprise him, and his current beliefs are enough to reject them; he does not update. The post offers new refutations of some of his accepted beliefs, but he can immediately come up with counter-refutations; he does not update. The post offers refutations of arguments he has already rejected, and of new arguments he would never even consider reasonable; Bob is falling asleep; he does not update. Etc.

Bob leaves a comment with one of his counterarguments to one of Alice's points. They spend hours going back and forth over the next few days, explaining their slightly different understandings of the arguments and assumptions and their slightly different usages of terms, exchanging long sequences of arguments and counterarguments. Eventually Bob agrees that Alice is right: he updates! But he only updates on that one point from his first comment. He still has many other independent arguments, so the total update to his beliefs about x-risk is tiny.


Many other people read the same post and find it extremely convincing. How did this happen?

  • Carl has not spent 5 minutes thinking about the problem. He may have no interest, or he may not have heard of it before. So what happened here is not a "disagreement" but rather "education". This is somewhat good: in the future, Alice and Carl will find it easy to work with this shared understanding and shared language. But how large is this group? And does Carl even care to work on the problem?
  • Dan values Alice enough as an authority to throw his own priors in the trash. This may be more or less justified. But a person who does not seriously consider their own beliefs will never contribute to improving them. It's hard to know whether they even understand the arguments, rather than just smiling and nodding.
  • Ed had thought about it, but he wasn't aware of one of Alice's arguments referencing some hard, relevant data. Unfortunately, such data is very scarce for abstract topics. The moral of the story might be to avoid talking about abstract topics as much as possible.


And what about Bob? Alice and Bob are both rational, reasonable people, and Bob had some interest in x-risk to begin with. But their discussion was a miserable waste of time. How did this happen?

  • Language has very low information content. Brief statements are vague and unconvincing. Precise statements take a long time to write and a long time to parse; they are narrow, and they introduce new confusions.
  • A blog post is too small to resolve a disagreement. There is no way Alice was going to predict and refute every argument in Bob's head. And it's unlikely that Bob will discover something totally unlike what he has already considered, just by looking at the fraction of Alice's beliefs that she managed to write down.
  • There is no history. Alice may tell Bob "this counterargument has been discussed before", but actually pointing to such a discussion is hard. And there is no chance that Bob will be able to ask new questions in that old thread, so there is little value in such a reference.
  • There is no cooperation. Frank feels that he 100% agrees with Alice, but is it okay for him to explain "her" views to Bob? Would she agree that he holds the same views? There is no way to know, so Frank chooses to say nothing.
  • There is no consensus. Does Bob even know whether Alice's views are mainstream common sense, or something she came up with just now? Even if the post has hundreds of karma, does that indicate anything? How much of that karma comes from Carl, Dan, or Ed? Alice doesn't know either.
     
2 comments

A lot depends on whether this is a high-bandwidth discussion/debate, or an anonymous post/read of public statements (or, on message boards, somewhere in between). In the interactive case, Alice and Bob could focus on cruxes and specific points of agreement/disagreement. In the public/semi-public case, it's rare that either side puts that much effort in.

I'll also note that a lot of topics on which such disagreements persist are massively multidimensional, with no easy way to quantify degree of closeness, so "agreement" is very hard to define. No two humans (and likely no two distinct real agents) have identical priors, so Aumann's Agreement Theorem doesn't apply - they don't HAVE to agree.
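(For readers unfamiliar with it, a rough statement of the theorem: if two agents update the same common prior $P$ on their respective private information $\mathcal{I}_1$ and $\mathcal{I}_2$, and their posterior probabilities for an event $E$ are common knowledge between them, then those posteriors must be equal:

$$
P(E \mid \mathcal{I}_1) = P(E \mid \mathcal{I}_2)
$$

The common-prior assumption is exactly the part that fails for real humans.)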

And finally, it's not clear how important the disagreements are, compared to the dimensions where the distance is small (near-agreement). Intellectuals focus on the disagreement, both because it's the interesting part and because that's where some amount of status comes from. A whole lot of these disagreements end up having zero practical impact. Though, of course, some DO matter, and which dimensions are important to agree on is a whole separate domain of disagreement...

I'm talking specifically about discussions on LW. Of course, in reality Alice ignores Bob's comment 90% of the time, and that's a problem in its own right. It would be ideal if people who have distinct information would choose to exchange that information.

I picked a specific and reasonably grounded topic, "x-risk", or "the probability that we all die in the next 10 years", which is a single number and therefore not hard to compare, unless you want to break it down by cause of death. In contrived philosophical discussions, it can certainly be hard to determine who agrees on what, but I have a hunch that this is the least of the problems in those discussions.

A lot of things have zero practical impact, and that's also a problem in its own right. It seems to me that we barely ever have discussions of the form "is working on this problem going to have practical impact?".